AI Engine API User Guide (AIE) 2022.1
Changelog

Vitis 2022.1

Documentation changes

  • Small documentation fixes for operators
  • Fix documentation issues for msc_square and mmul
  • Enhance documentation for sliding_mul operations
  • Change logo in documentation
  • Add documentation for ADF stream operators

Global AIE API changes

  • Add support for emulated FP32 data types and operations on AIE-ML
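
As a rough illustration of what this enables, the sketch below (not part of the release notes) uses plain float vectors in code that can now also target AIE-ML, where FP32 arithmetic is emulated rather than native. The vector-by-scalar mul overload and the accum::to_vector conversion are assumed to be available for float in this form.

    #include <aie_api/aie.hpp>

    // Scale a float vector by a scalar. On AIE-ML the FP32 arithmetic is
    // emulated; the source code is the same as on AIE.
    aie::vector<float, 8> scale(aie::vector<float, 8> v, float k)
    {
        // mul returns an accumulator; to_vector converts back to a vector
        return aie::mul(v, k).to_vector<float>();
    }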

Changes to data types

  • unaligned_vector_iterator: add new type and helper functions (see the iterator sketch after this list)
  • random_circular_vector_iterator: add new type and helper functions
  • iterator: add linear iterator type and helper functions for scalar values
  • accum: add support for dynamic sign in to/from_vector on AIE-ML
  • accum: add implicit conversion to float on AIE-ML
  • vector: add support for dynamic sign in pack/unpack
  • vector: optimization of initialization by value on AIE-ML
  • vector: add constructor from 1024b native types on AIE-ML
  • vector: fixes and optimizations for unaligned_load/store
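
The new iterator types follow the same usage pattern as the existing vector iterators. The sketch below only uses the long-standing aie::begin_vector helper; the unaligned and random-circular iterators added in this release are assumed to expose the same dereference/increment interface (see the reference for their exact helper names).

    #include <aie_api/aie.hpp>

    // Copy n int32 elements, 8 at a time, through vector iterators.
    void copy(const int32* __restrict in, int32* __restrict out, unsigned n)
    {
        auto it_in  = aie::begin_vector<8>(in);
        auto it_out = aie::begin_vector<8>(out);

        for (unsigned i = 0; i < n / 8; ++i)
            *it_out++ = *it_in++;
    }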

Changes to operations

  • adf::buffer_port: add many wrapper iterators
  • adf::stream: annotate read/write functions with stream resource so they can be scheduled in parallel
  • adf::stream: add stream operator overloading
  • fft: performance fixes on AIE-ML
  • max/min/maxdiff: add support for bfloat16 and float on AIE-ML
  • mul/mmul: add support for bfloat16 and float on AIE-ML
  • mul/mmul: add support for dynamic sign on AIE-ML
  • parallel_lookup: expand to int16->bfloat16, performance optimizations, and a softmax kernel
  • print: add support for printing accumulators (exercised in the sketch after this list)
  • add/max/min_reduce: add support for float on AIE-ML
  • reverse: add optimized implementation on AIE-ML using matrix multiplications
  • shuffle_down_replicate: add new function
  • sliding_mul: add 32b accumulation for 8b x 8b and 16b x 16b on AIE-ML
  • transpose: add new function and implementation for AIE-ML
  • upshift/downshift: add implementation for AIE-ML
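
A minimal sketch exercising a few of the additions above on AIE-ML: float max, float add_reduce, and printing an accumulator. The exact overloads and supported type/size combinations are assumptions based on the general AIE API pattern; pointers are assumed to be suitably aligned for vector loads and stores.

    #include <aie_api/aie.hpp>

    void example(const float* __restrict a, const float* __restrict b,
                 float* __restrict max_out, float* __restrict sum_out)
    {
        aie::vector<float, 16> va = aie::load_v<16>(a);
        aie::vector<float, 16> vb = aie::load_v<16>(b);

        // max and add_reduce now support float on AIE-ML
        aie::vector<float, 16> vmax = aie::max(va, vb);
        float sum = aie::add_reduce(va);

        // mul returns an accumulator, which print can now display
        auto acc = aie::mul(va, vb);
        aie::print(acc, true, "acc: ");

        aie::store_v(max_out, vmax);
        *sum_out = sum;
    }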

Vitis 2021.2

Documentation changes

  • Fix description of sliding_mul_sym_uct
  • Make return types explicit for better documentation
  • Fix documentation for sin/cos to state that the input must be in radians
  • Add support for concepts
  • Add documentation for missing arguments and fix incorrect argument names
  • Fixes in documentation for int4/uint4 AIE-ML types
  • Add documentation for the mmul class
  • Update documentation about supported accumulator sizes
  • Update the matrix multiplication example to use the new MxKxN scheme and size_A/size_B/size_C

Global AIE API changes

  • Make all entry points always_inline
  • Add declaration macros to aie_declaration.hpp so that they can be used in headers parsed by aiecompiler

Changes to data types

  • Add support for bfloat16 data type on AIE-ML
  • Add support for cint16/cint32 data types on AIE-ML
  • Add an argument to vector::grow to specify where the input vector will be located in the output vector (see the sketch after this list)
  • Remove copy constructor so that the vector type becomes trivial
  • Remove copy constructor so that the mask type becomes trivial
  • Make all member functions in circular_index constexpr
  • Add tiled_mdspan::begin_vector_dim functions that return vector iterators
  • Initial support for sparse vectors on AIE-ML, including iterators to read from memory
  • Make vector methods always_inline
  • Make vector::push modify the object it is called on and return a reference
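
A short sketch of the vector::grow and vector::push changes listed above. The meaning of the index argument to grow (which sub-vector of the result receives the input) and the chaining enabled by push returning a reference follow the descriptions in this list; treat the details as assumptions and check the reference for the exact semantics.

    #include <aie_api/aie.hpp>

    void example(aie::vector<int16, 16> v)
    {
        // Grow a 16-element vector to 32 elements, placing the input in the
        // second half (index 1); the remaining elements are undefined.
        aie::vector<int16, 32> big = v.grow<32>(1);

        // push modifies the vector it is called on and returns a reference,
        // so calls can be chained.
        v.push(1).push(2).push(3);

        (void)big;
    }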

Changes to operations

  • add: Implementation optimization on AIE-ML
  • add_reduce: Implement on AIE-ML
  • bit_and/bit_or/bit_xor: Implement scalar x vector variants of bitwise operations
  • equal/not_equal: Fix an issue in which not all lanes were being compared for certain vector sizes
  • fft: Interface change to enhance portability across AIE/AIE-ML
  • fft: Add initial support on AIE-ML
  • fft: Add alignment checks for x86sim in FFT iterators
  • fft: Make FFT output interface uniform for radix 2 cint16 upscale version on AIE
  • filter_even/filter_odd: Functional fixes
  • filter_even/filter_odd: Performance improvement for 4b/8b/16b implementations
  • filter_even/filter_odd: Performance optimization on AIE-ML
  • filter_even/filter_odd: Do not require step argument to be a compile-time constant
  • interleave_zip/interleave_unzip: Improve performance when configuration is a run-time value
  • interleave_*: Do not require step argument to be a compile-time constant
  • load_floor_v/load_floor_bytes_v: New functions that floor the pointer to a requested boundary before performing the load.
  • load_unaligned_v/store_unaligned_v: Performance optimization on AIE-ML
  • lut/parallel_lookup/linear_approx: First implementation of look-up based linear functions on AIE-ML.
  • max_reduce/min_reduce: Add 8b implementation
  • max_reduce/min_reduce: Implement on AIE-ML
  • mmul: Implement new shapes for AIE-ML
  • mmul: Initial support for 4b multiplication
  • mmul: Add support for 80b accumulation for 16b x 32b / 32b x 16b cases
  • mmul: Change dimension names from MxNxK to MxKxN
  • mmul: Add size_A/size_B/size_C data members (used in the sketch at the end of this list)
  • mul: Optimize mul+conj operations so they are merged into a single intrinsic call on AIE-ML
  • sin/cos/sincos: Fix to avoid int -> unsigned conversions that reduce the range
  • sin/cos/sincos: Use a compile-time division to compute 1/PI
  • sin/cos/sincos: Fix floating-point range
  • sin/cos/sincos: Optimized implementation for float vector
  • shuffle_up/shuffle_down: Elements don't wrap around anymore. Instead, new elements are undefined.
  • shuffle_up_rotate/shuffle_down_rotate: New variants added for the cases in which elements need to wrap around
  • shuffle_up_replicate: Variant added which replicates the first element.
  • shuffle_up_fill: Variant added which fills new elements with elements from another vector.
  • shuffle_*: Optimization in shuffle primitives on AIE, especially for 8b/16b cases
  • sliding_mul: Fixes to handle larger Step values for cfloat variants
  • sliding_mul: Initial implementation for 16b x 16b and cint16 x cint16 on AIE-ML
  • sliding_mul: Optimize mul+conj operations so they are merged into a single intrinsic call on AIE-ML
  • sliding_mul_sym: Fixes in start computation for filters with DataStepX > 1
  • sliding_mul_sym: Add missing int32 x int16 / int16 x int32 type combinations
  • sliding_mul_sym: Fix the two-buffer sliding_mul_sym variant with acc80 accumulation
  • sliding_mul_sym: Add support for separate left/right start arguments
  • store_v: Support pointers annotated with storage attributes
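
For reference, a minimal sketch of the mmul class using the MxKxN naming and the size_A/size_B/size_C members introduced in this release: a C tile of M x N elements accumulates the product of an M x K tile of A and a K x N tile of B. The 4x4x4 shape for int16 x int16 is assumed to be among the supported shapes; the user guide's matrix multiplication example covers the full tiling scheme.

    #include <aie_api/aie.hpp>

    void matmul_4x4x4(const int16* __restrict pA, const int16* __restrict pB,
                      int16* __restrict pC)
    {
        using MMUL = aie::mmul<4, 4, 4, int16, int16>;   // M = 4, K = 4, N = 4

        // size_A/size_B/size_C give the element counts of each tile
        aie::vector<int16, MMUL::size_A> a = aie::load_v<MMUL::size_A>(pA);
        aie::vector<int16, MMUL::size_B> b = aie::load_v<MMUL::size_B>(pB);

        MMUL m;
        m.mul(a, b);   // C = A * B; additional K tiles would use m.mac(...)

        aie::store_v(pC, m.to_vector<int16>());
    }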