Fueled by a trifecta of rapid advances in network training, big data, and ML research, so-called "Deep Learning" is rapidly becoming mainstream. Nowhere is this more true than in embedded vision applications, where the end game for Deep Learning is teaching machines to "see". The range of embedded vision applications is seemingly endless: machine vision cameras that ensure zero defects on the production line, pole-mounted "Smart City" cameras that monitor traffic and detect theft and disasters, and robots that deliver your online purchases right to your doorstep. However, CNN inference is computationally expensive, requiring billions of operations per inference. Moreover, many critical applications demand extremely low latency and must sustain high frame rates. Given these constraints, along with requirements for sub-10W power consumption, high reliability, security, and product longevity, how do we design an integrated camera that can deliver the required levels of ML inference performance?
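To make those constraints concrete, here is a hedged back-of-envelope sketch. The specific figures (a ResNet-50-class network at roughly 4 GFLOPs per inference, a 30 fps camera stream) are illustrative assumptions, not numbers from this webinar, but they show how quickly the sub-10W budget translates into a demanding efficiency target:

```python
# Back-of-envelope compute budget for an embedded CNN camera.
# ASSUMPTIONS (illustrative only): ResNet-50-class network (~4 GFLOPs
# per inference) and a 30 fps camera stream.

ops_per_inference = 4e9      # ~4 GFLOPs per frame (assumed workload)
frames_per_second = 30       # typical camera frame rate (assumed)
power_budget_w = 10          # sub-10W envelope mentioned above

required_ops_per_second = ops_per_inference * frames_per_second
required_efficiency = required_ops_per_second / power_budget_w

print(f"Sustained throughput: {required_ops_per_second / 1e9:.0f} GOPS")
print(f"Minimum efficiency:   {required_efficiency / 1e9:.0f} GOPS/W")
# → Sustained throughput: 120 GOPS
# → Minimum efficiency:   12 GOPS/W
```

Under these assumptions the camera must sustain on the order of 120 billion operations per second, every watt of which must deliver over 12 GOPS, before accounting for the image pipeline or any latency headroom.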
During this webinar we will explore this topic from a variety of perspectives: