Embedded AI Processors: The Cambrian Explosion

Half a billion years ago, something remarkable occurred: an astonishing, sudden increase in new species of organisms. Paleontologists call it the Cambrian Explosion, and many of the animals on the planet today trace their lineage back to this event.


Something similar is happening today in processors for embedded vision and artificial intelligence, as the recent Embedded Vision Summit made clear. The in-person event, held last month in Santa Clara, California, focused on practical know-how for product creators incorporating AI and vision into their designs.

These products demand AI processors that balance conflicting needs for high performance, low power, and low cost. The staggering number of embedded AI chips on display at the Summit underscored the industry's response to this demand. While the number of processors targeting computer vision and machine learning (ML) is overwhelming, some natural groupings make the field easier to understand. Here are some of the themes we're seeing.

First, some processor suppliers are thinking about how best to serve applications that apply ML simultaneously to data from diverse sensor types, such as audio and video. Synaptics' Katana low-power processor, for instance, fuses inputs from a variety of sensors, including vision, sound, and environmental. Xperi's talk on smart toys for the future touched on this as well.
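To make the idea concrete, here is a minimal sketch of the late-fusion pattern such multi-sensor chips accelerate: small networks embed each modality independently, and a lightweight layer scores the combined embedding. Everything here (the stand-in feature extractors, the single linear scoring layer) is an illustrative assumption, not Synaptics' Katana API.

import numpy as np

def embed_audio(pcm: np.ndarray) -> np.ndarray:
    # Stand-in for a small audio network: average log-energy in 8 bands.
    spectrum = np.abs(np.fft.rfft(pcm))
    bands = np.array_split(spectrum, 8)
    return np.log1p(np.array([b.mean() for b in bands]))

def embed_video(frame: np.ndarray) -> np.ndarray:
    # Stand-in for a small vision network: coarse 2x4 brightness grid.
    h, w = frame.shape
    grid = frame.reshape(2, h // 2, 4, w // 4).mean(axis=(1, 3))
    return grid.ravel()

def fuse(audio_vec: np.ndarray, video_vec: np.ndarray, weights: np.ndarray) -> float:
    # Late fusion: concatenate the embeddings, apply one linear scoring layer.
    joint = np.concatenate([audio_vec, video_vec])
    return float(joint @ weights)

pcm = np.random.randn(16000)    # one second of fake 16-kHz audio
frame = np.random.rand(64, 64)  # one fake grayscale frame
w = np.random.randn(16)         # 8 audio dims + 8 video dims
print("fused score:", fuse(embed_audio(pcm), embed_video(frame), w))

In a real deployment the embeddings would come from trained networks and the fusion layer from learned weights, but the control flow is the same: each modality is reduced independently, and only the compact fused representation is scored.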

Second, a subset of processor suppliers is focused on driving power and cost down to a minimum. This is interesting because it enables new applications. For example, Cadence addressed the Summit on additions to its Tensilica processor portfolio that enable always-on AI applications. Arm presented low-power vision and ML use cases based on its Cortex-M series of processors. And Qualcomm covered tools for creating low-power computer-vision apps on its Snapdragon family.
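The design pattern underpinning most always-on parts is a cascade: a tiny detector runs continuously and wakes a much larger model only when something looks interesting. Here is a rough Python sketch of that control flow, with an illustrative energy threshold standing in for the tiny first-stage network; none of this reflects a specific Cadence, Arm, or Qualcomm API.

import numpy as np

WAKE_THRESHOLD = 0.02  # illustrative energy threshold for the tiny front stage

def tiny_always_on_stage(samples: np.ndarray) -> bool:
    # Cheap, always-running check: is there enough signal energy to care?
    return float(np.mean(samples ** 2)) > WAKE_THRESHOLD

def big_model(samples: np.ndarray) -> str:
    # Placeholder for the expensive network that normally stays powered down.
    return "keyword detected" if samples.max() > 0.5 else "no keyword"

def process(stream):
    # Only the tiny stage runs on every buffer; the big model wakes rarely,
    # which is what keeps average power in the microwatt-to-milliwatt range.
    for buffer in stream:
        if tiny_always_on_stage(buffer):
            yield big_model(buffer)

quiet = np.zeros(1024)
loud = np.random.uniform(-1, 1, 1024)
print(list(process([quiet, loud, quiet])))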

Third, although many processor suppliers are focused mainly or exclusively on ML, a few are addressing other kinds of algorithms typically used in conjunction with deep neural networks, such as classical computer vision and image processing. An example is quadric, whose new q16 processor is claimed to excel at a wide range of algorithms, including both ML and conventional computer vision.
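Mixed pipelines like the ones quadric targets typically interleave the two algorithm families: classical operators propose candidate regions cheaply, and a neural network classifies only those regions. The following sketch uses OpenCV for the classical half; classify_crop is a hypothetical stand-in for the deployed network, not anything from quadric's toolchain.

import cv2
import numpy as np

def find_candidate_regions(gray: np.ndarray) -> list:
    # Classical computer vision: blur, edge-detect, then contour extraction.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    # OpenCV 4.x return signature: (contours, hierarchy).
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]

def classify_crop(crop: np.ndarray) -> str:
    # Stand-in for the ML half of the pipeline (e.g., a small CNN).
    return "object" if crop.mean() > 127 else "background"

image = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
for (x, y, w, h) in find_candidate_regions(image):
    label = classify_crop(image[y:y + h, x:x + w])
    print(f"region at ({x},{y},{w},{h}): {label}")

A processor that handles both halves well avoids the round trips between a DSP and a separate neural accelerator that this kind of workload otherwise incurs.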

Finally, an entirely new species seems to be coming to the fore: neuromorphic processors. Neuromorphic computing refers to approaches that mimic the way the brain processes information. Biological vision systems, for example, respond to events in the field of view, whereas classical computer-vision approaches capture and process every pixel in the scene at a fixed frame rate that bears no relation to what is actually happening in front of the sensor.
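The contrast is easy to see in code. A frame-based pipeline touches every pixel of every frame; an event-based one emits work only where brightness actually changed. Below is a simplified simulation of the two regimes; the threshold and the (x, y, polarity) event format are illustrative assumptions.

import numpy as np

EVENT_THRESHOLD = 0.1  # brightness change needed to fire an event

def frame_based(frames):
    # Classical approach: every pixel of every frame is processed,
    # regardless of whether anything in the scene changed.
    return sum(f.size for f in frames)

def event_based(frames):
    # Neuromorphic-style approach: compare each frame with the last and
    # emit (x, y, polarity) events only where brightness moved enough.
    events, prev = [], frames[0]
    for f in frames[1:]:
        diff = f - prev
        ys, xs = np.nonzero(np.abs(diff) > EVENT_THRESHOLD)
        events += [(x, y, int(np.sign(diff[y, x]))) for y, x in zip(ys, xs)]
        prev = f
    return events

static = np.full((120, 160), 0.5)
moved = static.copy()
moved[60:64, 80:84] += 0.3  # a small patch brightens
frames = [static, static, moved]
print("pixels touched (frame-based):", frame_based(frames))
print("events emitted (event-based):", len(event_based(frames)))

On a mostly static scene, the event count sits orders of magnitude below the pixel count, which is where event-based sensors and processors get their power advantage.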

The Summit’s keynote talk, “Event-based Neuromorphic Perception and Computation: The Future of Sensing and AI,” presented an overview of the advantages to be gained from neuromorphic approaches. It was delivered by Ryad Benosman, professor at the University of Pittsburgh and adjunct professor at the CMU Robotics Institute. And Opteran presented its neuromorphic processing approach, inspired by insect brains, to enable vastly improved vision and autonomy.

Whatever your application is, and whatever your requirements are, somewhere out there is an embedded AI or vision processor that’s the best fit for you. This year’s Summit highlighted many of them. Check back in 10 years, when we’ll see how many of 2032’s AI processors trace their lineage to this modern-day Cambrian Explosion.

This article originally ran on sister site EE Times.

