Thursday, April 23, 2020

Rise of Event Cameras

EETimes publishes Tobi Delbruck's (Institute of Neuroinformatics, University of Zurich and ETH Zurich) article "The Slow but Steady Rise of the Event Camera." The article points to an excellent GitHub repository of event-based camera papers maintained by the University of Zurich and ETH Zurich.

Development of the neuromorphic silicon retina, or "event camera," languished for years, only gaining industrial traction recently when Samsung and Sony applied their state-of-the-art image sensor process technologies to it.

In 2017, Samsung published an ISSCC paper on a back-illuminated VGA dynamic vision sensor (DVS) with a 9-um pixel, built in its 90-nm CIS fab. Meanwhile, Insightness announced a clever dual intensity + DVS pixel measuring a mere 7.2 um.

Both Samsung and Sony have since built DVS with pixels under 5 um using stacked technologies, in which a back-illuminated 55-nm photosensor wafer is Cu-Cu bonded to a 28-nm readout wafer.

Similar to what occurred with CMOS image sensors, event camera startups are now established, with real products to sell: Insightness (recently acquired by Sony), iniVation (carrying on the iniLabs mission), Shanghai-based CelePixel, and the well-heeled Prophesee. Others will surely follow.

I now think of DVS development as mainly an industrial enterprise, but it was the field's heavy focus on sparse computing that has led us, over the last five years, to exploit activation sparsity in hardware AI accelerators. Like the spiking networks in our brains, these AI accelerators only compute when needed. This approach, promoted for decades by neuromorphic engineers, is finally gaining traction in mainstream electronics.
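To make the idea concrete, here is a minimal Python sketch of activation sparsity (an illustrative model, not any particular accelerator's design): a dense layer that skips every multiply-accumulate for a zero activation, so the work scales with the number of "events" rather than with the input size. All names and sizes here are made up for the example.

```python
import numpy as np

def sparse_layer(weights, activations):
    """Dense layer that only does work for nonzero ("event") activations."""
    out = np.zeros(weights.shape[0])
    for j in np.flatnonzero(activations):    # skip all zero activations
        out += weights[:, j] * activations[j]
    return out

# Toy input: 1000 channels, of which only 20 carry events (nonzero values)
rng = np.random.default_rng(0)
x = np.zeros(1000)
x[rng.choice(1000, size=20, replace=False)] = rng.standard_normal(20)
W = rng.standard_normal((64, 1000))

assert np.allclose(sparse_layer(W, x), W @ x)  # same answer, ~2% of the MACs
```

The result matches the dense product exactly, but only the nonzero channels trigger arithmetic, which is the same economy a spiking or event-driven system exploits.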


I came up with the DVS pixel circuit. This pixel architecture is the foundation of all subsequent generations from all the major players (even when they don't say so on their websites).
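The circuit itself is analog, but its behavioral rule is simple to state: each pixel memorizes the log intensity at its last event and emits an ON or OFF event whenever the current log intensity moves more than a contrast threshold away from that stored value. Here is a minimal Python sketch of that per-pixel rule; the threshold and the intensity trace are illustrative numbers, not taken from any real sensor.

```python
import math

def dvs_pixel(intensities, threshold=0.2):
    """Emit (time_index, polarity) events from one pixel's intensity samples.

    An event fires whenever log intensity deviates from the value stored
    at the last event by at least `threshold` (the contrast sensitivity).
    """
    events = []
    ref = math.log(intensities[0])           # memorized log intensity
    for t in range(1, len(intensities)):
        diff = math.log(intensities[t]) - ref
        while abs(diff) >= threshold:        # a big step can emit several events
            polarity = 1 if diff > 0 else -1
            events.append((t, polarity))
            ref += polarity * threshold      # update the pixel's memory
            diff = math.log(intensities[t]) - ref
    return events

# A brightening then darkening pixel: ON (+1) events, then OFF (-1) events
print(dvs_pixel([100, 110, 125, 160, 160, 120, 90]))
# [(2, 1), (3, 1), (5, -1), (6, -1)]
```

This logarithmic, per-pixel change detection is what gives the DVS its wide dynamic range and its sparse, low-latency output.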

1 comment:

  1. All evolved animals, including humans, work with a relatively simple retina structure. Only a few animals rely on such a specialized retina; the frog is the most typical example. Saying that this kind of simple, local absolute change detection could be a generic solution for artificial vision is totally wrong, since artificial vision in many cases has to solve complex, sophisticated, and precise tasks, and an artificial machine is limited in energy and in tolerance for failure. We cannot react like a frog, which survives by eating small moving objects.

