Tobi Delbruck, University of Zurich, Switzerland, posted a YouTube demo of his Dynamic Vision Sensor: "Conventional vision sensors see the world as a series of frames. Successive frames contain enormous amounts of redundant information, wasting memory access, RAM, disk space, energy, computational power and time. In addition, each frame imposes the same exposure time on every pixel, making it difficult to deal with scenes containing very dark and very bright regions.
The Dynamic Vision Sensor (DVS) solves these problems by using patented technology that works like your own retina. Instead of wastefully sending entire images at fixed frame rates, only the local pixel-level changes caused by movement in a scene are transmitted – at the time they occur. The result is a stream of events at microsecond time resolution, equivalent to or better than conventional high-speed vision sensors running at thousands of frames per second."
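For readers who have not seen event-based output before, here is a minimal sketch of what such a stream might look like. The real DVS uses an address-event representation (AER); the field layout and the 128x128 array size below are my own illustrative assumptions, not the sensor's actual interface:

```python
from dataclasses import dataclass

@dataclass
class DVSEvent:
    x: int             # pixel column address
    y: int             # pixel row address
    timestamp_us: int  # event time in microseconds
    polarity: bool     # True = brightness increase (ON), False = decrease (OFF)

def events_to_frame(events, width=128, height=128):
    """Accumulate a batch of events into a 2D histogram for visualization.

    ON events add +1, OFF events add -1, so static background stays at 0.
    """
    frame = [[0] * width for _ in range(height)]
    for ev in events:
        frame[ev.y][ev.x] += 1 if ev.polarity else -1
    return frame

# Example: two ON events from a moving edge, a few microseconds apart
stream = [DVSEvent(10, 20, 1_000_001, True),
          DVSEvent(11, 20, 1_000_005, True)]
print(events_to_frame(stream)[20][10:12])  # -> [1, 1]
```

The point of the representation is that static background generates no data at all; only moving edges produce events, each individually timestamped.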
Quite interesting, nice demo, Tobi. But I have two questions:
1. There just does not seem to be enough light to operate at microsecond time scales, unless you mean tens or hundreds of microseconds, or the pixel is a lot bigger than I think. What do you mean by a microsecond time scale? (A rough photon-budget estimate is sketched after question 2.)
2. I think a fairer comparison would be to a high-speed conventional sensor with ROI readout, so that there is a wide field of view (WFOV) with activity confined to a limited area. The DVS approach would probably still compare favorably, even against tracked-ROI readout. (A back-of-the-envelope data-rate comparison is sketched below as well.)
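To put question 1 in numbers, here is a rough photon-budget estimate. The illuminance at the sensor plane, pixel pitch, and quantum efficiency below are assumptions I picked for illustration (the 40 um pitch is in the ballpark of published DVS pixels), not measured DVS data:

```python
# Back-of-the-envelope photon budget for question 1 (all numbers are
# illustrative assumptions, not DVS specifications).

H = 6.626e-34          # Planck constant, J*s
C = 3.0e8              # speed of light, m/s
WAVELENGTH = 555e-9    # green light, peak of the photopic response, m
LUMENS_PER_WATT = 683  # luminous efficacy at 555 nm

def photoelectrons(illuminance_lux, pixel_pitch_m, integration_s, qe=0.5):
    """Photoelectrons collected by one pixel in the given time window."""
    photon_energy = H * C / WAVELENGTH                        # ~3.6e-19 J
    flux = illuminance_lux / LUMENS_PER_WATT / photon_energy  # photons/s/m^2
    return flux * pixel_pitch_m**2 * integration_s * qe

# Assume a generous 10 lux at the sensor plane (brightly lit scene
# through a fast lens), a 40 um pixel, and a 1 us response time:
e = photoelectrons(10, 40e-6, 1e-6)
print(f"{e:.0f} electrons in 1 us, shot-noise SNR ~ {e**0.5:.1f}")
# -> roughly 30 electrons, SNR ~ 6: marginal for detecting small
#    contrast changes, which is why "microseconds" needs qualification.
```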
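And to make question 2 concrete, here is a back-of-the-envelope readout-bandwidth comparison. The ROI size, bit depth, frame rate, event rate, and bits per event are all hypothetical numbers chosen for illustration:

```python
# Rough data-rate comparison for question 2: a high-speed frame sensor
# reading out a tracked ROI vs. an address-event stream.

def frame_roi_rate_bps(roi_w, roi_h, bits_per_pixel, fps):
    """Readout bandwidth for a conventional sensor cropped to an ROI."""
    return roi_w * roi_h * bits_per_pixel * fps

def event_rate_bps(events_per_second, bits_per_event):
    """Bandwidth of an event stream (x, y, timestamp, polarity per event)."""
    return events_per_second * bits_per_event

# A 100x100 ROI at 10 bits/pixel, 10,000 frames/s:
roi = frame_roi_rate_bps(100, 100, 10, 10_000)   # 1.0 Gbit/s
# A busy event stream: 1M events/s at ~32 bits each:
evt = event_rate_bps(1_000_000, 32)              # 32 Mbit/s
print(f"ROI readout: {roi/1e9:.1f} Gbit/s, events: {evt/1e6:.0f} Mbit/s")
```

Of course, the frame-side number depends entirely on how tight the tracked ROI can be kept, which is exactly why this comparison would be interesting to see.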