A Fraunhofer presentation at the Vision Show held in Stuttgart, Germany, on Nov 6-8, 2018 offers a different approach to data minimization in machine vision applications. To simplify use, the Fraunhofer embedded vision sensor even offers a Python-like scripting interface:
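As a purely hypothetical illustration of the idea (the function names, thresholds, and pipeline below are assumptions for illustration, not Fraunhofer's actual API), an on-sensor script could reduce each frame to just a region of interest before any data leaves the chip:

```python
# Hypothetical sketch only: names, parameters, and behavior are assumptions,
# not Fraunhofer's published scripting interface.
import numpy as np

THRESHOLD = 40  # assumed grey-level threshold for "interesting" pixels

def minimize(frame: np.ndarray):
    """Return only the region of interest of a frame, or None if nothing is there."""
    mask = frame > THRESHOLD
    if not mask.any():
        return None                      # nothing of interest: transmit no data
    ys, xs = np.nonzero(mask)
    roi = frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Only the ROI and its position leave the sensor, not the full frame.
    return {"x": int(xs.min()), "y": int(ys.min()), "roi": roi}

# Example: a mostly dark 640x480 frame with one bright 10x20 patch
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:110, 200:220] = 200
print(minimize(frame)["roi"].shape)      # -> (10, 20)
```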
Prophesee also presented its event-driven sensor at the Vision Show:
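For context, the event-generation principle behind such sensors is that each pixel independently emits an event (x, y, t, polarity) whenever its log intensity has changed by more than a contrast threshold since its last event, so static scenes produce almost no data. Below is a crude frame-based emulation of that idea, with the threshold value as an illustrative assumption:

```python
import numpy as np

CONTRAST_THRESHOLD = 0.15   # assumed log-intensity step per event

def events_between(prev_frame: np.ndarray, frame: np.ndarray, t: float):
    """Emulate event output from two frames: one (x, y, t, polarity) tuple per
    pixel whose log intensity changed by more than the contrast threshold."""
    d = np.log1p(frame.astype(float)) - np.log1p(prev_frame.astype(float))
    ys, xs = np.nonzero(np.abs(d) > CONTRAST_THRESHOLD)
    return [(int(x), int(y), t, 1 if d[y, x] > 0 else -1) for y, x in zip(ys, xs)]
```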
Thanks to TL for the pointers!
Well, nitpicking a bit, Prophesee's timeline seems a little strange: it does not show CMOS until 2015, and then claims their focal-plane image processing chip as its successor in 2020. And as far as that goes, as I have mentioned before, "motion detection" is something that was demonstrated in the early 90's. See slide #5 of this presentation, for example: http://ericfossum.com/Presentations/Part%204%20-%20JPL%20Chips.pdf
and published in 1995 at ISSCC, as led by Bell Labs*:
Dickinson, A., et al., "A 256×256 CMOS active pixel image sensor with motion detection," 1995 IEEE International Solid-State Circuits Conference (ISSCC), Digest of Technical Papers, 1995.
and there is also a related Bell Labs patent issued in 1997, probably expired.
Surely the Prophesee and Tobi Delbruck DVS+image sensor forms of motion detection probably work better, but it is hardly a new era, especially as I have been working on focal-plane image processing since the 1980's!
We all stand on the shoulders of those who went before us, including me. But integrating focal-plane image processing has been my goal all along, for at least 35 years. So let's not claim it as a new era, at least not until volume shipments in the millions per month start to occur. Until then it is just continuous evolution down the path to smart image sensors.
*Motion detection mode was discovered at Bell Labs when they accidentally reversed the SHS and SHR timing on our joint experimental CIS chip. It was later incorporated into the next-gen JPL camera-on-a-chip as an optional timing-reversal mode. Once motion was detected, you could switch back to regular imaging mode.
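A rough, frame-based sketch of how such a motion-detection mode can be used operationally: compare successive readouts, threshold the difference, and switch back to regular imaging once motion is flagged. The threshold values below are illustrative assumptions, not the parameters of the actual chip:

```python
import numpy as np

MOTION_THRESHOLD = 25        # assumed per-pixel difference needed to count a pixel
MIN_CHANGED_PIXELS = 50      # assumed number of changed pixels needed to flag motion

def motion_detected(prev_frame: np.ndarray, frame: np.ndarray) -> bool:
    """Flag motion when enough pixels differ between consecutive readouts."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return int((diff > MOTION_THRESHOLD).sum()) >= MIN_CHANGED_PIXELS

def run(frames):
    """Stay in the low-data motion-detection mode; on motion, hand back the
    triggering frame so the system can switch to regular imaging mode."""
    frames = iter(frames)
    prev = next(frames)
    for frame in frames:
        if motion_detected(prev, frame):
            return frame
        prev = frame
    return None
```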
They claim that from 2020 "CMOS+" will take over: enhanced 3D stacking with a full solution at the edge, as they write. I cannot see where they claim that their own sensor will take over from 2020?
No doubt research has been done in the field before. It is always nice to read your anecdotes from back in the day.
Well, as you get older, your appreciation of history improves. What seemed "ancient" as a young man seems almost "recent" as you get older. And I am not that old yet!
The Prophesee of Pierre Cambou is coming!
In the previous discussion at http://image-sensors-world.blogspot.com/2018/11/event-based-vision-to-dominat-mv.html I think the impression arose that using the term 'disruptive' was an invention of the anonymous poster. But if you look at the slides above, you see that Prophesee actually uses this term to put their invention on the same level as what CCD/CIS was relative to film-based photography.
ReplyDelete"the same level" as moving to solid-state from chemistry-based imaging?
Come on, get serious and stop vaping.
It depends on what you mean by "disruption". I would invite you to check Prof. Clayton Christensen's definition. Also, "disruption of meaning" according to Prof. Roberto Verganti could apply here: events are not frames, hence you are changing the "meaning" of an image sensor.
https://en.wikipedia.org/wiki/Disruptive_innovation says "In business, a disruptive innovation is an innovation that creates a new market and value network and eventually disrupts an existing market and value network, displacing established market-leading firms, products, and alliances." I think the "and eventually disrupts" part will not happen. I think DVS will create new markets and add new sensor data, but it will not make frame-based CIS obsolete in machine vision, unlike some of the examples on the Wikipedia page (e.g. what Wikipedia did to traditional encyclopedias) or the examples in the Prophesee presentation (like what "digital photography" did to "film photography").