Thursday, December 14, 2023

Lecture by Dr. Tobi Delbruck on the history of the silicon retina and event cameras

Silicon Retina: History, Live Demo, and Whiteboard Pixel Design

Rockwood Memorial Lecture 2023: Tobi Delbruck, Institute of Neuroinformatics, UZH-ETH Zürich

Event Camera Silicon Retina: History, Live Demo, and Whiteboard Circuit Design
Rockwood Memorial Lecture 2023 (11/20/23)
https://inc.ucsd.edu/events/rockwood/
Hosted by: Terry Sejnowski, Ph.D. and Gert Cauwenberghs, Ph.D.
Organized by: Institute for Neural Computation, https://inc.ucsd.edu

Abstract: Event cameras electronically model the spike-based sparse output of biological eyes to reduce latency, increase dynamic range, and sparsify activity in comparison to conventional imagers. Driven by the need for more efficient battery-powered, always-on machine vision in future wearables, event cameras have emerged as a next step in the continued evolution of electronic vision. This lecture has three parts: (1) a brief history of silicon retina development, starting from Fukushima’s Neocognitron and Mahowald and Mead’s earliest spatial retinas; (2) a live demo of a contemporary frame-event DAVIS camera that includes an inertial measurement unit (IMU) vestibular system; (3) (targeted at the neuromorphic analog circuit design students in the BENG 216 class) a whiteboard discussion of event camera pixel design at the transistor level, highlighting the design aspects that endow event camera pixels with fast response even under low lighting, precise threshold matching even under large transistor mismatch, and a temperature-independent event threshold.
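The operating principle behind these properties is simple to state: each pixel continuously compares the logarithm of its photocurrent against a memorized level and emits an ON or OFF event whenever the difference exceeds a contrast threshold, which is what makes the output sparse and the dynamic range wide. As a rough, idealized sketch of that principle (not the transistor-level circuit discussed in the lecture), here is a minimal Python model; the function name, sample data, and the 0.2 threshold are illustrative assumptions.

```python
import numpy as np

# Minimal, idealized DVS event-generation model (illustrative sketch,
# not the lecture's actual pixel circuit).
def dvs_events(intensities, times, threshold=0.2):
    """Emit ON/OFF events whenever the log intensity moves more than
    `threshold` away from the level memorized at the last event."""
    events = []                      # (timestamp, polarity) pairs
    log_i = np.log(intensities)      # logarithmic photoreceptor front end
    ref = log_i[0]                   # memorized log-intensity level
    for t, l in zip(times, log_i):
        while l - ref > threshold:   # brightness rose: ON event
            ref += threshold
            events.append((t, +1))
        while ref - l > threshold:   # brightness fell: OFF event
            ref -= threshold
            events.append((t, -1))
    return events

# Example: constant illumination yields no events (sparse output);
# a 3x brightness step at t = 0.5 s yields a burst of about
# log(3)/0.2 ~ 5 ON events, independent of the absolute light level.
t = np.linspace(0.0, 1.0, 1000)
i = np.where(t < 0.5, 1.0, 3.0)
print(dvs_events(i, t))
```

In this idealized model a constant input produces no events at all, while a brightness step produces a short burst whose length depends only on the log-intensity ratio, not on the absolute light level.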

17 comments:

  1. Are there scientific-grade event cameras?

  2. Yes, at around 45:30 he talks about SciDVS.

  3. What are the principal applications for which event sensors are a must-have? Thanks!

    Replies
    1. You might want to watch the lecture to answer that!

  4. I mean the real commercial applications ...

    Replies
    1. I think the car industry is the main driver, maybe the military a little bit. Also anyone who has very high dynamic range scenes, needs low power consumption, or wants automatic movement detection without complex programming.

    2. The ability to stream visual information at millisecond increments and at very low power opens many potential applications in which frame-based approaches are limited. Blur correction is an immediate application for mobile cameras; high-speed imaging in automation is another. In the future, event cameras may be the long-awaited breakthrough that lets AR/VR interface correctly with human eyes and the outside world. The same could be said in the context of automotive and autonomous driving.

  5. Wonderful to see the link to the Tobi Delbruck lecture here. I would additionally suggest "The Silicon Eye" by George F. Gilder, a well-written book that gives great insight into the development of neuromorphic imagers and the people connected to it. It has always been marvelous how early on the key components were invented and how long it takes for mainstream products to develop from those inventions.

  6. Maybe the MP won't come until spectacles finally come to MP... we'll see. But an obvious application is visual prosthetics for the blind, which need low latency, HDR, and low power consumption, along with a sparse output that can drive inference hardware able to exploit that sparsity, using a form of data that is closer to what the brain already uses.

    Replies
    1. But the main problems with eye implants are not the sensors; rather, the cost and the connection to the neurons.

  7. I want Eric to comment on what I said about his definition of the "perfect image sensor" around slide 4... what do you say, Eric?

    Replies
    1. hi Tobi, I think the main thing I would like to say is that I think it is wonderful that after all these years working on this technology, it is going niche-mainstream. Congratulations! Regarding off-hand silly comments on a "perfect image sensor," we can add zero power and zero latency to the list. I appreciate just being mentioned in your talk! Take care Tobi. Looking forward to our next meeting. We missed you in Scotland.

  8. At 50:00, the comparator offset is canceled with a preamp. But why not use the auto-zero method?

    Replies
    1. This should be for reducing kickback.

  9. Can this technology complement ToF sensors such as SPADs? I think that if algorithms allow low-latency but more reliable object identification, it will be a good application. Does anyone have a reference to such applications today? Any products on the market, maybe?

    Replies
    1. There already was a paper where an EVS sensor determined an ROI to guide a scanning Lidar (https://ieeexplore.ieee.org/document/9665844). That Lidar, of course, can be SPAD-based. Some SPAD sensors are actually read out event-based, but that's very different. So, yes: event sensors can complement ToF sensors, and they can inspire ToF sensor designs. But they can't replace ToF sensors (SPADs have ns response, sub-ns jitter, and single-electron sensitivity; in comparison, event-based vision sensors have µs-ms latency, but they can be very low-power).

      Key players are iniVation, Prophesee, Samsung, OMNIVISION, and Sony. There is also a new player, Ruisi Zhixin (AlpsenTek), but they haven't published anything peer-reviewed yet (to my knowledge). You can buy cameras and dev kits from iniVation and Prophesee. Samsung was the first to target a mass product (a security camera). OMNIVISION, Sony, and AlpsenTek appear targeted at mobile phones. And Sony, Prophesee, and iniVation have made announcements/publications directed at AR/VR.

