Saturday, February 12, 2022

Event Guided Depth Sensing

The University of Zurich and ETH Zurich publish the paper "Event Guided Depth Sensing" by Manasi Muglikar, Diederik Paul Moeys, and Davide Scaramuzza.

"Active depth sensors like structured light, lidar, and time-of-flight systems sample the depth of the entire scene uniformly at a fixed scan rate. This leads to limited spatio-temporal resolution where redundant static information is over-sampled and precious motion information might be under-sampled. In this paper, we present an efficient bio-inspired event-camera-driven depth estimation algorithm. In our approach, we dynamically illuminate areas of interest densely, depending on the scene activity detected by the event camera, and sparsely illuminate areas in the field of view with no motion. The depth estimation is achieved by an event-based structured light system consisting of a laser point projector coupled with a second event-based sensor tuned to detect the reflection of the laser from the scene. We show the feasibility of our approach in a simulated autonomous driving scenario and real indoor sequences using our prototype. We show that, in natural scenes like autonomous driving and indoor environments, moving edges correspond to less than 10% of the scene on average. Thus our setup requires the sensor to scan only 10% of the scene, which could lead to almost 90% less power consumption by the illumination source. While we present the evaluation and proof-of-concept for an event-based structured-light system, the ideas presented here are applicable for a wide range of depth-sensing modalities like LIDAR, time-of-flight, and standard stereo. Video is available at"


  1. Cool publication!
    Motion information is also directly available from indirect ToF sensors, making them an efficient option for such applications, and they can withstand intense ambient lighting conditions, unlike event imagers.

    1. I do not see how indirect ToF would provide "direct" motion information. Indirect ToF is an integrating approach: it accumulates many pulses and reads out several subframes to compute a depth image, whereas event sensing is based on a time-continuous photocurrent. Nor do I see how indirect ToF would withstand intense ambient lighting; it is widely acknowledged that it does not, and outdoor operation under full sunlight has always been a problem for ToF. Event sensors, by contrast, operate on a time-continuous photocurrent and sample only temporal changes. I am not aware of any work demonstrating limitations of event sensors under strong illumination. Quite the opposite, in fact: dynamic-range measurements in event-sensor publications show a lower cutoff where the signal is too weak to reliably generate events without excessive latency, but most people seem to struggle to measure where these devices actually clip. I would appreciate references and a more detailed explanation to support your view.


All comments are moderated to avoid spam and personal attacks.