Wednesday, February 12, 2025

Lucid Vision Labs tech report on IMX636 and IMX637

Link: https://thinklucid.com/tech-briefs/triton2-evs-explained-optimizing-event-based-imaging/

Lucid Vision Labs has published an in-depth article analyzing the bias and threshold features of Sony's IMX636 and IMX637 event-based sensors. The report details how the sensors can be configured so that users can tweak and tune the event-based data.

10 comments:

  1. What are the real applications of EVS cameras, please?

    ReplyDelete
    Replies
    1. High Temporal Resolution: Changes are recorded in microseconds, making it ideal for fast-moving objects.
      Asynchronous Output: Pixels independently report intensity changes, reducing redundant data and lowering latency.
      High Dynamic Range (HDR): They often handle extremely bright and dark areas simultaneously better than traditional cameras.
      Lower Power & Bandwidth: Sparse data means fewer resources are needed to process and transmit information.
      Reduced Motion Blur: Since data is event-driven, you don’t get the same blur seen in frame-based captures.

      Delete
    2. Those are not applications but properties or benefits of EVS

      Delete
    3. You should be able to derive applications from those properties.
      [Tracking/counting fast, small objects; vibration monitoring for maintenance; anything where you don't care about the image background but need extremely high "fps".]

      Delete
    4. Could you please give us some of what you have derived from these properties? For me, these properties are useless in most real applications, since most of the visual information is lost in this simplistic "change detection".

      Delete
  2. Is the contrast-dependent latency solved? From my experiments with a DAVIS sensor, one property was that events fired with different latency depending on contrast, so objects "blur in time": their strong edges appear sooner (still on the order of hundreds of µs). What's the event latency of these sensors?

    ReplyDelete
    Replies
    1. That's fundamental to every EVS pixel currently used - see equation 9 in https://imagesensors.org/papers/10.60928/o8kk-2mew/ which is an analytical expression of the voltage change in front of the event-discriminating comparators. Rearranging this equation for the time t it takes to produce the signal change required for the comparators to toggle will show you that this is indeed dependent on contrast.

      And that kind of makes sense. Imagine a signal that is just barely over the contrast threshold - similarly to the settling of an RC low-pass filter, it will reach the threshold only after settling is almost complete. If, however, the contrast change is significantly above the threshold, the signal change quickly exceeds what is required to trigger an event. But then you may miss that your pixel should potentially have triggered two events...

      As long as the illumination is low enough that your latency is limited not by the readout but by the pixel gm/C, this topic will remain. You can slow down your readout and get a contrast-independent latency, but then again you may miss events. You could run a pixel-level ADC at very high sampling rates and determine events digitally, which could resolve some of the issues I pointed out here, but then you burn a ton of power, and dynamic range would also become a problem. There is no free lunch, unfortunately...

      Delete
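The settling argument in the reply above can be sketched numerically. Assuming a first-order low-pass step response with amplitude equal to the log-intensity contrast step, the time for the signal to cross a fixed comparator threshold is t = -tau * ln(1 - theta/C); the time constant, threshold, and contrast values below are illustrative assumptions, not sensor specifications:

```python
import math

def event_latency(contrast, threshold, tau):
    """Time for a first-order low-pass step response of amplitude
    `contrast` to cross `threshold`:
        v(t) = contrast * (1 - exp(-t / tau))
        =>  t = -tau * ln(1 - threshold / contrast)
    Returns infinity if the step never crosses the threshold (no event).
    """
    if contrast <= threshold:
        return math.inf
    return -tau * math.log(1.0 - threshold / contrast)

tau = 100e-6    # assumed pixel bandwidth time constant, 100 us
theta = 0.15    # assumed nominal contrast threshold

# A step barely above threshold fires late; a large step fires early.
for c in (0.16, 0.30, 1.00):
    t = event_latency(c, theta, tau)
    print(f"contrast step {c:.2f}: latency {t * 1e6:.0f} us")
```

Running this shows latencies spanning roughly an order of magnitude (hundreds of µs down to tens of µs) for the same threshold, which is exactly the "strong edges appear sooner" effect described in the question.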
  3. It's quite interesting that Sony opens so many functions to users. Is this common for every EVS vendor?

    ReplyDelete
    Replies
    1. Hello,

      Yes, please also consider Prophesee documentation: https://docs.prophesee.ai/stable/hw/manuals/esp.html

      JD

      Delete
    2. This simply means that they cannot find compelling applications and need the machine vision community to explore this device.

      Delete

All comments are moderated to avoid spam and personal attacks.