Thursday, September 09, 2021

Sony Announces Two 4.86μm Pixel Event-Based Sensors Developed with Prophesee

Sony announces the upcoming release of two stacked event-based vision sensors: the 0.92MP IMX636 and the 0.33MP IMX637. Designed for industrial equipment, these sensors detect only changes in the subject and achieve the industry’s smallest pixel size of 4.86μm.

These two sensors are the product of a collaboration between Sony and Prophesee, combining Sony’s CMOS sensor technology with Prophesee’s event-based vision sensing technology. As part of this collaboration, the Metavision Intelligence Suite, event signal processing software optimized for the sensors’ performance, is available from Prophesee. Combining Sony’s event-based vision sensors with this software will enable efficient application development and provide solutions for various use cases.

The new sensors employ stacking technology leveraging Sony’s proprietary Cu-Cu connection. In addition to operating with low power consumption and delivering high-speed, low-latency, high-temporal-resolution data output, the new sensors also feature a high resolution for their small size. All of these advantages combine to ensure immediate detection of moving subjects in diverse environments and situations.
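For context on what "detecting only subject changes" means at the output level: an event-based sensor emits a sparse, timestamped stream of per-pixel change events rather than full frames. The sketch below is a simplified, illustrative model of that behavior, not Sony or Prophesee code; the names are our own, and the frame-differencing loop is a simplification (a real pixel compares asynchronously against the log intensity stored at its last event).

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    polarity: bool  # True = brightness increased, False = decreased
    t_us: int       # microsecond timestamp (high temporal resolution)

def emit_events(prev_log_i, curr_log_i, t_us, contrast_threshold=0.25):
    """Fire an event wherever per-pixel log intensity changed by more than
    the contrast threshold; unchanged pixels produce no output at all."""
    events = []
    for y, row in enumerate(curr_log_i):
        for x, v in enumerate(row):
            if abs(v - prev_log_i[y][x]) > contrast_threshold:
                events.append(Event(x, y, v > prev_log_i[y][x], t_us))
    return events
```

Because static regions generate no events, the data volume scales with scene activity rather than with resolution and frame rate, which is where the low power consumption and low latency come from.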

These sensors are equipped with event filtering functions developed by Prophesee for eliminating unnecessary event data, making them ready for various applications. Using these filters helps eliminate events that are unnecessary for the recognition task at hand, such as the LED flickering that can occur at certain frequencies (anti-flicker), as well as events that are highly unlikely to be the outline of a moving subject (event filter). The filters also make it possible to adjust the volume of data when necessary to ensure it falls below the event rate that can be processed in downstream systems (event rate control).
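To make the filtering stages concrete, here is a hypothetical software model of the anti-flicker and event rate control functions described above (our own illustration, not the Metavision Intelligence Suite API; all function names and defaults are assumptions):

```python
import random

def anti_flicker(events, flicker_period_us, tol_us=200):
    """Drop events whose per-pixel inter-event interval matches a known
    flicker period (e.g. ~10,000 us for a 100 Hz LED). Events are
    time-sorted (x, y, polarity, t_us) tuples."""
    last_t = {}
    kept = []
    for x, y, p, t in events:
        prev = last_t.get((x, y))
        last_t[(x, y)] = t
        if prev is not None and abs((t - prev) - flicker_period_us) <= tol_us:
            continue  # periodic change: likely flicker, not a moving subject
        kept.append((x, y, p, t))
    return kept

def event_rate_control(events, window_us, max_events_per_s):
    """Randomly thin one time window of events so the output rate stays
    below what the downstream system can process."""
    budget = max_events_per_s * window_us / 1_000_000
    if len(events) <= budget:
        return events
    keep_prob = budget / len(events)
    return [ev for ev in events if random.random() < keep_prob]
```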

6 comments:

  1. The low latency has been obtained only with a small ROI. This is a pity with such system complexity!

  2. How do DVS sensors remove the pixel changes due to the camera moving? E.g., if it is put on a moving car, the pixels would fire all the time. Does that mean the merit of the sparse data is gone?

    Replies
    1. No. As you can see in one of the images, even in a moving-car scenario there is a lot of sparsity in the scene (e.g., the sky and road don't have much spatial contrast and don't generate changes everywhere).

    2. What is the specific application of this in autopilot when it feels like the whole scene is moving? And the contrast threshold needs to be adjusted dynamically with the scene in order to filter the information we need.

    3. With a 25% contrast threshold, you will not see much ...

  3. We measured that even in dense driving scenes, only 10% of the pixels fire events in 20ms intervals... it is quite surprising that the activity is still so sparse, but it seems to be true. In staring surveillance scenes, the sparsity can easily exceed 99% nearly all the time.

    And it is not true that a 25% threshold hides most info. You still get informative output.

    BTW, it is possible to dynamically throttle the event output, a bit like AGC for frame cameras. See https://arxiv.org/abs/2105.00409 for our experiments. It seems that refractory period control is quite a practical method for dynamically limiting the event rate.
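    A minimal sketch of what such refractory-period throttling could look like in software (our own illustration, not the implementation from the linked paper; events are assumed to be time-sorted (x, y, polarity, t_us) tuples):

```python
def refractory_filter(events, refractory_us):
    """Suppress any event arriving within refractory_us of the previous
    event at the same pixel; a longer period means a lower event rate."""
    last_t = {}
    kept = []
    for x, y, p, t in events:
        if (x, y) in last_t and t - last_t[(x, y)] < refractory_us:
            continue
        last_t[(x, y)] = t
        kept.append((x, y, p, t))
    return kept

def adapt_refractory(refractory_us, measured_rate, target_rate, gain=0.5):
    """AGC-like update: lengthen the refractory period when the measured
    event rate is above target, shorten it when below."""
    return max(1, int(refractory_us * (measured_rate / target_rate) ** gain))
```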

