Wednesday, February 19, 2020

Prophesee and Sony Develop a Stacked Event-Based Vision Sensor with the Industry’s Smallest Pixels and Highest HDR Performance

Prophesee S.A. and Sony announce they have jointly developed a stacked Event-based vision sensor with the industry’s smallest pixel size of 4.86μm and the industry’s highest HDR performance of 124dB (or more). The sensor combines Sony’s stacked CMOS image sensor process with Cu-Cu connections and Prophesee’s Metavision Event-based vision technology, delivering fast pixel response, high temporal resolution, and high-throughput data readout. The newly developed sensor is suitable for various machine vision applications, such as detecting fast-moving objects in a wide range of environments and conditions.

The 1/2-inch format sensor has 1280x720 resolution and uses a 40nm logic process. The 124dB (or more) HDR performance is made possible by placing only the back-illuminated (BSI) pixels and a part of the N-type MOS transistors on the pixel chip (top), thereby achieving an aperture ratio of up to 77%. High-sensitivity, low-noise technologies Sony has developed over many years of CMOS image sensor work enable event detection in low-light conditions (40mlx).

By attaching time information with 1μs precision to each pixel address where a change in luminance has occurred, event data readout with high temporal resolution is ensured. Furthermore, a high output event rate of 1.066Geps has been achieved by efficiently compressing the event data, i.e. the luminance change polarity, time, and x/y coordinate information for each event.
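The event stream described above can be sketched as a simple data structure. The field widths below (11 bits for the x address of 1280 columns, 10 bits for the y address of 720 rows, 1 bit for polarity, plus a microsecond timestamp) are illustrative assumptions for a 1280x720 array, not the sensor's actual on-wire format, which the announcement does not disclose.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # column address, 0..1279 (fits in 11 bits)
    y: int         # row address, 0..719 (fits in 10 bits)
    polarity: int  # 1 = luminance increase, 0 = decrease
    t_us: int      # timestamp in microseconds (1 us precision)

def pack_event(e: Event) -> int:
    """Pack one event into a single word: [t_us | x:11 | y:10 | p:1].
    Hypothetical layout for illustration only."""
    return (e.t_us << 22) | (e.x << 11) | (e.y << 1) | e.polarity

def unpack_event(word: int) -> Event:
    """Recover the event fields from the packed word."""
    return Event(
        x=(word >> 11) & 0x7FF,
        y=(word >> 1) & 0x3FF,
        polarity=word & 0x1,
        t_us=word >> 22,
    )

# Round-trip check on a sample event
ev = Event(x=640, y=360, polarity=1, t_us=123456)
assert unpack_event(pack_event(ev)) == ev
```

Real event-based sensors typically go further than this fixed-width packing, e.g. sharing a timestamp base across a burst of events or grouping events by row, which is where the compression mentioned above comes from.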


  1. Didn't Sony recently acquire Insightness?

  2. A company in Shanghai, China already produced this kind of sensor 3 years ago

  3. I guess you're referring to Celepixel
    Indeed, they have a 1MP array, but the pixel pitch is 9.8µm and the image sensor format is 1", so Prophesee and Sony are actually announcing here a sensor with a pitch that is half of what Celepixel is offering.

    1. What is the benefit of having a smaller pixel? Isn't sensitivity important for event-driven sensors?

    2. Sensitivity does not change if the photodiode area remains the same. For example, they can reduce the pixel pitch by just stacking the photodiode and circuitry, still keeping the same sensitivity.

      Stacking could introduce extra parasitic capacitance, which could impact the capacitively coupled detection input cell (depending on the minimum intensity variation it is supposed to detect).


  4. When I was a PhD student, I often talked with my professor about the one-pixel-one-processor concept. I think modern technologies now make this kind of thing possible. One-pixel-one-processor gives much more flexibility for vision tasks, including simple contrast-based differentiation.

  5. Since it is based on events, is it actually limited to stationary applications? In the figure of cars, most of the pixels are active. In such a scenario, is a conventional sensor still the better choice?



All comments are moderated to avoid spam and personal attacks.