Thursday, October 23, 2014

Melexis Announces 0.005 lux Automotive Sensor

Melexis introduces its 3rd generation automotive image sensor, the MLX75421 Blackbird. The new 1.35MP 1/3-inch HDR sensor is aimed at automotive safety applications such as automatic emergency braking (AEB), electronic mirrors/camera monitor systems (CMS) and autonomous evasive steering. The device is also highly optimized for next-generation viewing applications such as surround/rear-view systems with object detection functions.

The 1344x1008-pixel MLX75421 features a 4-stabilized-kneepoint HDR response curve which provides up to 125dB of dynamic range within every frame. The HDR full-frame rate is 45fps, while 60fps can be achieved when capturing image data from 800 rows, which also makes the sensor compatible with 720p applications. The device runs fully automatically in HDR mode without the need for a companion chip.
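A multi-kneepoint HDR response of this kind can be sketched as a monotonic piecewise-linear compression curve: linear below the first knee, with a progressively smaller slope after each knee. The kneepoint positions and slopes below are illustrative assumptions only, not Melexis specifications:

```python
# Hedged sketch of a 4-kneepoint piecewise-linear HDR response curve.
# Knee positions (in electrons) and segment slopes are made-up example values.

def hdr_response(signal,
                 knees=(1e3, 1e4, 1e5, 1e6),
                 slopes=(1.0, 0.1, 0.01, 0.001, 0.0001)):
    """Compress a linear signal (electrons) through successive kneepoints.

    Below the first knee the response is linear (slope 1); each segment
    after a knee uses a smaller slope, extending dynamic range while
    keeping the output monotonic and piecewise linear.
    """
    out = 0.0
    prev = 0.0
    for knee, slope in zip(knees, slopes):
        if signal <= knee:
            return out + slope * (signal - prev)
        out += slope * (knee - prev)
        prev = knee
    # Beyond the last knee, use the final (smallest) slope.
    return out + slopes[-1] * (signal - prev)
```

For example, `hdr_response(500)` stays in the linear region, while inputs spanning six decades are compressed into a few thousand output codes, which is how a >120dB scene can fit into a normal-depth readout.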

The MLX75421 Blackbird has a dark leakage of less than 5 electrons/s at 25°C and a dark temporal noise of less than 4.4 electrons for a 1/30s exposure time. This results in an SNR1 illuminance value of 0.005 lux for light with a wavelength of 535nm over a 1/30s exposure time. The device also offers a wide automotive operating temperature range of -40°C to +115°C.
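The quoted SNR1 figure can be roughly sanity-checked from first principles. The pixel pitch, quantum efficiency, and photopic-efficiency values below are assumptions made for illustration; only the 4.4 e- noise figure comes from the article:

```python
import math

# Back-of-envelope check of SNR ~ 1 at 0.005 lux faceplate illuminance.
# PITCH, QE and V_LAMBDA are assumed values, not published Melexis numbers.

H, C = 6.626e-34, 2.998e8    # Planck constant (J*s), speed of light (m/s)
LUX = 0.005                  # faceplate illuminance from the article
WAVELENGTH = 535e-9          # wavelength from the article (m)
T_EXP = 1 / 30               # exposure time from the article (s)
V_LAMBDA = 0.91              # approx. photopic efficiency at 535 nm (assumed)
PITCH = 3.6e-6               # assumed pixel pitch for a 1/3" 1344x1008 array (m)
QE = 0.7                     # assumed quantum efficiency
READ_NOISE = 4.4             # e- rms, from the article

irradiance = LUX / (683 * V_LAMBDA)              # W/m^2 (683 lm/W at peak)
photon_energy = H * C / WAVELENGTH               # J per photon
photon_flux = irradiance / photon_energy         # photons/m^2/s
signal = photon_flux * PITCH**2 * T_EXP * QE     # collected electrons
noise = math.sqrt(signal + READ_NOISE**2)        # shot noise + read noise, e- rms
snr = signal / noise
print(f"signal ~ {signal:.1f} e-, SNR ~ {snr:.2f}")
```

With these assumed values the signal works out to a handful of electrons and the SNR lands near 1, so the 0.005 lux SNR1 claim is at least self-consistent.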

“For almost every automotive camera function, car manufacturers and their technology partners are telling us that they need markedly improved low-light performance over what is currently available from image sensor suppliers. Drivers who have earlier-generation camera systems in their cars generally want to see more at night. Also, upcoming New Car Assessment Program (NCAP) 5-star car ratings are expected to include night conditions for pedestrian and bicycle detection. That’s why we made camera sensitivity and low-noise design paramount when developing the Blackbird series. The result is best-in-class low-light and HDR behavior,” says Cliff De Locht, Product Marketing Manager at Melexis.

The MLX75421 Blackbird sensor is available in 4 versions: monochrome, RCCC, color RGBG and color RGBC. Production start is planned for early 2015.

17 comments:

  1. Any idea what RGBC pattern it uses?

    Replies
    1. RGBC is Red-Green-Blue-Clear instead of Red-Green-Blue-Green, but this does not mean that the pattern follows the common Bayer pattern. Anyway, the chip seems to be an SoC ("...HDR mode without the need for a companion chip"), which means they include the ISP for RGBG-RAW-to-YUV as well as RGBC-RAW-to-YUV??? The processing is very different in order to achieve good image quality for RGBC... I doubt that this is doable. Can anyone comment?

    2. It was already done with the previous version of the device.

  2. I think that they talk about 0.005 lux faceplate illuminance. Sony talked about 0.005 lux scene illuminance. With an F1.4 lens, the difference is 10x.

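The 10x faceplate-vs-scene figure can be checked with the standard camera equation, E_faceplate ≈ E_scene · R · T / (4N²); the scene reflectance and lens transmission values below are illustrative assumptions:

```python
# Rough check of the ~10x scene-to-faceplate illuminance attenuation
# for an F1.4 lens. Reflectance and transmission are assumed values.

def faceplate_lux(scene_lux, f_number=1.4, reflectance=0.9, transmission=0.9):
    """Approximate faceplate illuminance from scene illuminance
    via the camera equation E_fp = E_scene * R * T / (4 * N^2)."""
    return scene_lux * reflectance * transmission / (4 * f_number**2)

ratio = 1.0 / faceplate_lux(1.0)
print(f"scene-to-faceplate attenuation ~ {ratio:.1f}x")
```

With these assumptions the attenuation comes out close to 10x, consistent with the comment above.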
  3. Dark temporal noise in e-/s, and at 535 nm?
    Weird unit, and why does the wavelength matter in the dark?

    Replies
    1. It says that the SNR is dependent on wavelength. SNR = signal / (dark noise + signal noise); the dark noise is independent of wavelength but is needed to calculate the SNR.

    2. Hi Arnaud, do you know how the knee points are implemented in such a 4T-based pixel, please? I understand the method well with a 3T pixel, using the reset command level on the reset transistor, but with 4T... please help.

    3. It can be the same... or not.

      -> http://spie.org/Publications/Book/903927

    4. Hi Arnaud, without buying your book right now: is it based on partial charge spilling via an anti-blooming transistor in the pixel? Thanks!

    5. That's a question you should ask Melexis...

  4. Any idea what "output LVDS compatibility" means in this case? Usually an LVDS serializer (from TI, Apix, etc.) is needed right after the sensor... Does this mean you can implement a 2-box camera solution without a serializer because the chip already offers that? This is very strange...

    Replies
    1. Many sensors, usually high speed, have LVDS outputs.

  5. A rolling shutter sensor with multi-exposure dynamic range extension mounted on a car? I thought motion artefacts were a severe problem when imaging fast-moving objects... Or why do we need global shutter at all?

    Replies
    1. It all depends on the relative motion from frame to frame. Most in-car machine-vision applications look forward, where the relative movement is not roll, pan or tilt but only a forward-motion "zoom", and this is the least critical motion component in terms of relative motion.

      Also, HDR and relative motion are not directly related here, as this is an HDR pixel and not HDR reconstructed from multiple exposures.

    2. Arnaud, thanks for the explanation. Indeed, for front and back view there is not much roll movement. But the applications of this sensor also include object detection, e.g. pedestrians at the side of the street, so a global shutter could help. I suspect, though, that reaching this low noise level with a global-shutter pixel could be more difficult.

      As for the HDR, it is said to be a 4-knee method. They probably refer to 5 different exposures per pixel to limit the SNR dip between adjacent exposures. To achieve 125dB DR, a bright pixel has an exposure time about 1000x shorter than a dark one, with plenty of room for artefacts.

      I don't think they use the partial-reset method, as the anonymous commenter above suggested. That would kill the dark-current performance, which seems to be very good here.

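The ~1000x exposure-ratio estimate above can be sanity-checked: the ratio of longest to shortest exposure must roughly bridge the gap between the total dynamic range and what a single exposure covers. The full-well capacity below is an assumed value for a small automotive pixel; the read noise comes from the article:

```python
# Back-of-envelope check of the ~1000x exposure-ratio claim for 125 dB DR.
# FULL_WELL is an assumed figure; READ_NOISE is from the article.

TOTAL_DR_DB = 125
FULL_WELL = 8_000            # e-, assumed for a ~3.6 um pixel
READ_NOISE = 4.4             # e- rms, from the article

total_ratio = 10 ** (TOTAL_DR_DB / 20)        # ~1.8e6 : 1 overall range
single_exposure_dr = FULL_WELL / READ_NOISE   # what one exposure can span
exposure_ratio = total_ratio / single_exposure_dr
print(f"longest/shortest exposure ~ {exposure_ratio:.0f}x")
```

Depending on the assumed full well, the result lands in the several-hundred to ~1000x range, which is consistent with the comment above.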
    3. It's a tradeoff between motion blur and shutter artefacts... A rolling shutter can be more sensitive and suffer less from motion blur because of the shorter exposure.

      10 years ago the tier-1 requirements were global shutter only, but it seems that they have changed their minds.

    4. It is the 'partial reset method' that achieves the HDR, not multiple exposures.

