Thursday, June 16, 2011

Sony View-DR HDR Technology

I seem to have missed this news from almost a year ago: Sony has announced View-DR HDR technology for its security cameras. A fast sensor simultaneously (!) captures four exposures that are combined into a single image.


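For readers curious how a set of exposures like this can be turned into one wide-dynamic-range frame, below is a minimal Python sketch of a generic multi-exposure merge. It only illustrates the principle, not Sony's actual View-DR processing, and it assumes a linear sensor response; the full-well value and the exposure ratios in the usage example are made up.

    # Generic multi-exposure HDR merge -- an illustration of the principle,
    # not Sony's View-DR algorithm.  Assumes a linear sensor response.
    import numpy as np

    FULL_WELL = 4095                  # assumed saturation level (12-bit raw)
    SAT_THRESHOLD = 0.95 * FULL_WELL  # treat pixels above this as clipped

    def merge_exposures(frames, exposure_times):
        """Merge frames of the same scene taken with different exposure times.

        frames         -- list of 2-D arrays of raw pixel values
        exposure_times -- matching list of exposure times
        Returns a linear HDR image referenced to the longest exposure.
        """
        # Process from the longest exposure to the shortest.
        order = sorted(range(len(frames)),
                       key=lambda i: exposure_times[i], reverse=True)
        t_ref = exposure_times[order[0]]

        hdr = frames[order[0]].astype(np.float64)
        clipped = frames[order[0]] >= SAT_THRESHOLD
        for i in order[1:]:
            gain = t_ref / exposure_times[i]
            # Wherever every longer exposure clipped, fall back to this
            # shorter exposure, scaled up by the exposure-time ratio.
            hdr[clipped] = frames[i][clipped].astype(np.float64) * gain
            clipped &= frames[i] >= SAT_THRESHOLD
        return hdr

    # Example with four hypothetical exposure times (ratios are made up):
    # hdr = merge_exposures([e0, e1, e2, e3], [16.0, 4.0, 1.0, 0.25])

As a rule of thumb, a merge like this extends the dynamic range by roughly 20*log10(R) dB, where R is the ratio of the longest to the shortest exposure time.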
There is a nice YouTube video demoing the HDR capabilities.

8 comments:

  1. The technology presented at IISW 2007 has been applied to their security cameras as View-DR.

    Y. Oike et al., "A 121.8dB Dynamic Range CMOS Image Sensor using Pixel-Variation-Free Midpoint Potential Drive and Overlapping Multiple Exposures," IISW 2007.

  2. How is this implemented at the sensor level? I assume this is a rolling-shutter CMOS sensor from Sony, so how could one "simultaneously" take images? Since the integration times are shorter for some of the "frames", they are not taken simultaneously, but perhaps within one frame time. My hunch is that this is similar to other piecewise-linear response approaches.

  3. Agreed, four different exposures within one frame time is a more accurate description, just as shown in the figure in the post.

  4. Yes. Their paper says this is one of the piecewise-linear response approaches, but the longest integration can step over the other integrations to get a longer exposure for dark areas.

  5. @"the longest integration can step over other integrations"
    How can this be done in a PPD? How can they be sure the pixel is not saturated in between? Or do they use integration time prediction methods?

  6. The paper shows lowering the potential barrier between the photodiode and the floating diffusion during the long integration in order to dump some of the charge into the latter. It looks like in low light no charge gets transferred, while under higher illumination some charge is transferred during a first barrier reduction and further charge during a second barrier reduction, with the short integration determined by measuring the difference. At the end of the long integration, whatever charge is left in the photodiode is transferred to the floating diffusion as usual. So you get something like a measurement of the total photon captures over the long integration period and a measurement of how many of that total were captured between the two barrier reductions. The pixel could saturate before, between, or after the barrier reductions, but when it saturates afterwards one could presumably just use the figure from the shorter interval.

  7. @CDM, but if the floating diffusion has large leakage, this approach will be in trouble.

  8. The electrons integrated in the short period and transferred to the floating diffusion are immediately read out to the ADCs. The paper describes each readout as the same as a normal readout with true CDS (i.e., no charge retention at the FD).
    The sensor operates at 120 fps to achieve 30 fps output with three additional short exposures.

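To make the readout scheme described in comments 4, 6, and 8 a bit more concrete, here is a rough per-pixel sketch (in Python) of how the long integration and the short measurement taken inside it could be combined. This is only an interpretation of the commenters' description; the full-well value and the long-to-short exposure ratio are assumptions for illustration, not figures from the Oike et al. paper.

    # Per-pixel combination of one long exposure and one short measurement
    # extracted from within it, following the description in the comments.
    # All constants are illustrative assumptions, not values from the paper.

    FULL_WELL = 4095        # assumed saturation level of the long-exposure reading
    EXPOSURE_RATIO = 16.0   # assumed ratio of long to short integration time

    def reconstruct_pixel(long_signal, short_signal):
        """Return a linear light-level estimate referenced to the long exposure.

        long_signal  -- charge collected over the full (long) integration
        short_signal -- charge collected between the two barrier reductions
        """
        if long_signal < FULL_WELL:
            # Dark pixel: the long integration never clipped, so use it
            # directly for the best shadow signal-to-noise ratio.
            return float(long_signal)
        # Bright pixel: the long integration clipped, so scale up the
        # short measurement by the exposure ratio instead.
        return short_signal * EXPOSURE_RATIO

The piecewise-linear response mentioned in comments 2 and 4 is essentially this kind of switch between segments with different effective exposure gains.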
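The 120 fps figure in comment 8 also follows from simple arithmetic: one long readout plus three short readouts per output frame means four readouts per frame, so the array has to run at four times the output rate. A trivial check (only the 30 fps output rate and the four readouts per frame come from the comment):

    # Internal readout rate needed for four readouts per 30 fps output frame.
    OUTPUT_FPS = 30
    READOUTS_PER_FRAME = 4          # one long + three short exposures
    internal_fps = OUTPUT_FPS * READOUTS_PER_FRAME
    print(internal_fps)             # -> 120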
