Tuesday, July 06, 2010

Lattice and Helion Announce Full HD HDR FPGA Cores for Security Applications

Marketwire: Lattice and Helion announced IP cores for the video security and surveillance camera market. Targeting the LatticeXP2, LatticeECP2M and LatticeECP3 FPGA families, Helion has demonstrated its IONOS video pipeline IP and Vesta evaluation platform.

Helion offers a selection of video pipelines from VGA to 12MP, including 1080p60, all the way up to high-resolution advanced HDR color pipelines. Depending on the selection, each pipeline consists of a number of individual video processing IP cores, such as defective pixel correction, logic-efficient 3 x 3 De-Bayering, high quality 5 x 5 De-Bayering, color-correction matrix, gamma correction, auto-exposure, auto-white balance and more.

Working with an Aptina A-1000 image sensor, Helion IP can deliver a scene dynamic range of 120dB and a system dynamic range up to 170dB.

32 comments:

  1. What does "system" dynamic range mean? You think they use a 3T APS readout?

    ReplyDelete
  2. This claim is pure marketing and has no technical meaning. 120dB = 1:1,000,000. On a full-moon night the ground illumination is about 0.1 lux, and in full daytime sun it is about 100K lux. These guys have no idea what 170dB means !!!

    ReplyDelete
  3. If you follow the Lattice link in the post, the company even talks about "192dB (32-bit) system dynamic range". I have no idea where the 170dB number came from.

    ReplyDelete
  4. Just a few things to think about. Take an HDR sensor like the MT9M023 from Aptina. This sensor has an intra-scene dynamic range of 20 bits, or ~120dB, per color channel. The MT9M023 uses a multi-exposure approach, with a fixed number of integration times (3) and a fixed relationship between them (e.g. ~16ms = ~16x 1ms = ~16x 16x 65us). These numbers are used for dark or low-light scenes. When the scene changes, the integration times have to be shortened to avoid saturated pixels (YES, 120dB is NOT enough for all scenes in nature). Shortening these integration times (dividing them by 8.3, ~50dB, sensor dependent of course) gives a new set of integration times: ~1.92ms = ~16x 0.12ms = ~16x 16x 7.5us. By comparing two different scenes (low light vs. bright sunlight) captured with these two different integration time settings, you get the so-called system dynamic range of 170dB (brightest pixel of the bright scene divided by the darkest pixel of the low-light scene).

    ReplyDelete
  5. Nice attempt to explain 170dB. By this logic, a regular non-HDR sensor can also have 170dB of "system DR": it starts from a DR of 60dB, and then we scale the exposure from 10us to 3s to add another 110dB, right? Together this gives us 170dB.

    BTW, in your example the ratio of 8.3 is not equal to 50dB. One needs a ratio of about 300.

    ReplyDelete
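The dB arithmetic in the two comments above can be checked in a few lines, assuming the usual 20·log10 intensity-ratio convention for dynamic range (the 20-bit range and the 50dB scaling are just the figures from the discussion, not vendor data):

```python
import math

def db(ratio):
    # Dynamic range in dB for an intensity ratio (20*log10 convention)
    return 20 * math.log10(ratio)

def ratio(decibels):
    # Inverse conversion: dB back to an intensity ratio
    return 10 ** (decibels / 20)

# 20-bit intra-scene range of the multi-exposure sensor discussed above
print(db(2 ** 20))        # ~120.4 dB

# The exposure-scaling factor that actually corresponds to 50 dB
print(ratio(50))          # ~316 (not 8.3, as the follow-up comment points out)

# "System" DR: intra-scene range plus the exposure-scaling range
print(db(2 ** 20) + 50)   # ~170 dB
```

Note that this reproduces the 170dB figure only as intra-scene range plus exposure scaling, i.e. a comparison across two different scenes, which is exactly the objection raised above.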
  6. With a multi-exposure HDR sensor, the different exposures have to overlap. This means that 20 bits cannot cover a 2^20 dynamic range, because there is a more or less large overlap between the individual 10-bit dynamic ranges. The dynamic range claimed for this method always assumes the minimum overlap between exposures; to maintain an overall minimum SNR, a much larger overlap is needed, which reduces the final dynamic range. Managing this overlap is not an easy job either.

    The spec is nice on paper, but good results are not always easy to get. The claimed 170dB is pure fantasy from my point of view.

    ReplyDelete
  7. Yes, my fault. I was thinking of 2^8.3.

    You can 'produce' HDR images with a normal SLR camera just by choosing different exposure times and combining the images. The HDR communities are full of this stuff (take a look at qtpfsgui or Photomatix), and you can buy SLR cameras with this feature built in.

    The reason is quite simple, and many photographers use it. System dynamic range means you can adjust the exposure time: by combining images with different exposure times into one image, you get an image with a wider dynamic range than any of the single exposures.

    Intra-scene dynamic range means the image data you get at the end has this number of bits. We measured 'normal' scenes (an underground parking garage with a window to the outside world, illuminated by direct sunlight) at about 80-90dB. But do not forget sunlight reflections or lamps. Comparing such a scene to a night scene, you will measure even lower values, and therefore you will need more than 120dB.

    Another thing is the processing pipeline. To process this kind of data you have to avoid rounding errors, so the pipeline itself needs a higher internal dynamic range. That is why Industrial Light & Magic (Lucasfilm) developed the so-called HDR file format OpenEXR, first used for Star Wars. nVidia graphics cards also support this kind of HDR data via OpenEXR.

    ReplyDelete
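As a minimal sketch of the multi-exposure merging described above (a drastically simplified version of what tools like qtpfsgui or Photomatix do; it assumes linear, normalized pixel values and ignores camera response recovery and alignment):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed linear frames into one HDR radiance map:
    weight well-exposed pixels, divide by exposure time, and average.
    Pixel values are assumed normalized to [0, 1]."""
    images = [np.asarray(im, dtype=np.float64) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        # Hat weighting: trust mid-range pixels, distrust near-black
        # and near-saturated ones
        w = 1.0 - np.abs(2.0 * im - 1.0)
        num += w * im / t          # radiance estimate from this frame
        den += w
    return num / np.maximum(den, 1e-9)

# Two synthetic exposures of the same scene, 4x apart in exposure time
radiance = np.array([0.05, 0.2, 0.8])
short = np.clip(radiance * 1.0, 0, 1)   # t = 1
long_ = np.clip(radiance * 4.0, 0, 1)   # t = 4, brightest pixel saturates
hdr = merge_exposures([short, long_], [1.0, 4.0])
print(hdr)   # recovers the original radiance despite the saturation
```

The weighting is what handles the exposure overlap discussed in comment 6: where the long exposure saturates, only the short one contributes.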
  8. We are not talking about the same kind of HDR camera. A real HDR camera has to capture an HDR scene in one single shot. The main challenge is not really how to withstand the high-illumination side; the main challenge is how to extend the low-illumination side.
    Of course a night scene can have a huge dynamic range, but the question is: can an HDR sensor capture a scene below 0.01 lux if it has 140dB of dynamic range??
    Another benefit is fast reaction time to huge illumination changes in a mobile environment, or under pulsed light sources such as welding arcs, etc ...

    ReplyDelete
  9. The HDR/WDR problem has been solved repeatedly, many times in many ways, for CMOS since at least the mid-1990s, some 15+ years ago.
    Capturing a high dynamic range image has not been the problem since then. The problem has been the rest of the system, from sensor to display, dealing with 96-bit color instead of standard 24-bit RGB (8 bits per color plane). With 8 bits of luminance signal, the displayed image dynamic range is, more or less, 256:1.
    To me, the announcements by Lattice and Helion mean we are that much closer to HDR systems in the consumer space, and I think that is good news.
    It has been a long wait.

    ReplyDelete
  10. I'm not as certain as Eric ...

    ReplyDelete
  11. Not 15+ years ago?
    Not 256:1?
    Not good news?
    Not a long wait?

    Which of these are you uncertain about?

    ReplyDelete
  12. There are a lot of good comments here.

    Personally, I doubt that the 170 dB figure is meant to indicate an image sensor dynamic range from a full-sunlight maximum signal to the noise floor. I'd guess it reflects the precision required for some modest image processing of raw HDR images. Probably for real-time DSP using FPGAs the 170 dB figure has some merit that isn't obvious to front-end image-sensor specialists.

    ReplyDelete
  13. This comment has been removed by the author.

    ReplyDelete
  14. Dear Eric,

    What I mean by "uncertain" is that things may not be as perfect as you stated. Please don't misunderstand me...

    ReplyDelete
  15. OK. Of course HDR image sensors are not perfect, but I think they are good enough and have been for quite a while. That is what I mean about HDR imaging being a system problem, from ISP to storage format to HDR (not tone mapped) display, and not a sensor problem.

    ReplyDelete
  16. It's a system problem from the application point of view, because it depends a lot on how the HDR sensor's output is used and how the HDR image is interpreted. The problem on the sensor side is still open and cannot be considered "finished". Otherwise this kind of "ideal" HDR sensor would be used in all applications. That is not the case yet, so there must still be a lot of problems to solve. Marketing language is very often misleading ....

    ReplyDelete
  17. Having just tried an Aptina HDR sensor (the WVGA one), frankly speaking, it's ZERO !

    ReplyDelete
  18. From my point of view it's a shame that logarithmic APS are not used more often. From the HDR point of view they have the best performance, and the FPN problem is more or less solved now that memory is cheap.
    For example:
    http://www.ims-chips.de/home.php?id=a3b15c1en&adm=

    ReplyDelete
  19. Dear Anonymous,

    The classic log APS doesn't work well, and the FPN compensation is hard to make work. It's not only a problem of frame buffer memory.

    Our MAGIC pixel design gives a very good solution. You can take a look:
    http://www.youtube.com/watch?v=Kn8dxre71FI

    This USB camera uses our sensor; no image processing is needed. It's a real log sensor.

    ReplyDelete
  20. YN: The black and white video looks fine. What is the noise floor (volts rms) and max signal (volts), pixel-referred, for your photovoltaic-mode device? I am guessing 100 uV rms and 200 mV respectively under normal lighting conditions. Voc for solar cells under direct sunlight is of course higher.

    We observed imaging under this condition at JPL way back but it was not pursued because I was concerned about charging/discharging time constants (lag) as well as minority carrier injection due to the forward biased junctions.

    Why aren't you showing a color video?

    The main problem with all log sensors is that under typical flat lighting conditions (indoors, or cloudy day), the pixel-referred contrast is poor and this is where FPN and noise rears its ugly head. Comments?

    ReplyDelete
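For what it's worth, the figures guessed in the comment above imply the following pixel-referred dynamic range; keep in mind these are guesses, not measured data, and that for a log-compressed output the voltage swing understates the illumination range:

```python
import math

# Guessed figures for the photovoltaic-mode log pixel from the comment above
noise_floor_v = 100e-6    # 100 uV rms noise floor, pixel-referred
max_signal_v = 200e-3     # 200 mV max signal

# Voltage-domain dynamic range implied by those guesses
dr_db = 20 * math.log10(max_signal_v / noise_floor_v)
print(round(dr_db))       # ~66 dB of voltage swing

# For a log pixel this is NOT the illumination dynamic range: each decade of
# light maps to a roughly fixed voltage step (~60 mV/decade for an ideal
# subthreshold slope at room temperature), so a 200 mV swing can log-encode
# several decades of illumination.
```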
  21. Comments above aside, success of any HDR solution will be in the sales numbers, so I suppose we will just have to see if your HDR pixel solution is not only another solution but in fact the winning solution. Right now I would bet on Aptina from a technical point of view. Sorry.

    ReplyDelete
  22. Eric - why would you bet on Aptina and not Altasens with their new HDR approach? Obviously Aptina is interested more in the mass market than Altasens. Both rely on rolling shutter imagers, which is unfortunate.

    ReplyDelete
  23. From a technical point of view, log sensors are the best. The human eye is also a log sensor, and it has had a development time of a lot more than 20 years.
    With linear and stepwise-linear sensors you always lose ADC resolution, because in the end you have to apply a log compression (gamma). So it is best to do this compression in the analog domain.
    In the end you want constant contrast resolution. With a linear sensor, the contrast resolution rises exponentially with illumination; with a log sensor it is constant over illumination.
    For log APS that use a transistor in the subthreshold region to do the log compression, an FPN correction is necessary.
    Then you can get sensors with an intra-scene dynamic range of 170dB, good contrast resolution and low FPN.
    By the way, you may get 170dB, but what optics can deliver it?

    ReplyDelete
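The constant-contrast argument in the comment above can be illustrated numerically. This sketch assumes a 12-bit quantizer spread over a 120dB illumination range; the numbers are purely illustrative and do not describe any particular sensor:

```python
import math

bits = 12
codes = 2 ** bits
full_scale_db = 120                       # illumination range covered (assumed)
range_ratio = 10 ** (full_scale_db / 20)  # 1,000,000 : 1

def linear_contrast_step(signal):
    """One LSB as a fraction of the current signal, for a linear ADC
    spanning [1, range_ratio]: the relative step shrinks with illumination."""
    lsb = range_ratio / codes
    return lsb / signal

def log_contrast_step():
    """Per-code contrast ratio of a log encoding with the same number of
    codes: constant at every illumination level."""
    return range_ratio ** (1.0 / codes) - 1.0

# In the shadows a linear ADC cannot resolve small contrasts at all,
# while the log encoding keeps the same ~0.34% step everywhere.
print(linear_contrast_step(10.0))    # huge relative step in the shadows
print(linear_contrast_step(1e6))     # tiny relative step in the highlights
print(log_contrast_step())           # constant step, independent of level
```

This is the sense in which a log sensor "spends" its codes more evenly, and why a linear sensor needs far more ADC bits to match the shadow contrast resolution of a log one.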
  24. Don't agree with the last Anonymous... Who said that with a linear sensor you lose ADC resolution? If you have good resolution, the noise limitation is photon shot noise. And who said you need to do what the human eye does? Is the human eye able to capture 100,000 frames per second?
    I guess for automotive applications a linear characteristic would be easier to process than a log one.

    ReplyDelete
  25. I hope this blog will pay more attention to the wide dynamic range topic, since it always draws a lot of comments.

    ReplyDelete
  26. ANDY - I don't know much about the Altasens approach, but it seems to be a fused short and long exposure in each frame, which would put it in the same general category as Aptina. But from a sales volume (#'s, $) standpoint, I would say Aptina is better poised to succeed in the near term.

    ANON - The human eye response has been a great model. But due to its reliance on the iris and adaptation, both of which have long time constants, its log-linear response is not sufficient for many imaging applications today. Also, we aren't so good at perceiving things in shadows, even if the lighting is only 10x less.

    ReplyDelete
  27. Another obstacle to HDR deployment might be the industry's success in other areas.

    Here on this blog, there is frequently a flurry of follow-up comments whenever there's an announcement of a new image sensor for mobile gadgets or speculation on what image sensor is in the latest highly-anticipated mobile gadget. More obscure applications typically get no comments, unless there happens to be a marketing claim or prior comment that generates some indignation.

    To me it seems like the large volumes and profits in the mobile gadget sector have grabbed everyone's attention, and that few people in the community (if any?) are actually working on HDR-friendly, small-market, small-profit problems which require doing something clever with the data other than just displaying it.

    ReplyDelete
  28. The charm of a real logarithmic HDR sensor is that the camera design is ultimately simplified, and application development too.

    Just imagine an image sensor with:
    - no exposure control
    - no gain control

    Can you name one sensor capable of doing this??

    ReplyDelete
  29. CDM - there are people out there in niche applications that are looking at or using HDR imaging; we just can't talk a lot about our applications. But some of the applications can afford to spend more than $20 on a sensor; imagine what a camera with a $5000 high resolution HDR sensor could do...

    ReplyDelete
  30. Wow ... 5000$ sensor !!!

    ReplyDelete
  31. sCMOS sells for $40K .... A $5000 sensor is a bargain !

    ReplyDelete
  32. It's good to know that someone out there is doing some interesting HDR work.

    ReplyDelete

All comments are moderated to avoid spam.