There is only one image sensor among EDN's 2009 Innovation Award finalists: Aptina's MT9M033. The 1.2MP WDR sensor delivers 720p/60fps video and has 120dB pixel dynamic range. Its pixel size is 3.75um. More information is at the Aptina site.
The Innovation Award winners will be announced on April 26.
OK, can't seem to embed a link properly... Let's try again.
Everyone deserves a peak professional year. This one must be Johannes Solhusvik's year.
2009 IISW Tech Program Chair, 2010 ISSCC, and this technical work. Congratulations JS!
I believe this 2009 IISW paper goes along with the MT9M033:
http://www.imagesensors.org/Past%20Workshops/2009%20Workshop/2009%20Papers/081_Solhusvik_HDR_DCG_final.pdf
Many thanks, Eric! Much appreciated!
Any idea about the working principle of this WDR sensor?
Read the paper that Johannes presented. It's in the link that Eric posted.
It uses multi-exposure image capture (3 exposures in this case), where each exposure is integrated for a different time. The 3 exposures are later combined to form one HDR image (20-bit depth according to the paper).
I would assume that post-processing tone mapping would need to be performed to make it display properly on LCDs... unless we start getting HDR displays.
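Since tone mapping keeps coming up: a minimal sketch of what that step could look like, assuming a 20-bit linear HDR input and a simple global log curve (this is my own toy example, not Aptina's or anyone else's actual pipeline):

import numpy as np

def tone_map_20bit_to_8bit(hdr):
    """Compress a 20-bit linear HDR image into 8 bits with a global log curve."""
    x = np.asarray(hdr, dtype=np.float64) / (2**20 - 1)   # normalize linear data to [0, 1]
    y = np.log2(1.0 + 1023.0 * x) / 10.0                  # log compression; log2(1024) = 10
    return np.round(255.0 * y).astype(np.uint8)           # quantize for an 8-bit LCD

A real camera would use something smarter (and usually local), but the point stands: the wide linear range has to be squeezed into the display's 8 bits somewhere downstream.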
If this sensor can really produce >110dB (and I believe that this is just preliminary data that can actually be improved... let us know if it is), then I would definitely vote for this as a winner in EDN!
Best of luck Johannes and Aptina! Great Job!
How does it compare with the Pixim solution?
And also with the TI solution?
First of all, Pixim is a very expensive part for no clear advantage (in my opinion). You have to be targeting an extremely high-end system for that to be feasible. Part of the reason is that Pixim has a per-pixel ADC. That must make it expensive... and give it a large die size.
You would get better FPN performance with them, because you don't get the column artifacts that would likely be there in a column-ADC architecture.
But then, Aptina's sensor has a sub-2e- rms noise floor. That has to be impressive (this allows you to use a cheap lens and still get good readout at a decent frame rate). Pixim's solutions are VGA-like resolutions (slightly larger), while Aptina's is >1Mpixel. And both are 1/3" format!
I would be slightly concerned about motion-blurring artifacts with Aptina's multi-capture solution. But it depends on how the pixels were combined. Any comments, Johannes? :)
At the end of the day, both the Pixim and the Aptina sensors get you 120dB. So why go for the expensive one unless there's a clear advantage?
TI has an HDR image sensor solution??
I'll also add that Pixim has 7x7um pixels and gains no advantage in dynamic range over Aptina's 3.75x3.75um pixels.
A larger pixel means lower MTF (sharpness), so you have to offset it in some other way... maybe sensitivity or HDR? Which is not the case with Pixim.
Don't forget the new logarithmic sensor from NIT. The image quality shown during Vision Stuttgart by both IDS and NIT was impressive. No image processing is needed at all in their approach.
I have not seen the NIT log sensor, but generally all logarithmic sensors work well under high-contrast conditions yet give a really poor image under flat lighting (e.g. outdoors on a cloudy day). If you capture the signal linearly, it is always easy to convert to a log image or any other transformation. If you capture it logarithmically, stretching the contrast often reveals a lot of artifacts. Perhaps NIT has solved these traditional practical problems.
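A toy illustration of that asymmetry (my own example, assuming 12-bit linear samples; nothing to do with NIT's actual design): converting a linear capture to a log representation is a one-line remapping, while stretching a coarse log capture back to linear is where artifacts tend to appear.

import numpy as np

def linear_to_log(linear_12bit, out_bits=8):
    """Re-encode 12-bit linear samples on a log scale (the easy direction)."""
    x = np.asarray(linear_12bit, dtype=np.float64)
    y = np.log2(1.0 + x) / np.log2(4096.0)                 # map 0..4095 onto [0, 1] logarithmically
    return np.round(y * (2**out_bits - 1)).astype(np.uint16)

Going the other way, an 8-bit log capture stretched back to linear leaves large quantization steps in the shadows, which is presumably where the flat-lighting artifacts mentioned above come from.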
Also, the customer does not care how much internal processing takes place, as long as the system cost and imaging performance are what they need.
We tested a Pixim camera one year ago and found that the multi-exposure method has some difficulties in low-light conditions, where the whole frame time has to be allocated to the dark scene. In this case, the effective dynamic range is very small!
Another problem is that fast illumination changes cannot be accommodated correctly with the Pixim camera.
For example, a human face cannot be correctly captured when it appears suddenly in a strongly backlit scene.
The multi-exposure method works well in a static scene, where you have plenty of time to capture several frames for dynamic range extension. This is used in all DSP surveillance cameras.
Hi Anonymous,
If your whole frame is in low-light conditions, then that's not much of a high-dynamic-range scene, is it? You cannot expect the sensor to give you a higher dynamic range than your scene has.
The other thing to note is that "standard" multi-exposure image capture has been out there for a while: you capture sequential images with any camera and combine them downstream in the DSP, or even in Photoshop later, to get a single HDR image.
The unique thing about the Aptina solution as described in Johannes' paper is that:
1) You don't have fully sequential image capture. Instead, you're almost "interlacing" the T1, T2, T3 exposures in ERS fashion, which helps eliminate the need for a fully static, non-moving scene. The paper shows that you have 3 read/reset pointers working in rolling-shutter fashion.
2) The sensor itself does the pixel combination and outputs a full 20-bit image, so there is no need for post-processing to combine the images (although you still need post-processing for tone mapping to make it look decent on an 8-bit display... but that's true of all HDR sensors). The key is that all pixel combination is done in-sensor; a rough sketch of what that combination amounts to follows below.
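To make point 2 concrete, here is a toy version of that kind of linear exposure combination. It is my own simplified sketch, not the circuit or algorithm from Johannes' paper; it assumes 12-bit samples and a 16x ratio between successive exposures, and simply keeps the longest unsaturated sample, rescaled to a common linear scale.

import numpy as np

def combine_exposures(t1, t2, t3, ratio=16, sat=4000):
    """Toy WDR combination of three 12-bit exposures (T1 longest, T3 shortest):
    keep the longest exposure that is not clipped and rescale it by its exposure
    ratio so every pixel ends up on the same linear scale."""
    t1, t2, t3 = (np.asarray(t, dtype=np.int64) for t in (t1, t2, t3))
    return np.where(t1 < sat, t1,                       # T1 usable: best SNR
           np.where(t2 < sat, t2 * ratio,               # T1 clipped: fall back to T2
                              t3 * ratio * ratio))      # T2 clipped too: use T3

With 12-bit samples and two 16x steps, the combined output spans roughly 12 + log2(16*16) = 20 bits, which lines up with the 20-bit figure quoted from the paper.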
The key thing to remember is that not all multi-exposure sensors are the same. The reason I would guess Aptina's sensor is on the EDN finalist list is the innovative way they've done multi-exposure HDR, not merely the fact that they have an HDR sensor.
Dear WN,
You are wrong; the highest dynamic range scenes occur during night time.
The Aptina sensor can do 720p at 60fps with full 120dB HDR?
I need to see this in real life to believe it. If it is true, kudos to the Aptina team!
Is there any hardware ISP that can actually process 20-bit HDR data? I hear Altera is working on an HDR pipe. Has anyone heard any updates?
I agree that at night you will get HDR scenes (lamps, shiny objects, car headlights, etc.).
But I was just commenting on the statement: "the whole frame time has to be allocated to the dark scene".
If all pixels are dark enough to belong to one of the exposures (I presume the longest exposure), then the dynamic range you get is that of 1 exposure only, because you're not making any use of the other 2 or 3 exposures that you have.
Please try a Pixim camera or any multi-exposure sensor and you will understand this.
I think that a split-pixel approach like in the OV sensor, or a logarithmic one, is a better solution.
120dB = 1:1,000,000.
A full-moon night is 0.1 lux and summer sunshine is 100,000 lux.
So 120dB means that no exposure control is needed... Is that true?
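Just to check the arithmetic in that question (using the 0.1 lux and 100,000 lux endpoints mentioned above):

import math

lux_min, lux_max = 0.1, 100_000             # full-moon night vs. summer sunshine
ratio = lux_max / lux_min                   # 1,000,000:1
print(ratio, 20 * math.log10(ratio))        # 1000000.0 120.0

So 1:1,000,000 does correspond to 120dB, and a true 120dB sensor would nominally cover both extremes in one setting; whether that really removes the need for exposure control is discussed below.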
Dear WN,
What I mean by "whole frame time...":
Suppose one part of a scene is at 1 lux and the other part is at 100 lux (an illuminated window, for example). For a TV-rate sensor you have a 40ms frame time; this is just enough for the 1-lux sub-scene, so you cannot find time to make a second exposure in the same frame. The 100-lux sub-scene will be saturated in this case.
Actually, the two additional exposures could be quite short and take maybe 10% of your 40ms frame time. So it would not be a big degradation of the 1-lux sub-scene.
Regarding 120dB DR and the need for exposure control: a 0.1-lux full-moon scene has its own contrast, so some control is probably still needed. Also, extending the DR to 120dB requires some SNR trade-offs. I'd guess that Aptina's sensor could give a better picture if it's not pushed to the maximum DR.
I see what you mean about the 40ms frame time. If you're trying to maintain a certain frame rate then of course you'll be limited.
I can see a logarithmic sensor helping. But how does a double pixel help? A double pixel (aka iHDR) would still limit your exposure time to 40ms. So with multi-exposure HDR your max exposure is 40ms minus a small amount (about 7% for a 16x ratio... even less for 32x), whereas with iHDR you would just get the full 40ms. In your application, is that small difference significant?
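A quick check of that ~7% figure (my own arithmetic, assuming the three exposures are taken back-to-back within each row's 40ms budget and successive exposures differ by the stated ratio):

frame_ms = 40.0
for r in (16, 32):
    t1 = frame_ms / (1 + 1/r + 1/r**2)          # longest exposure if T1+T2+T3 must fit in the frame
    loss_pct = 100 * (1 - t1 / frame_ms)        # frame time given up by T1
    print(r, round(t1, 2), round(loss_pct, 1))  # 16x: ~6% lost, 32x: ~3% lost

That comes out at roughly 6% for a 16x ratio and 3% for 32x, in line with the "7%... even less for 32x" numbers above.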
As Image Sensors World said, extending the DR would cause SNR drops at the exposure transition points, but I presume this also exists with iHDR, right?
Split-pixel means two sensitivities inside one pixel. This can be done by using an attenuation mask as in the SONY patent (spatially varying exposure), or by collecting antiblooming charge (the SONY, then TI JP, solution), etc.
The integration time is the same for the hi- and lo-sensitivity pixels.
I don't think this is what the OVT sensor is doing. They have split pixels where each one has a different exposure time, as WN suggested. This is based on a presentation that Dr. Howard Rhodes (VP of Engineering at OVT) gave at the HDR symposium at Stanford University in September 2009.
Thanks
Congratulations, Johannes.
E.S.
The worst thing about these long back and forth threads is that I inevitably spend several sleepless nights reading papers and patents, mulling over the details, and scribbling calculations on envelopes (a form of back-side illumination, I suppose). Of course, it's also the best thing.
Does anyone know of a holder/socket that fits the MT9M033-series sensor? What brand and P/N? Thanks.