Pixpolar published a post on the advantages of using its MIG pixel in security and surveillance applications.
At low illumination levels, the frames in a video stream might need to be combined to get a sufficient SNR for identification purposes. In most sensors, frame addition also adds the read noise, so the SNR grows only as the square root of the number of frames. Thanks to its non-destructive read capability, Pixpolar's MIG pixel can be reset once per many video frames. The individual video frames are then extracted by subtraction, while the long frame integrated between resets carries just a single read-noise component.
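The read-noise advantage described above can be sketched numerically. Below is a minimal Monte Carlo sketch with purely hypothetical figures (4 e-/frame signal, 2 e- rms read noise, 25 frames), comparing the SNR of summing conventionally reset frames against one long non-destructive integration that carries a single read-noise component:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 25           # hypothetical: one second of video at 25 Hz
signal_per_frame = 4.0  # e-/frame, hypothetical low-light level
read_noise = 2.0        # e- rms per read, hypothetical

trials = 100_000
# Conventional sensor: each frame is reset and read, so summing N frames
# accumulates N independent read-noise contributions.
conv = sum(
    rng.poisson(signal_per_frame, trials) + rng.normal(0, read_noise, trials)
    for _ in range(n_frames)
)
# Non-destructive sensor: charge accumulates between resets, so the long
# integrated frame carries only a single read-noise contribution.
nondes = (rng.poisson(signal_per_frame * n_frames, trials)
          + rng.normal(0, read_noise, trials))

snr_conv = conv.mean() / conv.std()
snr_nondes = nondes.mean() / nondes.std()
print(f"summed-frame SNR: {snr_conv:.2f}, long-frame SNR: {snr_nondes:.2f}")
```

With these numbers the summed frames are limited by accumulated read noise (total noise sqrt(100 + 25*4) e-), while the long frame is close to the shot-noise limit.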
Pixpolar's demo of the idea is marred by the fact that it assumes the CIS sensor has a lower QE than its MIG pixel (it is not clear why), but it illustrates the approach:
Nice simulation. Reminds me of DIS. But would be far more impressive if there was a real sensor here.
So, what would happen if you didn't reset the CMOS APS FD between frames? What would the simulation look like then?
In this case, the FD would simply be saturated by its dark current :)
In this model, dark current is 0 e-/s
You get me!!!
The dark current rates in the simulation are low because thermo-electric cooling of the sensors is assumed, which is a viable assumption for high-end outdoor security and surveillance applications.
If the CMOS APS FD were not reset between frames, one would obtain, first of all, the normal destructive CDS results of signals integrated during single frames in standard 4T pixel read-out fashion. On top of that, one could monitor the accumulation of the entire signal during the 2 second examination period in non-destructive, non-CDS 3T pixel read-out fashion. The question is whether there is any use for these non-destructive non-CDS read-outs.
Bearing in mind that in the non-CDS read-out the low-frequency components of 1/f noise cannot be removed, and that the noise power of 1/f noise increases dramatically towards lower frequencies, one can expect the 1/f noise during the 2 second examination period to go through the roof. In addition, the interface leakage current collected by the floating diffusion would increase the dark noise considerably. Thus I don't see any practical reason for acquiring the non-destructive non-CDS read-outs. Besides, in order to cope with pixel saturation one would need to incorporate additional reset selection transistors into the pixels.
Artto, my point is that you have to compare apples to apples. In fact, if it is low light, the CMOS APS will not saturate. In this case one would not want to read multiple frames, one would just integrate for a longer period of time. On the other hand, integration with multiple reads does give one a chance to best-fit a straight line to the multiple outputs from the pixel and between this and a final reset at the end of the multiple frames, most of the 1/f noise will be suppressed. This is well known in scientific IR imaging circles.
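Whether slope fitting helps with 1/f noise is exactly what is debated further down, but its effect on white read noise is easy to sketch. The following minimal simulation, with hypothetical numbers, compares estimating the signal rate from only the first and last non-destructive samples against a least-squares straight-line fit through all samples up the ramp:

```python
import numpy as np

rng = np.random.default_rng(1)
n_reads = 16       # non-destructive samples up the ramp (hypothetical)
rate = 5.0         # e-/frame signal rate, hypothetical
read_noise = 3.0   # e- rms white read noise per sample, hypothetical
t = np.arange(n_reads)

trials = 20_000
# Each trial: non-destructive samples of a linearly accumulating signal,
# corrupted by independent (white) read noise at every sample.
ramps = rate * t + rng.normal(0, read_noise, (trials, n_reads))

# Estimate the rate two ways:
# 1) difference of the last and first samples only
diff_est = (ramps[:, -1] - ramps[:, 0]) / (n_reads - 1)
# 2) least-squares slope through all samples ("up the ramp" fit)
slope_est = np.polyfit(t, ramps.T, 1)[0]

print(f"diff-estimate scatter:  {diff_est.std():.3f} e-/frame")
print(f"slope-estimate scatter: {slope_est.std():.3f} e-/frame")
```

The fitted slope has visibly lower scatter than the two-point estimate; this demo covers only the white-noise case and says nothing by itself about the 1/f components discussed below.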
I think the MIG is a fine device concept, but it has to be compelling to be adopted, so these apples-to-apples comparisons are very important. You have to operate both devices in the optimal way for a particular scenario. If your scenario is low light, then QE and read noise are critical. Improving QE in NIR is easy in CMOS if you are going to change the process, as you propose with the MIG. And with read noise at a few electrons rms, read noise is not the critical roadblock.
Eric, as far as I understand, 1/f noise cannot be suppressed by averaging correlated read-out results. In other words, the amount of 1/f noise in the final read-out at the end of the integration period does not depend on the number of correlated read-outs performed during the integration period. With non-destructive non-CDS read-out (e.g. a 3T CMOS pixel) you can only suppress uncorrelated noise components in the total noise, i.e. you cannot reduce 1/f or shot noise.
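The correlated-versus-uncorrelated distinction can be sketched with a toy model: averaging many non-destructive reads beats down white (uncorrelated) read noise, but an offset common to all reads, used here as a crude stand-in for the low-frequency 1/f components, survives the averaging. All figures below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n_reads, trials = 64, 50_000
white = 3.0  # e- rms uncorrelated read noise per sample, hypothetical
corr = 3.0   # e- rms noise fully correlated across the reads,
             # a stand-in for low-frequency 1/f components

# Each trial holds one charge packet; every non-destructive read sees
# fresh white noise plus the same correlated offset.
reads = (rng.normal(0, white, (trials, n_reads))
         + rng.normal(0, corr, (trials, 1)))

single = reads[:, 0].std()           # noise of a single read
averaged = reads.mean(axis=1).std()  # noise after averaging all reads
print(f"single read: {single:.2f} e-, averaged: {averaged:.2f} e-")
```

The averaged result drops towards, but never below, the correlated 3 e- floor, while the white component is suppressed by roughly sqrt(64).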
Although the CMOS APS is cooled and operated in low light, it will eventually saturate and need to be reset.
The reason why the CMOS image sensor is read multiple times is that in security and surveillance applications a video stream, e.g. at 25 Hz, is stored in memory. If a frame rate lower than 25 Hz is used, humans start to notice the individual frames, which irritates the user and results in a bad user experience.
One could naturally argue that the frame rate or the integration time of individual frames could be adjusted according to the subject's movements. This requires, however, a lot of processing power and sophisticated algorithms, and it can be implemented only if the movements of the subject are predictable, since the integration time would need to be preset.
However, if the subject suddenly stops moving and remains still for an arbitrarily long period of time, there is no way to preset the integration time accordingly. This actually leads to the contradiction that you would need to read out the sensor continuously in order to know when the integration should be stopped.
Besides, there is the problem that if the frame rate or integration time is preset according to the movements of a slowly moving subject, there may be other, faster moving subjects in the scene whose image quality would be ruined by motion blur.
The QE in NIR can be improved relatively easily in a 3T CMOS pixel by applying a thick, fully depleted sensor architecture. The 3T pixel suffers, however, from high dark and read noise. The QE in NIR is far more difficult to improve in a 4T CMOS pixel through a thick, fully depleted sensor architecture. There are several reasons for this.
First of all, one should be able to design the floating diffusion in such a manner that it does not collect any signal charges, i.e. only the pinned photo-diode should collect the signal charges. Secondly, one should be able to design the pinned photo-diode, the transfer gate, and the floating diffusion so that complete signal charge transfer from the pinned photo-diode to the floating diffusion is realized. Thirdly, one should be able to design the transfer gate such that no interface leakage is collected by the pinned photo-diode.
The problem is that the fully depleted substrate radically changes the design of the entity comprising the pinned photo-diode, the transfer gate, and the floating diffusion. Thus the task is considerably challenging. And even if it could be realized, the 4T CMOS pixel would still suffer from accumulation of read noise.
Sorry Artto but I don't agree with any of your 3 posts above, as far as the overall points are. But, thanks for taking time and expressing your opinion (vs. my opinion!).
Eric, could you please explain a bit about your statement: "integration with multiple reads does give one a chance to best-fit a straight line to the multiple outputs from the pixel and between this and a final reset at the end of the multiple frames, most of the 1/f noise will be suppressed."
In this matter I must agree with Artto. I don't think it's possible to suppress 1/f noise in that way. I would appreciate it if you could point me to a publication or some other source where this is explained.
Thanks.
http://adsabs.harvard.edu/full/1990ApJ...353L..33F
I am not sure this is the first time this was published - possibly not, but I think in NIR astronomy circles this has become known as "Fowler sampling" after Alan Fowler.
Thanks for the link. I went through the publication and there was no explanation on how to suppress 1/f noise. I'm familiar with Fowler and "up the ramp" sampling and while those methods are effective in suppressing white noise I don't think they can tackle 1/f noise. Please correct me if I'm wrong and let me know if there is any publication on the matter!
That is what I get for responding at 4 am. This was not actually the paper I was thinking about, even though it is interesting. I will have to find the right paper. My recollection is that sampling an initial and a final reset level helps suppress lower-frequency 1/f noise that can otherwise impact the average slope of the integration curve. If I find that paper I will let you know.
It's really nonsense to compare a "virtual" technology with products that are commercially available.
It is not nonsense. If you are contemplating spending money developing a new technology, it is sensible to at least compare it as best you can to existing technology. And if you are trying to license a technology, like perhaps MIG, you have to show why you think it is better than existing technology before anyone else will consider spending funds to develop it. In this case, though, you should make a reasonable model for both devices, and make the effort to show both devices in the best possible light. I don't think that happened here.
Hi there,
The reason for the difference between the quantum efficiencies of the CIS and MIG sensors is that in the darkest conditions, i.e. during moonless overcast nights, near infra-red light dominates over visible light. Since under such circumstances the photon flux per nm increases towards 1000 nm, and since the quantum efficiency of silicon-based sensors decreases towards 1000 nm, the quantum efficiency at 850 nm was chosen as an approximation for the weighted average of the quantum efficiency against the night-time spectrum between 400 nm and 1000 nm. This should naturally have been mentioned in the text - sorry for the lack of explanation.
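The weighting argument can be illustrated with a toy calculation; the flux and QE numbers below are purely hypothetical stand-ins for a measured night-sky spectrum and a sensor QE curve:

```python
import numpy as np

# Hypothetical night-sky photon flux weighting and QE curve; real values
# would come from measured spectra and sensor characterization data.
wavelength = np.array([400, 550, 700, 850, 1000])  # nm
flux = np.array([0.2, 0.4, 0.8, 1.5, 2.0])         # relative photons/nm
qe = np.array([0.60, 0.55, 0.40, 0.25, 0.05])      # hypothetical QE curve

# Spectrum-weighted average QE over 400-1000 nm
weighted_qe = (qe * flux).sum() / flux.sum()
print(f"spectrum-weighted QE: {weighted_qe:.3f}")
print(f"QE at 850 nm:         {qe[wavelength == 850][0]:.3f}")
```

With a flux weighting that rises towards 1000 nm, the single-point QE at 850 nm lands close to the spectrum-weighted average, which is the approximation described above.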
The reason for the better quantum efficiency of the MIG sensor compared to a high-end 4T CIS at 850 nm is that the MIG sensor has a thick, fully depleted, back-side illuminated design. The fully depleted sensor architecture also reduces cross-talk, improving identification in all lighting conditions.
I guess the most challenging problem with this device is controlling each pixel's signal linearity. If the linearity fluctuation is too large, there is no way to correct it. (^^)
In our single-pixel demonstrators we have analyzed the linearity against accumulation of dark current and we did not see any problems with linearity. In such a test the non-linearity would only be masked if changes in charge conversion gain and dark current generation rate cancelled each other out throughout the whole pixel operating range. And needless to say, in order to fully characterize the linearity, optical tests would need to be performed.
I actually would like to see the simulation with the same QE for both MIG and CIS. Only then is it a fair comparison.
The comparison is perfectly fair from a QE point of view in low-light outdoor security and surveillance applications, since NIR dominates at the smallest photon flux levels (moonless overcast nights). At 850 nm the difference in QE is as presented. In case you want to have the comparison at the same QE, you can easily do so at imager-simulator.appspot.com by just plugging in the numbers.
The comparison is, however, not fair from a cross-talk point of view, since the difference between the Point Spread Functions (PSF) of a thick NIR-optimized CIS and the MIG sensor is not taken into consideration. Due to its fully depleted architecture the MIG sensor has a significantly better PSF and therefore better cross-talk performance, meaning that the image quality and therefore the identification is improved in all lighting conditions.
Before talking about moonless overcast nights, there is a lot of road to go. Maybe Artto lives on the moon, so there is always moonlight??
Maybe there are already a lot of people along the road avoiding the moonless overcast nights; which way do you prefer?
Artto, you have spent so much time to simulate, compare, etc. Please make a simple device to demonstrate your revolutionary technology!!!