On the popular edaboard.com forum, user xihuwang asks:
"need cmos image sensor expert's help
Hi ,
Anyone can teach me how to design pixel ( using pinned photodiode) and timing control, and analog process chain to get pixel bin function ( change resolution 1/4) ?
Thanks forwards !"
A DPReview forum thread has an interesting question on whether photon shot noise is fundamental. User Detail Man points to the Encyclopedia of Laser Physics and Technology article on shot noise, which discusses rarely mentioned amplitude-squeezed light sources that achieve lower intensity variations at the expense of increased phase noise. In that way, sub-shot-noise light can be generated. Most of the referenced squeezed-light papers are from the 80s; it is rather esoteric knowledge now.
Phase space representation of amplitude-squeezed light.
Leaving aside the limitations of squeezed light sources, the sensor QE needs to be close to 100% so as not to re-introduce shot noise in the process of photon absorption. Anyway, it's an interesting way to push a seemingly fundamental limit.
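The QE requirement can be checked with a minimal NumPy sketch (the photon numbers are illustrative): a perfectly regular, zero-variance photon stream hits a detector that absorbs each photon independently with probability QE, i.e. binomial thinning. The Fano factor (variance/mean) of the detected counts comes out as 1 - QE, so only a QE near 100% preserves sub-shot-noise statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_photons = 10_000   # hypothetical noiseless source: exactly this many photons per frame
n_frames = 20_000

def detected_fano(qe):
    """Fano factor (variance/mean) of the detected counts when each photon
    of a zero-variance stream is absorbed independently with probability qe."""
    counts = rng.binomial(n_photons, qe, size=n_frames)
    return counts.var() / counts.mean()

fano_50 = detected_fano(0.50)   # ~0.5: half the shot noise is back
fano_99 = detected_fano(0.99)   # ~0.01: sub-shot-noise statistics survive
```

A Fano factor of 1 corresponds to ordinary Poisson shot noise, 0 to a perfectly noiseless count.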
The wave-particle duality is always an interesting conundrum when considering photon shot noise. We often like to think of low light as a very low amplitude, single-phase EM wave. So, what does an EM wave with shot noise look like, given that shot noise theoretically arises in the photon emission process? Or is shot noise only realized when the EM wave is occasionally absorbed, even when the QE is 100%? I have discussed this with heterodyne and homodyne detection experts, and it sometimes seems they speak a completely different language.
So, suppose you had an LED-type device with 100% emission efficiency. And suppose you had a gated device in series that could admit exactly one electron at a time (think: a CCD with a one-electron full well). Thus, one electron in, one photon out. In this case you could control the photon emission rate with almost no shot noise. Basically, it is a pulsed LED with one photon per pulse. This would be an interesting experiment to perform.
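As a rough illustration of why such a one-electron-in/one-photon-out source would beat an ordinary source, here is a small simulation (the device itself is hypothetical and the numbers are illustrative) comparing the count statistics of a Poisson source with those of a perfectly gated emitter:

```python
import numpy as np

rng = np.random.default_rng(1)
mean_photons = 100   # photons per exposure
n_trials = 50_000

# Ordinary source: the photon count per exposure is Poisson distributed.
poisson_counts = rng.poisson(mean_photons, size=n_trials)
snr_poisson = poisson_counts.mean() / poisson_counts.std()   # ~ sqrt(100) = 10

# Hypothetical gated one-electron-in/one-photon-out emitter: exactly one
# photon per clock pulse, so the count per exposure is deterministic.
gated_counts = np.full(n_trials, mean_photons)
gated_noise = gated_counts.std()   # 0: no shot noise at all
```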
Any answer to the edaboard question ;)?
The edaboard question is pretty easy. 1) Go to an accredited school with a BS program in EE or a similar discipline. 2) Complete the degree. 3) Go to an accredited school for an MS in EE or a related discipline, specialize in device physics, optics, or analog design, and complete that degree. 4) Optional: do it again at the Ph.D. level. 5) Gain employment at a CMOS imager company or laboratory. 6) Tackle one aspect of the problem posed, while learning from knowledgeable experts in the parts you're missing. 7) Move around job functions over the years until you have your answer, provided the company stays solvent. I'm still working on steps 6 and 7.
What are the implications of phase uncertainty in the measurement process? Since, according to the linked articles, absorption drives the light back to the classical regime, this could not be used with a standard Bayer-type device (or anything that uses something like a CFA). But we're probably talking about very narrow bands anyway, so maybe that does not matter. Could this be modeled with advanced FDTD methods? That approach is classical, but it tends to converge to the right answers and is used for photonic band gap and nonlinear materials.
At high intensity, wave-particle duality may not be necessary. A photon re-emitter could be a nice invention. It would absorb photons that arrive randomly and then re-emit them with a fixed time difference between them. The entropy change during the process is another story, but uniformly emitted photons (in time) would give a higher SNR.
However, as Vladimir pointed out, this time the silicon sensor becomes the problem. You'll never know at what time and at what depth the silicon will absorb the photon. It needs a new material...
At high intensity, the field can be treated classically. But at high intensity you're also not shot-noise limited, so it's kind of moot. What would be cool is a low-light sensor that is not governed by the shot-noise limitation. The odds seem pretty slim: squeezing techniques can't be applied to natural lighting conditions.
Last Anonymous: "But at high intensity, you're also not shot-noise limited"
Is this true?
At higher illumination levels, the SNR of an image sensor is typically photon-shot-noise limited. Probably what was meant was that the SNR is already so high that we don't worry much about noise.
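For an ideal shot-noise-limited photon counter, this is the familiar SNR = N / sqrt(N) = sqrt(N) scaling; a quick sketch of the numbers:

```python
import math

# Shot-noise-limited photon counter: signal = N, noise = sqrt(N),
# so SNR = sqrt(N); in dB, 20*log10(sqrt(N)) = 10*log10(N).
snr = {n: math.sqrt(n) for n in (100, 10_000, 1_000_000)}
snr_db = {n: 10 * math.log10(n) for n in snr}
# e.g. 100 photons -> SNR 10 (20 dB), 1e6 photons -> SNR 1000 (60 dB)
```

So at high illumination the shot noise is still there, but the SNR is large enough that it rarely dominates the image quality budget.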
High intensity generically means the classical limit, but not always: a sufficiently sensitive detector can still be shot-noise limited. A neat example comes from experiments on gravitational wave detection. These use high-powered lasers to try to detect, interferometrically, the movement of mirrors in a laser cavity caused by a passing gravitational wave (such waves are predicted by Einstein's theory of gravity). One of the limitations on the sensitivity and bandwidth of the detector is shot noise in the laser light; because of the shot-noise limit, researchers are gradually increasing the laser power to improve SNR, and are experimenting with squeezing of the laser light in the interferometer to boost performance:
http://phys.org/news131883538.html
Thanks for the catch; yes, ERF captured what I meant to say!
That was the best Opening Ceremony yet! (Probably with image sensors in there somewhere...) This morning I'm out before sunrise, training for a 2020 try-out with added zip in my step. More fun than steps 6 and 7, though admittedly there's a lot of getting up early when I'd rather be sleeping. Kind of like steps 1 through 4 that way.
Another interesting question is whether noiseless light reflecting off a scene re-introduces shot noise. Say some subject reflects 20% of noiseless light: do those 20% remain noiseless? It looks to me like they do not, as reflection should be a statistical process.
Very nice question. I think that 20% of the light will still be less noisy than a natural light source.
When noiseless (or lower-noise) light is incident on an object, a single reflection from the surface may not be enough to make the noise perfectly white in the frequency domain. A series of random reflections is needed. After each reflection the noise becomes whiter and whiter, finally approaching the theoretical Poisson distribution of photons with random arrival times.
Since all natural sources and reflections have white noise, we should focus on making the light noiseless just in front of the photodetector.
That's what I thought... It's good to see such discussions on the blog, instead of a series of reverse-engineering tear-down reports.
I guess, Vlad, you mean a Lambertian surface with 20% reflectance. And I think you mean a real 3D world, not a 1D world. So consider a wavefront propagating in the z-direction. How are the noiseless photons on a wavefront spaced apart in x-y in your question? Are they on some regular grid? Due to the properties of the surface, photons reflected toward a camera aperture come from a variety of positions. As such, there will be considerable randomization in their x-y spacing as well as in the z (i.e. arrival time) direction. So even 100% reflection will not preserve noiseless light in this case.
Now, to the question of turning shot-noise light into noiseless light. The purpose of noiseless light is to be able to accurately estimate the photon flux with the fewest number of detected photons. Such a noisy-to-noiseless light "filter" must collect photons and then re-emit them at a rate that accurately reflects the average photon flux. Thus, it must also perform some integration step as part of its filter function, not unlike a low-pass electrical circuit. The time response of this filter must be exactly equal to the time required to integrate noisy photons with a detector and estimate the photon flux. Thus, at least in my thought experiment, there is no advantage to such a noisy-to-noiseless photon filter before the detector.
Actually, I did not consider a Lambertian surface; I thought in 1D. My logic was that if the noiseless light is weak, like 1 photon per second (exactly), every photon hitting the surface should have a 20% chance of being reflected. Obviously, neither the photon nor the surface has any way to know the history, that is, whether the previous photon, which arrived a full second ago, was absorbed or reflected. Since there is no history, each photon has an independent 20% probability of being reflected. This should generate close-to-Poisson statistics and almost the full shot-noise standard deviation. As another commenter mentioned above, multiple reflections should bring us even closer to Poisson, but even after a single one a good part of the shot noise would be reinstated.
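This per-photon coin-flip argument can be checked numerically. A sketch (illustrative numbers), treating each reflection as binomial thinning of a perfectly regular beam: one 20% bounce already pushes the Fano factor (variance/mean) to 1 - 0.2 = 0.8, and repeated bounces, with overall survival probability p^k, push it toward 1, i.e. full Poisson shot noise:

```python
import numpy as np

rng = np.random.default_rng(2)
n_incident = 10_000   # noiseless beam: exactly this many photons per frame
n_frames = 20_000
p = 0.2               # reflectance: each photon independently survives a bounce

# One bounce: binomial thinning of a deterministic photon stream.
one_bounce = rng.binomial(n_incident, p, size=n_frames)
fano_1 = one_bounce.var() / one_bounce.mean()    # -> 1 - p = 0.8

# Three bounces: overall survival p**3, so Fano -> 1 - p**3 = 0.992 (~ Poisson).
three_bounces = rng.binomial(n_incident, p**3, size=n_frames)
fano_3 = three_bounces.var() / three_bounces.mean()
```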
I agree; if we consider Lambertian reflection and 3D effects, there are even more reasons to get the noise back.
Very interesting question! If the spatial coherence of the noiseless light can be controlled (and that's a can of worms right there), this suggests some interesting contrast and depth mapping techniques. Use multiple sources and segregate regions of the image by noise level, everything else held constant. Just thinking out loud...
Good idea. Current ToF image sensors don't use the noise component of the reflected laser light; only the intensity is used. There is still extra information out there, within reach of future CMOS operating frequencies...
Yikes. Getting a good estimate of the noise, even when it is shot noise, is much harder than getting a good estimate of the average flux. So aside from the near-fantasy above (using noise added by reflection to a squeezed light source), you now have fantasy squared.
Anyone who has measured noise vs. signal knows how much data has to be gathered to properly estimate the noise. Basically, the noise has noise!
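The "noise has noise" point can be quantified with a quick simulation (illustrative numbers): across repeated experiments, the sample variance of a Poisson signal scatters far more, in relative terms, than the sample mean does, so many more frames are needed to estimate the noise than to estimate the flux:

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 100.0         # mean photon count; Poisson, so the true variance is also 100
n_samples = 100     # frames available for each estimate
n_repeats = 20_000  # repeated experiments, to see how the estimates scatter

data = rng.poisson(lam, size=(n_repeats, n_samples))
mean_est = data.mean(axis=1)              # flux estimate per experiment
var_est = data.var(axis=1, ddof=1)        # noise estimate per experiment

# Relative scatter of each estimator across repeated experiments:
rel_err_mean = mean_est.std() / lam       # ~1%: the flux estimate
rel_err_var = var_est.std() / lam         # ~14%: the noise estimate is far worse
```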
"a low-light sensor that is not governed by the shot noise limitation"
For squeezed-light applications (such as gravitational wave detectors), homodyne detectors are used. See http://www.squeezed-light.de/