Pixpolar's blog post on low-light imaging mentions NDCDS read-out allowing picture deblurring at long exposures. The company has prepared an HTML5 demonstrator showing NDCDS capabilities here (for modern desktop browsers only). The demo shows 2s-exposure images blurred by camera shake and says: "If you are lucky and your camera has a MIG sensor you can correct this image. From MIG sensors you get more low noise data and with intelligent algorithms the data can be used to correct the image blur."
What if a person is walking by during the 2s exposure? What will happen to the picture?
I'd guess it works pretty much like any image stabilization - it reduces camera shake, not scene changes.
Even if a MIG passes by during the 2s exposure, they can still find the type of the MIG: say, 15, 17, 21, 25, etc...
I really want them to show how the image would look if a person walks by - a very common scene anyway. Perhaps many "ghosts" would show up in the image?
Just to make it clear, this app is only a concept demonstrator. There is no true image-deblurring algorithm implemented in it. Pixpolar is developing a novel image sensor technology which enables Non-Destructive CDS. Image deblurring is a hot topic in computational photography, as Pixpolar writes in their blog: http://www.pixpolar.com/2011/11/low-light-imaging/
Why is NDCDS necessary here?
In low light, when very long exposure times are required, the image sensor can be sampled at the maximum frame rate in order to digitally remove image blur due to subject movement or camera motion. In traditional image sensors, however, the more often you sample the sensor, the higher the read noise. For example, if an image sensor is sampled 100 times during a long exposure, the overall read noise will be roughly 10 times higher than the read noise of a single read-out, because of the destructive CDS read-out. In MIG sensors, on the other hand, the overall read noise of 100 read-outs will be roughly 0.1 times the read noise of a single read-out, thanks to the non-destructive CDS read-out. If we assume that the read noise of one read-out is the same in MIG and in traditional image sensors, then after 100 read-outs the dynamic range at the low end of the scale will be 100 times better in MIG sensors than in traditional sensors. This means that with the multiple read-out method, dynamic range is sacrificed in traditional sensors whereas it is improved in MIG sensors.
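[Editor's note: the 10x-versus-0.1x claim can be checked with a quick Monte Carlo sketch. This is a toy model of my own, not Pixpolar's implementation: destructive CDS is modeled as summing N noisy sub-frames, and non-destructive CDS, in the simplest idealization, as re-reading the same accumulated charge N times and averaging the reads.]

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
N = 100          # number of read-outs during one exposure
sigma = 1.0      # read noise (e-) of a single read-out
signal = 50.0    # total photo-charge (e-) collected over the exposure

# Destructive CDS: each read-out measures only the charge collected since
# the previous read-out (signal/N), and every measurement adds read noise.
sub = signal / N + rng.normal(0, sigma, (trials, N))
destructive = sub.sum(axis=1)          # summed sub-frames

# Non-destructive CDS (toy model): the charge is not destroyed by reading,
# so the same accumulated signal can be re-read N times and averaged.
reads = signal + rng.normal(0, sigma, (trials, N))
nondestructive = reads.mean(axis=1)

print(f"destructive     noise ~ {destructive.std():.2f} e-  (theory {sigma*np.sqrt(N):.1f})")
print(f"non-destructive noise ~ {nondestructive.std():.3f} e- (theory {sigma/np.sqrt(N):.1f})")
```

With N = 100 the simulated noises come out near 10 e- and 0.1 e-, i.e. the factor-of-100 gap in low-end dynamic range described above.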
To give a practical example of how the multiple read-out method would work in low light during a very long exposure, imagine, e.g., a person walking through the scene during the exposure. In the digital domain the information of the person would simply be cut away, so that only information of the static scene would be present in the image. Alternatively, one could freeze the motion of the person at any location along the person's path through the scene. The SNR of the person would, however, be quite a bit lower than the SNR of the scene.
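[Editor's note: the comment does not say which algorithm would cut the person away. One standard trick consistent with the idea, sketched here purely as an assumption, is a per-pixel temporal median over the frame stack: a transient object covers any given pixel in only a minority of the frames, so the median keeps the static scene while a plain average would leave ghosts.]

```python
import numpy as np

rng = np.random.default_rng(1)
frames, h, w = 20, 8, 8
scene = rng.uniform(50, 100, (h, w))                       # static background
stack = np.tile(scene, (frames, 1, 1)) + rng.normal(0, 2, (frames, h, w))

# A bright "person" crosses the scene: in each frame it occupies a
# different column, so any given pixel is occluded in only a few frames.
for k in range(frames):
    stack[k, :, k % w] += 150.0

# Per-pixel temporal median rejects the transient; a plain mean does not.
recovered = np.median(stack, axis=0)
mean_img  = np.mean(stack, axis=0)

print("median error:", np.abs(recovered - scene).mean())
print("mean   error:", np.abs(mean_img - scene).mean())
```

In this toy run the median reconstruction stays within the read-noise level of the true scene, while the mean image carries a visible streak where the person walked.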
Thanks for your explanation. I guess I am still missing something here. Doesn't the read noise remain the same no matter whether the read-out is destructive or non-destructive? In my understanding, the reason why non-destructive read-out is helpful is that it bins the signal before reading out, and that helps the SNR. But in your case, since you want every frame to be sharp, I do not see how you could sum the signals before read-out. It would help when you are doing HDR, where you mimic a long exposure, but not here.
With respect to the person walking through the scene, your proposal would probably need optical flow or something like that to identify the person. That may be especially challenging in low light, given the low SNR of the images. Even if it is successful, the SNR transition you mentioned could be pretty visible. Anyhow, it's my guess.
In the example the read noise of one read-out was assumed to be the same for destructive and non-destructive CDS read-out, so, as you correctly pointed out, for a single read-out there is no difference between the two methods. However, if multiple read-outs are used, the read noise adds up in destructive CDS read-out, whereas in non-destructive CDS read-out it is averaged out. The reason is that the more often a destructive read-out is performed, the smaller the individual signals are that are measured and added together. Therefore, with multiple destructive read-outs the overall read noise is the read noise of a single measurement multiplied by the square root of the number of reads. With non-destructive read-out, on the other hand, the signal keeps growing and is not influenced by how many times it has been read, and therefore the read noise can be averaged out.
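[Editor's note: a slightly more realistic sketch, again my own assumption rather than the author's algorithm, spreads the non-destructive reads over the exposure, so each read sees the charge integrated so far. Fitting a straight line through the growing reads then recovers the total signal with a read noise of roughly sigma * sqrt(3/N), the same order of improvement as the simple averaging argument above.]

```python
import numpy as np

rng = np.random.default_rng(2)
trials, N, sigma, S = 100_000, 100, 1.0, 50.0

k = np.arange(1, N + 1)
# Non-destructive reads taken DURING the exposure: the k-th read sees the
# charge integrated so far, (k/N)*S, plus one read-out's worth of noise.
samples = (k / N) * S + rng.normal(0, sigma, (trials, N))

# Least-squares fit of a line through the origin; the fitted slope times N
# estimates the total signal collected over the whole exposure.
slope = samples @ k / (k @ k)
estimate = slope * N

print(f"noise of estimate ~ {estimate.std():.3f}  (theory {sigma*np.sqrt(3/N):.3f})")
```

For N = 100 the estimate's noise is about 0.17 sigma instead of the 0.10 sigma of the idealized re-read-and-average model; the exact factor depends on the estimator, but the square-root-of-N scaling survives.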
In CCDs the detection of faint signals can be improved, at the expense of resolution, by binning pixel signals together in the spatial domain. For example, if the signals of 4 pixels are binned together, the signal goes up 4 times while the read noise remains the same, so 4 times fainter signals can be detected. The resolution, however, is reduced to one fourth. With non-destructive CDS read-out, binning is not necessary: by measuring the signal in one pixel 16 times, the pixel-specific read noise can be reduced to one fourth, meaning that 4 times fainter signals can be detected without losing any resolution compared to reading the pixels only once.
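[Editor's note: the binning-versus-multiple-reads comparison can be illustrated numerically. This is a toy model with Gaussian read noise; the specific numbers are illustrative, not measured sensor data.]

```python
import numpy as np

rng = np.random.default_rng(3)
trials, sigma = 100_000, 1.0
faint = 1.0                      # per-pixel signal (e-), at the read-noise level

# CCD binning: the charge of 4 pixels is summed on-chip BEFORE the single
# noisy read-out, so the signal is 4x while read noise stays that of one read.
binned = 4 * faint + rng.normal(0, sigma, trials)

# Non-destructive CDS: one pixel is read 16 times and the reads averaged,
# cutting the read noise to sigma/4 with no loss of resolution.
multi = faint + rng.normal(0, sigma, (trials, 16)).mean(axis=1)

print(f"binning  SNR ~ {4 * faint / binned.std():.1f}   (resolution / 4)")
print(f"16 reads SNR ~ {faint / multi.std():.1f}   (full resolution)")
```

Both routes reach roughly the same factor-of-4 sensitivity gain, but only the binning route pays for it in resolution.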
The shape of the person walking by can be determined by using information in both the spatial and temporal domains. This could be done, e.g., with Nocturnal Vision's algorithms (www.nocturnalvision.se). To be certain that no information of the person ends up in the final image, one can always remove from each frame a slightly larger area than is absolutely necessary.
Thanks. Hope to see more demos on your site soon.