Tech-On: Fraunhofer Institute for Applied Optics and Precision Engineering exhibited a "super-slim camera" with a 300μm-thick optical system at Nano Tech 2011.
The camera has a 150 x 100 pixel array and is based on the compound eyes of insects. The viewing angle of the camera is 80° x 65°. A thin glass part is placed between the lens and the image sensor. The total thickness of the lens, glass part, spacer, etc. is 300μm.
Possible applications for the camera include various sensors and monitoring devices, as well as medical uses.
I'd really like to see some images from any of the plenoptic cameras recently posted. A low-light image would be a bonus.
I've tried this kind of stuff many years ago by placing a SelFoc lens array, extracted from a cheap document scanner, in front of a CMOS sensor. I saw myself multiplied in the image, quite funny. But the image quality was bad. From what I see, I guess this should also be a SelFoc lens array, no?
-yang ni
Something like this: http://www.nsgeurope.com/sla.shtml
-yang ni
I was actually thinking of the post-processed image.
The point of using SelFoc is that the image is not optically reversed. So depending on the optical configuration, you can also get a uniform optical image on the sensor surface, which is the case in a scanner.
-yang ni
There are several approaches to such a "super-slim camera" using multi-aperture optics. The one shown here uses micro-optical fabrication technology at wafer level. Hence, a batch of micro-images is created by an array of tiny microlenses. The final image is then indeed formed by post-processing the micro-images from all optical channels (as Mr. Fossum pointed out). A recent prototype is able to acquire real-time video with VGA resolution at a thickness of only 1.4mm. With a future improvement in resolution, the applications could address consumer devices like mobile phones or notebooks.
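To make the idea concrete, here is a minimal sketch of how micro-images from the channels could be assembled into one final image. This is only an illustration of the principle, not Fraunhofer's actual pipeline; the regular grid layout, per-channel inversion, and the absence of overlap handling are simplifying assumptions:

```python
import numpy as np

def stitch_micro_images(micro_images, rows, cols):
    """Tile a grid of micro-images into one final image.

    Each microlens channel sees a different part of the field of view
    and, like any simple lens, inverts its micro-image, so each
    sub-image is flipped before tiling. Overlap/interleaving between
    channels, which a real system would exploit, is ignored here.
    """
    h, w = micro_images[0].shape
    final = np.zeros((rows * h, cols * w), dtype=micro_images[0].dtype)
    for r in range(rows):
        for c in range(cols):
            sub = micro_images[r * cols + c]
            final[r * h:(r + 1) * h, c * w:(c + 1) * w] = sub[::-1, ::-1]
    return final
```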
Details about this approach and example images acquired with the device can be found here: http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-18-24-24379
Andreas Brueckner
Thanks Andreas. I enjoyed the paper and was very happy to see an image. Not so bad for early work! I hope you can find a solution to the MTF drop-off.
So, is the main advantage of this approach a reduction in camera thickness? The costs seem rather high: MTF (at least for now), noise (could be severe in low light), and chip size for a given output resolution (e.g., 3 MPix -> ~VGA). Have you been able to produce 3D images yet?
Any comments you have would be welcome. I like the plenoptic approach in many ways but I am worried about the drawbacks.
Oh, another cost is the post-processing hardware and energy.
Very interesting!!!!
To comment on Eric's questions:
Yes, you're right, the main advantage/purpose is to reduce the thickness of the camera lens module. Mobile phone and notebook makers ask for ever-thinner camera modules, so for a fixed pixel size the only way to do that could be a paradigm shift in the optical setup. It may look like one, but it is not a plenoptic camera at all.
Concerning the MTF performance: we are already working on a new prototype with enhanced image quality per channel, so the overall MTF performance will improve. Dealing with noise is the same as in other miniature camera devices (e.g., see the OVT CameraCube) - a task for the image sensor developers.
At the moment we are not interested in any 3D resolution for such a device. We are even trying to further shrink the lateral dimensions, which will in return decrease the residual parallax. This, as you pointed out, has to be done in order to achieve a better ratio of chip size to output resolution. This ratio mostly dominates the costs of such a device.
For now, the post-processing is quite simple, so it runs in real time on any embedded platform like smartphones carry today.
Andreas Brueckner
Thanks again Andreas. I was probably too loose with the term "plenoptic". Maybe I should have said "multi-optic".
On the noise, however, I wonder if there is any subtraction involved in generating the output pixels, or is it all additive computation? Generally, SNR suffers with computed imaging unless it is just positively-weighted summation of shot-noise-limited signals. The classic example is subtraction of two large signals, each with shot noise. The result is a small signal with large noise. It seems you may be doing a lot of distortion correction etc. that may involve subtraction.
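To illustrate that classic example numerically (all signal levels here are hypothetical; shot noise is simulated as Poisson):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two large shot-noise-limited signals whose difference is small.
a = rng.poisson(10_000, n).astype(float)  # noise ~ sqrt(10000) = 100 e-
b = rng.poisson(9_900, n).astype(float)   # noise ~ 99.5 e-

diff = a - b
print(diff.mean(), diff.std())     # mean ~100, std ~141 -> SNR below 1

# Positively-weighted summation, in contrast, improves SNR.
total = a + b
print(total.mean() / total.std())  # ~141, vs ~100 for either signal alone
```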
One drawback seems to be the requirement for tight alignment between the pixel array and the optics. Or you could calibrate in production. Mobile test engineers hate anything that cuts into manufacturing time.
How fast could the processing run? Video rates? A major drawback with multi-optics is that you can't turn them off. You have to run intensive processing for every image, all the time.
In the current version there is a software distortion correction involved, which includes a gray-level interpolation (please see Proc. SPIE 7875, 78750B (2011) for details). We have not yet checked how much this processing step influences the noise. However, from the system architecture point of view, there is a chance to even reduce temporal noise due to the redundant sampling across the micro-images.
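For the curious, a gray-level interpolation step of this kind typically amounts to resampling through an inverse distortion map with bilinear weights. A minimal sketch, where `inverse_map` is a hypothetical placeholder, not the published correction:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img at fractional coordinates (x, y) with bilinear weights."""
    x = np.clip(x, 0, img.shape[1] - 1)
    y = np.clip(y, 0, img.shape[0] - 1)
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

def undistort(img, inverse_map):
    """For each output pixel, look up its source position and interpolate."""
    out = np.zeros(img.shape, dtype=float)
    for yd in range(img.shape[0]):
        for xd in range(img.shape[1]):
            xs, ys = inverse_map(xd, yd)  # hypothetical distortion model
            out[yd, xd] = bilinear_sample(img, xs, ys)
    return out
```

Note that the bilinear weights are non-negative and sum to one, which connects to the noise question above: such a step is a convex combination of pixels and tends to average noise slightly rather than amplify it.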
Concerning the alignment to the image sensor: the most critical tolerance is the back focal distance (z-height), which has to be mounted within about ±10µm of the nominal value. However, this accuracy level is also found in single-aperture WLO. The only big difference between the two is that for the multi-aperture optics the degree of rotation between the optics module and the sensor plane also matters. Any residual rotation makes the post-processing much more complex. But keep in mind: the alignment of wafer-level optics is never going to be done by hand.
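A quick back-of-the-envelope check of why rotation matters (the half-width and pixel pitch below are assumed values, not the prototype's actual dimensions):

```python
import math

half_width_um = 2000.0  # assumed distance from rotation center to sensor edge
pixel_pitch_um = 3.2    # assumed pixel pitch

for theta_deg in (0.01, 0.05, 0.1):
    shift_um = half_width_um * math.radians(theta_deg)  # arc-length displacement
    print(f"{theta_deg}° -> {shift_um:.2f} µm "
          f"({shift_um / pixel_pitch_um:.2f} px) shift at the sensor edge")
```

Under these assumptions, even a tenth of a degree displaces the outermost micro-images by about a pixel, so the post-processing would have to compensate each channel individually.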
With the current approach video rates of 30 fps would be no problem.
Andreas Brueckner
Processing only uses additions and multiplications with constants at the moment, so I don't expect any detrimental effects on noise. As soon as you start to do demosaicing or deconvolution, that will involve negative coefficients, but that is true for single-aperture cameras as well.
Processing can be decreased a lot or even eliminated with clever optics/sensor design.
Alexander Oberdörster
Well, Alexander, why not take and post an image at 100 lux, comparing your sensor (fairly) to some other benchmark sensor? I am not saying it will be a lot worse, but it is an easy thing to do in the lab and would be useful for promoting your technology.
I'm wondering, are these systems fixed-focus?