Some excerpts below:
On the question of whether dedicated cameras are better than today's smartphone cameras, the author argues:
“yes, dedicated cameras have some significant advantages”. Primarily, the relevant metric is what I call “photographic bandwidth” – the information-theoretic limit on the amount of optical data that can be absorbed by the camera under given photographic conditions (ambient light, exposure time, etc.). Cell phone cameras only get a fraction of the photographic bandwidth that dedicated cameras get, mostly due to size constraints.
There are various factors that enable a dedicated camera to capture more information about the scene (a back-of-the-envelope sketch of the aperture-area effect follows the list):
- Objective Lens Diameter
- Optical Path Quality
- Pixel Size and Sensor Depth
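To make the objective-lens-diameter point concrete, here is a rough, hedged sketch (not from the article) comparing how much light two cameras can collect per unit exposure time. The focal lengths and f-numbers are assumed, typical values: roughly 6 mm at f/1.8 for a phone main camera and a full-frame 50 mm f/1.8 prime for the dedicated camera. The aperture-area ratio is a crude stand-in for the "photographic bandwidth" gap, ignoring sensor and optical-path differences.

```python
# Back-of-the-envelope comparison of light-gathering ability between a phone
# camera and a dedicated camera. Focal lengths and f-numbers below are assumed
# typical values, not figures taken from the article.

def entrance_pupil_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """Entrance pupil diameter = focal length / f-number."""
    return focal_length_mm / f_number

def relative_light_gathering(focal_a: float, f_a: float,
                             focal_b: float, f_b: float) -> float:
    """Ratio of aperture areas: how many more photons camera A collects
    than camera B from the same scene, per unit exposure time."""
    d_a = entrance_pupil_diameter_mm(focal_a, f_a)
    d_b = entrance_pupil_diameter_mm(focal_b, f_b)
    return (d_a / d_b) ** 2

# Assumed: full-frame 50 mm f/1.8 prime vs. phone main camera (~6 mm, f/1.8).
ratio = relative_light_gathering(50.0, 1.8, 6.0, 1.8)
print(f"The dedicated camera collects roughly {ratio:.0f}x more light.")
```

Under these assumed numbers the dedicated camera gathers on the order of 70x more light, which is why photon-starved conditions hit phone cameras first.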
Computational photography algorithms try to correct the following types of errors (a small sketch of the first two corrections follows the list):
- “Injective” errors. Errors where photons end up in the “wrong” place on the sensor, but they don’t necessarily clobber each other. E.g. if our lens causes the red light to end up slightly further out from the center than it should, we can correct for that by moving red light closer to the center in the processed photograph. Some fraction of chromatic aberration is like this, and we can remove a bit of chromatic error by re-shaping the sampled red, green, and blue images. Lenses also tend to have geometric distortions which warp the image towards the edges – we can un-warp them in software. Computational photography can actually help a fair bit here.
- “Informational” errors. Errors where we lose some information, but in a non-geometrically-complicated way. For example, lenses tend to exhibit vignetting effects, where the image is darker towards the edges of the lens. Computational photography can’t recover the information lost here, but it can help with basic touch-ups like brightening the darkened edges of the image.
- “Non-injective” errors. Errors where photons actually end up clobbering pixels they shouldn’t, such as coma. Computational photography can try to fight errors like this using processes like deconvolution, but it tends to not work very well.
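The sketch below is a minimal illustration of the first two categories, not the author's pipeline: an "injective" fix that radially re-scales the red channel to undo a bit of lateral chromatic aberration, and an "informational" fix that brightens vignetted edges. The distortion scale and the cos^4-style falloff are assumed toy models chosen for illustration.

```python
# Toy corrections for "injective" (chromatic aberration) and "informational"
# (vignetting) errors. The models and parameters are assumed, not calibrated.

import numpy as np
from scipy.ndimage import map_coordinates

def correct_lateral_ca(red_channel: np.ndarray, scale: float = 0.998) -> np.ndarray:
    """Radially re-scale the red channel about the image center.
    scale < 1 pulls red content back toward the center: photons are moved to
    where they 'should' have landed, nothing is destroyed (injective fix)."""
    h, w = red_channel.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    # Sample the source channel at radially scaled coordinates.
    src_y = cy + (yy - cy) / scale
    src_x = cx + (xx - cx) / scale
    return map_coordinates(red_channel, [src_y, src_x], order=1, mode="nearest")

def correct_vignetting(rgb: np.ndarray, strength: float = 0.4) -> np.ndarray:
    """Divide by an assumed radial falloff model. The edges are brightened,
    but the information (signal-to-noise) lost there cannot be recovered."""
    h, w = rgb.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    falloff = 1.0 - strength * (r / r.max()) ** 2   # assumed vignetting profile
    gain = 1.0 / falloff
    return np.clip(rgb * gain[..., None], 0.0, 1.0)

# Usage on a synthetic RGB image with values in [0, 1]:
rgb = np.random.rand(480, 640, 3)
rgb[..., 0] = correct_lateral_ca(rgb[..., 0])
rgb = correct_vignetting(rgb)
```

Note what the vignetting step does and does not do: the brightened edges look better, but the noise amplified along with them is exactly the informational loss the list describes. "Non-injective" errors like coma would need deconvolution, which is omitted here because, as the excerpt says, it tends not to work very well.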
When I blow up the "correct" example image on my display and then take a photo of it with my iPhone 11 Pro, the image comes out almost exactly as it appears on my display, with no apparent re-creation artifacts. Perhaps someone with an iPhone 14 Pro can try the same. Of course, a photo of a high-resolution display is not the same as a photo of an actual object seen through microscope optics, but does the iPhone know the difference well enough to warp one and not the other? Perhaps the iPhone image taken through the microscope covers fewer pixels than what the other camera the author tried captures?
A limit to computational photography is quite possible, but not soon, because this kind of photography has only recently become feasible. This task is just not for the current "calculators" with cameras :)