
Wednesday, February 22, 2023

PetaPixel article on limits of computational photography

Full article: https://petapixel.com/2023/02/04/the-limits-of-computational-photography/

Some excerpts below:

On the question of whether dedicated cameras are better than today's smartphone cameras, the author argues:
“yes, dedicated cameras have some significant advantages”. He goes on: “Primarily, the relevant metric is what I call ‘photographic bandwidth’ – the information-theoretic limit on the amount of optical data that can be absorbed by the camera under given photographic conditions (ambient light, exposure time, etc.).”

Cell phone cameras capture only a fraction of the photographic bandwidth that dedicated cameras do, mostly due to size constraints.
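
As a rough illustration of why size matters so much: light gathered per unit time scales with the area of the lens's entrance pupil. A tiny sketch, using assumed, illustrative pupil diameters rather than anything measured in the article:

```python
import math

def pupil_area_mm2(diameter_mm: float) -> float:
    """Light collected per unit time scales with the area of the entrance pupil."""
    return math.pi * (diameter_mm / 2.0) ** 2

# Assumed, illustrative entrance-pupil diameters (not figures from the article):
phone_pupil_mm = 4.0       # e.g. a ~7 mm focal length phone lens at roughly f/1.8
dedicated_pupil_mm = 28.0  # e.g. a 50 mm full-frame lens at f/1.8

ratio = pupil_area_mm2(dedicated_pupil_mm) / pupil_area_mm2(phone_pupil_mm)
print(f"The dedicated lens collects roughly {ratio:.0f}x more light per unit time")  # ~49x
```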
 
There are various factors that enable a dedicated camera to capture more information about the scene:
  • Objective Lens Diameter
  • Optical Path Quality
  • Pixel Size and Sensor Depth
Computational photography algorithms try to correct the following types of errors (rough code sketches of these corrections follow the list):
  • “Injective” errors. Errors where photons end up in the “wrong” place on the sensor, but they don’t necessarily clobber each other. E.g. if our lens causes the red light to end up slightly further out from the center than it should, we can correct for that by moving red light closer to the center in the processed photograph. Some fraction of chromatic aberration is like this, and we can remove a bit of chromatic error by re-shaping the sampled red, green, and blue images. Lenses also tend to have geometric distortions which warp the image towards the edges – we can un-warp them in software. Computational photography can actually help a fair bit here.
  • “Informational” errors. Errors where we lose some information, but in a non-geometrically-complicated way. For example, lenses tend to exhibit vignetting effects, where the image is darker towards the edges of the lens. Computational photography can’t recover the information lost here, but it can help with basic touch-ups like brightening the darkened edges of the image.
  • “Non-injective” errors. Errors where photons actually end up clobbering pixels they shouldn’t, such as coma. Computational photography can try to fight errors like this using processes like deconvolution, but it tends to not work very well.
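
A minimal sketch of the first two kinds of correction, using toy models assumed here (a single-coefficient radial distortion and a simple quadratic vignetting falloff) rather than any real camera's calibration data:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def undistort_radial(img, k1=-0.1):
    """'Injective' fix: un-warp simple barrel distortion by remapping pixels."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    xc, yc = x - w / 2, y - h / 2
    r2 = (xc**2 + yc**2) / (min(h, w) / 2) ** 2
    # sample each output pixel from where the distorted lens actually recorded it
    xs = xc * (1 + k1 * r2) + w / 2
    ys = yc * (1 + k1 * r2) + h / 2
    return map_coordinates(img, [ys, xs], order=1, mode="nearest")

def correct_vignetting(img, strength=0.4):
    """'Informational' fix: brighten darkened edges with an assumed radial falloff model."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - w / 2, y - h / 2) / np.hypot(w / 2, h / 2)  # 0 at center, 1 in corners
    gain = 1.0 / (1.0 - strength * r**2)  # toy model; real pipelines use calibrated gain maps
    return img * gain
```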
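
For the “non-injective” case, deconvolution can be sketched as a bare-bones Richardson–Lucy loop. This assumes the blur kernel (point-spread function) is known and shift-invariant – which is exactly the assumption that tends to break down in practice, and part of why, as the author notes, it often doesn't work very well:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, num_iter=30, eps=1e-12):
    """Iteratively estimate an un-blurred image from a blurred one, given a known PSF."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(num_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)  # where the current guess over/under-predicts
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```
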
The author then goes on to criticize the practice of imposing too strong a "prior" in computational photography algorithms – so strong that the camera might "just be guessing" what the image looks like, with very little real information about the scene.
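
That point can be made concrete with a toy Gaussian MAP estimate (purely illustrative, not the article's math): when the prior is weighted far more heavily than the measurement, the "estimate" barely reflects the scene at all.

```python
def map_estimate(measurement, meas_var, prior_mean, prior_var):
    """Gaussian MAP estimate: an inverse-variance-weighted blend of data and prior."""
    w_data, w_prior = 1.0 / meas_var, 1.0 / prior_var
    return (w_data * measurement + w_prior * prior_mean) / (w_data + w_prior)

# A noisy pixel reads 0.2, but the prior "expects" scenes like this to be bright (0.8).
print(map_estimate(0.2, meas_var=1.0, prior_mean=0.8, prior_var=1.0))   # 0.5   - balanced blend
print(map_estimate(0.2, meas_var=1.0, prior_mean=0.8, prior_var=0.01))  # ~0.79 - mostly the prior's guess
```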

2 comments:

  1. When I blow up the "correct" example image on my display and then take a photo of it using my iPhone 11 Pro, the image comes out almost exactly as it appears on my display, with no apparent re-creation artifacts. Perhaps someone with an iPhone 14 Pro can try the same. Of course, a photo of a high-res display image is not the same as a photo of an actual object seen through microscope optics, but does the iPhone know the difference well enough to warp one and not the other? Perhaps the iPhone image via the microscope has fewer pixels in it compared to what is seen with the other camera the author tried?

  2. Reaching the limit of computational photography is quite possible – but not soon, because such photography has only recently become possible. It's just that this task is not one for the current "calculators" with cameras :)

