
Sunday, October 27, 2013

Nokia Lumia 1020 vs Nikon D800

The Electrical Engineer blog compares the image quality of the Nokia Lumia 1020, with its 1.1 µm pixels, against the Nikon D800E DSLR with its 4.9 µm pixel pitch. As expected, the D800 is better, but in many cases the images are quite comparable - an amazing job on Nokia's side.
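
As a point of scale, here is a quick back-of-the-envelope sketch of what those two pixel pitches mean for per-pixel light gathering (the pitches are the ones quoted above):

```python
# Per-pixel light-gathering area ratio between the D800's 4.9 um pixels
# and the Lumia 1020's 1.1 um pixels (pitches as quoted above).
d800_pitch_um = 4.9
lumia_pitch_um = 1.1

area_ratio = (d800_pitch_um / lumia_pitch_um) ** 2
print(f"Each D800 pixel collects ~{area_ratio:.1f}x the light of a Lumia pixel")
# -> ~19.8x per pixel, which makes the comparability of the images all the
#    more remarkable
```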

22 comments:

  1. Nikon and Canon will quickly become irrelevant based on these images, considering that Nokia is not a dedicated camera company and is using a much smaller sensor and relatively cheap optics compared to the D800.

  2. That's not true. SLRs are much faster and offer a different field of view and zoom. The exposure time, not mentioned here, is much shorter on SLRs, so capturing moving objects would leave the Lumia far behind. Still, it's an amazing job by Nokia. This camera is a small piece of state-of-the-art tech.

  3. Very impressive. Not a fair comparison, since I would guess the shallower DoF on the D800 is the reason for the softness they find, but very impressive nonetheless.

  4. I have been thinking for a long time about subjective image quality and pixel size. I arrived at the following personal conclusions:
    1. Small pixels have more noise, so denoising and image processing have to be applied in a DSC.
    2. Large pixels have less noise, so less image processing is applied.
    3. Small details are very important for human image perception.
    4. Image processing techniques alter these details in very different ways.
    Conclusion: with large pixels and less image processing, you get a more realistic and more pleasant image for human eyes.

    Silver film didn't have this spatial image processing, so the small details are almost fully preserved; that is why it is so pleasant to our eyes.

    I have a simple method: look at the image with one eye; if you get a vivid 3D impression, then the image is good for human perception...

    -yang ni

    Replies
    1. I disagree strongly, sorry. I think the only issue is cost at this point. Making 1.1 µm pixels on a large sensor is much more costly than using 0.18 or 0.25 µm processes, but otherwise filling a large-area sensor with 1.1 µm pixels (BSI, etc.) will give better image quality than the same-size sensor with larger pixels (same optics, of course).
      This experiment has been done several times at camera-phone image-sensor sizes, mostly reported by Aptina at the IISW. The results are pretty clear, no pun intended.

  5. Eric, you talk about a constant sensor area, and I talk about a constant pixel count. If the sensor area is constant, of course a smaller pixel size and more pixels give better visual perception. But if you keep the pixel count constant, a larger pixel is of course better.

    -yang ni

    Replies
    1. Ha, OK, next time tell us your assumptions! I agree that for the same pixel count, larger pixels (with larger optics) always give better image quality, as long as you can get complete charge transfer at readout. I think this is pretty universally agreed upon!

  6. Eric, I fully agree that having more pixels in the same sensor area is better for perceived image quality, especially in good light. However, I still think that for low light, a smaller pixel count with a larger pixel size has the advantage of requiring lower-readout-speed electronics for the same frame rate, since fewer pixels have to be read out. That means lower-bandwidth electronics (e.g., a lower column-amplifier bandwidth), lowering the thermal noise. Please correct me if I am wrong.
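
    A rough sketch of that bandwidth argument (a toy model only: it assumes column-parallel readout, a settling bandwidth proportional to the row rate, and thermal noise growing as the square root of bandwidth):

    ```python
    import math

    # Toy model: same sensor area and frame rate, two pixel counts.
    # With column-parallel readout each column amplifier settles once per
    # row, so its required bandwidth scales with the row rate, and white
    # (thermal) noise rms scales as sqrt(bandwidth).
    FPS = 30

    def relative_thermal_noise(rows):
        row_rate = rows * FPS        # row conversions per second per column
        bandwidth = row_rate         # assumed settling bandwidth ~ row rate
        return math.sqrt(bandwidth)  # thermal noise rms ~ sqrt(B)

    many_small = relative_thermal_noise(rows=6000)  # more, smaller pixels
    few_large  = relative_thermal_noise(rows=3000)  # fewer, larger pixels
    print(f"noise ratio (small/large): {many_small / few_large:.2f}")
    # -> 1.41: halving the row count buys roughly sqrt(2) lower thermal
    #    noise, which is the advantage described in the comment above.
    ```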

    Replies
    1. I think that depends on the details of the noise. In the limit where the read noise is zero, small pixels can be binned with no penalty (in fact, this is the objective of the QIS; a small simulation sketch follows below). For non-zero read noise, 1/f noise gets worse at lower frequencies, while white noise from the amp improves. Readout electronics power is also important to consider, and it increases as you increase the pixel readout rate (although there is a trade here too, since digital integration means the ADC can be of lower resolution). With DIS and multi-bit QIS, we enter an interesting trade space that has not yet been well explored. These could turn out to be bad ideas, but so far that does not seem to be the case. It reminds me of the choice of on-chip ADC, which was and still is an interesting trade space. So, no single solution, and we will all have jobs for years to come!
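
      A small Monte Carlo sketch of that zero-read-noise limit (illustrative numbers; it compares one large pixel against four binned small pixels covering the same area):

      ```python
      import numpy as np

      # Bin 4 small pixels vs. one large pixel of the same total area.
      # Photon shot noise is Poisson; read noise is added once per read.
      rng = np.random.default_rng(0)
      n_trials = 200_000
      signal_e = 100  # photoelectrons collected over the large-pixel area

      def snr(total_signal_e, reads, read_noise_e):
          counts = rng.poisson(total_signal_e, n_trials).astype(float)
          # summing `reads` reads adds their read-noise variances
          counts += rng.normal(0, read_noise_e * np.sqrt(reads), n_trials)
          return counts.mean() / counts.std()

      for rn in (0.0, 2.0):
          big    = snr(signal_e, reads=1, read_noise_e=rn)
          binned = snr(signal_e, reads=4, read_noise_e=rn)
          print(f"read noise {rn} e-: big SNR {big:.2f}, binned SNR {binned:.2f}")
      # With zero read noise the two match (shot noise only); with non-zero
      # read noise the binned small pixels pay the read-noise penalty 4x.
      ```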

    2. Eric, why do you say that 1/f noise gets worse when you bin pixels with non-zero readout noise, please? I can understand that if you average successive samples from the same source, the 1/f noise will not improve the way thermal noise does, due to the correlation between the samples. But when you bin spatially distributed pixels, the 1/f noise contributions from the differently located pixels are not correlated, so the 1/f noise should improve. Am I right?

      -yang ni

    3. If I understand your question correctly, then perhaps this will help. Imagine that the light signal is zero and the read noise is non-zero. Now, if you start binning (adding) pixels (the same pixel over multiple frames, or adjacent pixels in the same frame), the noise gets worse. Somehow, though, I don't think this is what you are asking me.

    4. One more thing: in a small pixel the conversion gain can be increased, since it does not need to accommodate the large full well of a big pixel. This assumes the design rules are fine enough to allow that, i.e., the process is advanced enough and the fab is flexible enough and agrees to adapt the process.

      So, in theory, small pixels with a bigger conversion gain have lower noise and give similar noise performance to bigger pixels on a per-area basis (a numeric sketch follows the list below). That said, there are a number of limitations:

      1. The process is never fine enough, which limits how far the conversion gain can scale up.
      2. The fab is never flexible enough to agree to all the desired changes in the process.
      3. As pixels get smaller, RTN quickly becomes the dominant noise. While the RMS value of RTN might not be that big, it appears as salt-and-pepper noise in the image and is much more visible than its RMS value would suggest.
      4. Color crosstalk is much worse in small pixels. If this is not solved at the system or process level, one has to use a really bad CCM to get acceptable colors, and a bad CCM results in a huge SNR loss, in both low light and good light.
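
      The conversion-gain trade above, sketched with hypothetical numbers (it assumes a fixed downstream voltage noise and a fixed floating-diffusion voltage swing):

      ```python
      # Hypothetical numbers illustrating why higher conversion gain (CG)
      # lowers input-referred read noise, and what it costs in full well.
      V_NOISE_UV = 200.0      # downstream voltage noise (uV rms), assumed fixed
      V_SWING_UV = 800_000.0  # usable FD voltage swing (uV), assumed fixed

      for pixel, cg_uv_per_e in (("big pixel", 80.0), ("small pixel", 160.0)):
          read_noise_e = V_NOISE_UV / cg_uv_per_e  # input-referred noise
          full_well_e = V_SWING_UV / cg_uv_per_e   # swing-limited full well
          print(f"{pixel}: CG {cg_uv_per_e} uV/e- -> "
                f"read noise {read_noise_e:.2f} e-, full well {full_well_e:.0f} e-")
      # Doubling CG halves the input-referred noise, but also halves the
      # swing-limited full well -- acceptable for a small pixel, which could
      # not store a big pixel's full well anyway.
      ```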

  7. Hi Yang Ni, can you please say here whether 1/f noise improves with oversampling of the same source follower? I'm quite confused. I know that 1/f is correlated noise, but some papers claim oversampling still helps in reducing it. It would be nice if you, Eric, or somebody else could clarify this. Thanks!

  8. I would also like to get some comments on other aspects of image quality: detail and noise are very important, but as a user, nowadays I actually care more about dynamic range. Am I right in assuming that bigger pixels allow for better DR? I would guess so, because of the higher full-well capacity and the lower readout speed required.

  9. Not in reality.
    Here you can see how different sensors behave in terms of pixel size, FWC, readout noise and DR:
    http://www.sensorgen.info
    The Sony 36Mp sensor in the D800 has 14 stops of DR, compared to 11 stops for Canon's flagship 1DX with 18Mp.
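
    Those stop counts follow directly from full well and read noise; a quick check (the numbers below are illustrative, of the same order as the sensorgen.info listings, not exact):

    ```python
    import math

    # DR in stops = log2(full well / read noise). Illustrative numbers,
    # roughly of the order listed on sensorgen.info.
    sensors = {
        "Nikon D800 (Sony 36Mp)": (45_000, 2.7),   # (FWC e-, read noise e-)
        "Canon 1DX (18Mp)":       (90_000, 38.0),
    }
    for name, (fwc, rn) in sensors.items():
        print(f"{name}: {math.log2(fwc / rn):.1f} stops")
    # -> ~14 vs ~11 stops: the D800's advantage comes from very low read
    #    noise, not from bigger pixels (its pixels are actually smaller).
    ```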

  10. Sorry for my confusion! My statement is that if you average spatially sampled pixels, you should get an improvement according to the square-root law. Eric said that 1/f noise gets worse when we apply a binning operation among small pixels. I don't understand why 1/f becomes worse after the binning operation.

    When you average the signal from the same pixel, the 1/f noise will be improved according to the square-root law. But if you average signals from different pixels, then the 1/f noise should also be reduced as dictated by the square-root law (see the toy model below). Am I clearer now? Thanks!

    -yang ni
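
    One way to see the distinction numerically (a toy model: the slow 1/f component is represented as a noise term shared across repeated reads of the same source follower, while different pixels get independent draws):

    ```python
    import numpy as np

    # Average N samples whose noise = slow component c + fast component e.
    # Same source follower: c is shared by all N samples (correlated 1/f).
    # Different pixels: each sample gets its own independent c.
    rng = np.random.default_rng(1)
    n_avg, n_trials = 16, 100_000
    sigma_c = sigma_e = 1.0

    e = rng.normal(0, sigma_e, (n_trials, n_avg))

    c_shared = rng.normal(0, sigma_c, (n_trials, 1))    # same SF, oversampled
    same_sf = (c_shared + e).mean(axis=1).std()

    c_indep = rng.normal(0, sigma_c, (n_trials, n_avg))  # 16 separate pixels
    diff_px = (c_indep + e).mean(axis=1).std()

    print(f"oversampling one SF: residual noise {same_sf:.2f}")  # floors near sigma_c
    print(f"binning 16 pixels:   residual noise {diff_px:.2f}")  # ~ sigma/sqrt(8)
    # The shared (correlated) component does not average down; independent
    # components fall as 1/sqrt(N), consistent with the claim that spatial
    # binning should still reduce uncorrelated 1/f noise.
    ```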

    Replies
    1. "When you average the signal from a same pixel, 1/f will be improved according to sqaure roor law. But if you average signals from different pixels, then 1/f noise should be reduced as dictated by square root law."

      You mean that in both cases 1/f noise is improved as square root law? Not sure it improves like sqrt(oversampling) if you oversample the same source follower!

    2. Averaging the same pixel implies "leaving the shutter open longer". That is the common and popular (traditional) method of getting more light onto your plate. The downside is that the sensor must remain motionless while the shutter is open.

      A quick layperson's explanation might be to call that "HDR".

      Averaging different pixels implies (though you may mean another technique) binning (and the AA you are subjected to). That is a 'poor man's method' of getting more light onto your plate: you trade resolution for noise. The downside is the need for a larger sensor or a lower-resolution sensor.

      A quick layperson's explanation might be to call that "sharpening and downscaling".

      One of those two methods does sound better than the other, but in certain circumstances it is indeed the other method that is preferred. That can lead to problems, since you (the "end user", not the "engineer") can seldom choose your method (Nokia's 41MP camera with its "zoom" being an exception).

      There is no reason you cannot do both: have larger pixels AND also do binning.

      We have the first method, which (IMO) produces better results, but no one wants to pay for bigger pixels (in a lower-cost device), or to do 'stop-motion videos' where it is the videographer who must hold still for each frame (instead of the model subject).

      The second method fails (especially for video) due to AA, and the "averaging", as it is being called, is actually an average of errors (both in value and in physical location). Thus the AA filter and optics resize the image, and the pixels are not 1:1 but an average of an estimate (not the true image or color).

      It will always be best (I think) to have the shutter open for the shortest period of time, onto the 'biggest buckets', and to let the 'bucket depth' give the dynamic range while the 'bucket diameter' provides the ability to catch each photon (and thus determine its value).

      I call that "getting it right the first time": that is the time to capture the image, and that is how you want it captured. For the cost-conscious videographer, that implies using a camcorder with an image sensor of slightly higher resolution (for oIS) than that of the videos they wish to take.

  11. What I meant is, it depends on the whole chain: the pixel, QE, FWC, readout noise, the analog pathway to digitization, column-wise ADCs at the sensor, etc.

    Replies
    1. Yes, I get that. My question was: with everything else constant, do you get significantly improved DR with bigger pixels?

      My guess is that the read noise remains constant, but the full-well capacity usually falls linearly with the number of pixels (i.e., it grows with the square of the pixel pitch). So: half the pixels, one stop better DR, potentially.
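
      That arithmetic checks out under those assumptions (constant sensor area, constant read noise, full well proportional to pixel area); a minimal sketch with assumed numbers:

      ```python
      import math

      # Constant sensor area and read noise; full well scales with pixel
      # area. Halving the pixel count doubles the per-pixel area and FWC.
      READ_NOISE_E = 3.0
      FWC_PER_UM2 = 2_000   # assumed full-well density, e- per um^2

      for pitch_um in (2.0, 2.0 * math.sqrt(2)):  # 2x area = half the pixels
          fwc = FWC_PER_UM2 * pitch_um ** 2
          print(f"pitch {pitch_um:.2f} um: FWC {fwc:,.0f} e-, "
                f"DR {math.log2(fwc / READ_NOISE_E):.1f} stops")
      # -> the two results are exactly one stop apart: half the pixels,
      #    one stop more DR, provided the read noise really stays constant.
      ```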

  12. It proves the Lumia is a much better notepad than the D800.

  13. If the noise remains constant? Well, take a look at this: http://www.dpreview.com/forums/post/52400156
    Bobn2, Eric Fossum and DSP have a different view.
    My view is that smaller pixels give lower noise - talking about noise, not signal.


All comments are moderated to avoid spam and personal attacks.