Saturday, December 19, 2015

Image Sensors in 13 min

Filmmaker IQ publishes a short video lecture on image sensors. While not perfectly accurate in places, it's amazing how much info one can squeeze into a 13-minute video:

15 comments:

  1. Not bad, but it is a shame: a few edits to the script and graphics could have made it much more technically accurate.

    It is a common misconception that the modern CMOS image sensor is the same as the one proposed by Noble, just with a better CMOS technology. In fact, the modern CMOS image sensor uses the best of Weckler's and Noble's MOS-era ideas, the ideas that make a CCD work well, and some new ones: microlenses, CFAs, backside illumination, trench isolation, pinned photodiodes, intra-pixel (complete) charge transfer, in-pixel charge amplification and shared readout, correlated double sampling, other on-chip analog signal processing and ADCs, digital signal processing, and most recently 3D integration (stacking).

    Replies
    1. +1
      CCD sensors will mostly not improve much in the future, whereas CMOS sensors are just at the beginning, open to an infinity of new on-chip signal processing, analog and digital...

    2. I fully agree with Eric's statement. And going through his list of new technologies: microlenses (already available with CCDs), CFAs (already available with CCDs), backside illumination (already available with CCDs), trench isolation (already available with CCDs), pinned photodiodes (already available with CCDs), intra-pixel charge transfer (already available with CCDs), in-pixel charge amplification (already available with CCDs such as the EMCCD), correlated double sampling (already available with CCDs). Conclusion for me and for you (whoever you are): it is high time to come up with something new.

    3. @Albert, I think random pixel access/ROI and ADCs are the only novelties of CMOS compared to CCDs?

    4. Right, there are of course a few more pluses for CMOS (power, integration, speed, ROI, ...), but I was surprised to read how many technologies invented for CCDs are still in use in today's CMOS devices.

    5. The invention of the CMOS active pixel image sensor with INTRA-PIXEL CHARGE TRANSFER and camera-on-a-chip was motivated by charge-transfer efficiency (CTE) issues with CCDs in a space radiation environment. We had to preserve the best qualities of CCDs but get rid of the thousands of charge transfers. So, transfer the charge inside the pixel only, and put the output amplifier (the Kosonocky floating diffusion amplifier) inside the pixel. Then we get all the early advantages of 3T active pixel devices in a CMOS platform, plus all the performance advantages of CCDs. That was the main idea. It seemed so straightforward that, after having the idea, I was surprised no one had done it before. (In fact, I checked with Walter Kosonocky and Tom Lee just to be sure!) And if it wasn't for the nudge from Sabrina Kemeny, I may not have bothered to file a patent on it.

    6. I have probably mentioned this a few times in the past, but it is worth repeating. Pretty much the entire image sensor industry thought this was a bad idea. Notable exceptions included Walter Kosonocky, Tom Lee (with whom we subsequently did the first PPD device at Kodak), Junichi Nakamura and Gene Weckler. Savvas Chamberlain (Plessey, DALSA) and I debated this approach on stage at the 1993 CCD workshop in Waterloo. He told me in front of the audience that this was a stupid idea and asked me why I was wasting my time on it. It was a long uphill battle of many years to get this technology accepted. While VVL and Omnivision soon adopted our approach, it was not really until Toshiba launched a product that the technology was taken seriously by the big players. The invention took just a minute, but the technology adoption took a decade. BTW, on-chip ADC was also thought to be a stupid idea, particularly by Japanese image sensor companies. I visited these companies to understand why, but the only (sort of) rational response I received was that the camera customer could design the rest of the signal chain, including the ADC, and could thus add their own value to the camera design.

  2. Please could someone explain the significance of V/lux-sec in the context of low-light image sensors? I know manufacturers use dubious calculations to come up with ever more impressive lux ratings for minimum illumination, but I have been told that responsivity, given in V/lux-sec, is a more accurate way to determine a sensor's ability to 'see' in low light. There was an article on this site some time ago about a Chinese company called Brigates, who make 1/2'' CMOS and MCCD sensors with responsivity ratings from 13 V/lux-sec to 40 V/lux-sec, and lux ratings of 0.00008.

    Replies
    1. Lux does not make sense for CMOS image sensors unless an IR cut filter is applied to the sensor. Use DN instead of V, as you can't access the pixel's voltage anyway, and use J or W*sec for the incident light energy. You can also quote a very good marketing value of 0 lux for any sensor, independently of its performance!

      V/lux-sec and DN/J are units of responsivity. What you would like to look at for sensitivity is a value representing light, i.e. W or J.

      Read the EMVA1288 standard.
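
      As a quick illustration, here is a sketch that translates a responsivity spec like the Brigates numbers above from V/lux-sec into electrons per lux-sec; the conversion gain is an assumed, purely illustrative value (Python):

        # Sketch: convert a responsivity spec from V/(lux*s) to e-/(lux*s).
        # Both numbers are assumptions for illustration, not real device data.
        responsivity_v = 13.0      # responsivity [V/(lux*s)], the spec quoted above
        conversion_gain = 50e-6    # conversion gain [V/e-] (assumed)

        responsivity_e = responsivity_v / conversion_gain
        print(f"{responsivity_e:.3g} e-/(lux*s)")  # ~2.6e+05 e-/(lux*s)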

    2. With on-chip ADC you can get DNs, without on-chip ADC you can get Vs, but both are very much misleading. What you need is electrons! To measure and compare devices and technologies with each other, there is only one correct unit: ELECTRONS. Electrons are the product of what an image sensor creates under the influence of light. DNs and Vs are as misleading as lux is.

    3. Electrons are definitely what real image sensor technologists care about. But the next thing that happens after the electrons are collected is conversion to a voltage. Furthermore, since over the visible range of light one photon generates one electron (or zero), and no more than one, voltage per electron, or voltage per lux-sec, makes sense. Voltage per Joule (or Watt) is good for some light detection devices, but not a useful measure for visible light sensors. The voltage often refers to the node after the source follower, following the tradition of CCDs, where that WAS the output voltage of the chip. For digital image sensors, DN or LSBs per lux-sec makes sense, but you have to specify any voltage gain.
      Anyway, for the old dudes (and dudettes): we were calibrated in terms of V/lux-sec, so it is still useful.
      Using a visible light metric for invisible light is dumb, but when it comes to the number on the camera box, bigger is usually better.
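
      For a back-of-the-envelope feel of where a V/lux-sec number comes from, here is a sketch assuming monochromatic 555 nm light (where 1 W = 683 lm by definition) and made-up pixel parameters (Python):

        # Back-of-the-envelope responsivity in V/(lux*s) from pixel parameters.
        # All device numbers below are assumptions for illustration only.
        h, c = 6.626e-34, 3.0e8    # Planck constant [J*s], speed of light [m/s]
        wavelength = 555e-9        # [m]; at 555 nm, 1 W = 683 lm by definition
        photon_energy = h * c / wavelength                    # ~3.6e-19 J

        # 1 lux = 1 lm/m^2 -> (1/683) W/m^2 at 555 nm -> photon flux
        photons_per_lux_s_m2 = (1.0 / 683.0) / photon_energy  # ~4.1e15 photons/(s*m^2)

        pixel_area = (3e-6) ** 2   # 3 um pixel pitch (assumed)
        qe = 0.6                   # quantum efficiency (assumed)
        conv_gain = 50e-6          # conversion gain [V/e-] (assumed)

        electrons_per_lux_s = photons_per_lux_s_m2 * pixel_area * qe
        print(f"{electrons_per_lux_s * conv_gain:.2f} V/(lux*s)")  # ~1.10 V/(lux*s)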

    4. EMVA1288 has:
      - QE*FF in %
      - K (conversion gain) in DN/electron
      - saturation capacity in electrons
      - SNR as a curve
      - sensitivity in photons
      - responsivity in DN/photon
      - DR in dB
      - DSNU in DN
      - PRNU in % of signal

      Any other unit can be calculated from this.
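
      For instance, here is a sketch deriving a few of the usual figures of merit from those base parameters, with all input values invented for illustration (Python):

        import math

        # Sketch: derive common figures from EMVA1288-style base parameters.
        # All input values are invented for illustration.
        qe_ff = 0.55              # QE*FF, as a fraction
        k = 0.04                  # conversion gain K [DN/e-]
        sat_capacity = 10000.0    # saturation capacity [e-]
        dark_noise = 8.0          # temporal dark noise [e-]

        responsivity = qe_ff * k  # [DN/photon]
        dr_db = 20 * math.log10(sat_capacity / dark_noise)     # DR (one common definition)
        snr_max_db = 20 * math.log10(math.sqrt(sat_capacity))  # shot-noise limit

        print(f"responsivity: {responsivity:.3f} DN/photon")   # 0.022
        print(f"DR: {dr_db:.1f} dB")                           # ~61.9 dB
        print(f"max SNR: {snr_max_db:.1f} dB")                 # 40.0 dB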

    5. If the conversion gain is known, you can recalculate any parameter expressed in DN back to the charge domain, and that is the right way to do it. Comparing technologies with each other needs to be done in the charge domain. Without knowing the conversion gain, expressing the parameters in DN is misleading and confusing.
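
      In code form the recalculation is trivial; both numbers below are assumed for illustration (Python):

        # Sketch: map a signal measured in DN back to the charge domain.
        signal_dn = 1200.0    # measured signal [DN] (assumed)
        k = 0.04              # overall conversion gain [DN/e-], must be known (assumed)

        signal_e = signal_dn / k
        print(f"{signal_e:.0f} e-")  # 30000 e-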

    6. This comment has been removed by the author.

    7. @AD. I first read your post as "any other parameter..." but you say unit, so I guess you mean conversion from DN to e- or vice versa. I think the µV/e- of the first stage conversion gain is an important parameter, among many others, and it is lost in your list. I guess it depends on whether you are a camera designer (in which case you may not care) or an image sensor technologist.

