
Tuesday, November 17, 2020

Holy Grail Claim: Perfect RGB‐IR Color Routers Instead of Color Filters

The Wiley journal Advanced Photonics Research publishes the paper "Perfect RGB‐IR Color Routers for Sub‐Wavelength Size CMOS Image Sensor Pixels" by Nathan Zhao, Peter B. Catrysse, and Shanhui Fan of Stanford University.

"A critical capability of all image sensors is to separate light into its individual color components. In most technologies today, this is done via color filters. Filters, however, intrinsically waste a large fraction of the light by absorption or scattering. This affects image sensor performance since the amount of light incident on each image sensor pixels reduces quadratically with linear scaling of pixel size. This is particularly detrimental to the performance of (sub‐)wavelength size pixels. In this paper, we provide a conceptually novel approach for color functionality in image sensors, by designing a color router that achieves perfect RGB‐IR color routing for sub‐wavelength size pixels. In a color router, all incident light for each color channel is routed directly and without loss to the photodetector of the corresponding color channel pixel. We show that color routers can be designed to near‐perfectly match a prescribed spectral shape, which is important for color image processing. We further show that we can design these routers to achieve specific spectral bandwidth and to meet angular as well as fabrication constraints. This article is protected by copyright."

20 comments:

  1. Any obvious downsides to this?

    ReplyDelete
    Replies
    1. It only works with small pixels and can't be extended to the large ones. At least, not easily extended.

      However, the sensitivity boost for small pixels is huge. This can ignite another round of pixel size race.

      Delete
    2. This is a great advance for photon counting sensors, but is likely to be only slightly more sensitive than RGBW on a CIS in a non-IR, visible color-only setting.

      This is because luminance at each pixel in the proposed design will be contaminated by the read noise from 3 photosites whereas the corresponding W pixel in RGBW will suffer read noise from only 1 photosite. Only half the pixels in RGBW, on the other hand, are W and the demosaicker injects a small fraction of the RGB pixel noise into luminance.

      The low light SNR advantage over RGBW is likely to be 1.5dB on CIS and 3dB on QIS/other photon counting sensors.

      Delete
    3. So, using the old SNR10 figure of merit, what would be the difference in lux? 2x?

      Delete
    4. Yes, 2x for photon counting sensors and sqrt(2) ≈ 1.4x for CIS (see the conversion sketch after this thread).

      Delete
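
The dB and lux numbers in the thread above are consistent if the SNR advantage is read as 20·log10 of the SNR ratio and the sensor is shot-noise limited, where SNR grows as the square root of the photon count, so the equivalent lux ratio is the square of the SNR ratio. A minimal sketch of that conversion, using the dB values claimed above (the 20·log10 convention is an assumption, not stated by the commenters):

```python
def lux_ratio_from_snr_db(snr_advantage_db):
    # Shot-noise-limited regime: SNR ~ sqrt(photons), so an SNR ratio r
    # corresponds to a photon-count (i.e. lux) ratio of r squared.
    snr_ratio = 10 ** (snr_advantage_db / 20)
    return snr_ratio ** 2

for label, db in (("CIS", 1.5), ("QIS / photon counting", 3.0)):
    print(f"{label}: {db} dB SNR advantage -> about {lux_ratio_from_snr_db(db):.2f}x less light "
          "for the same SNR10")
```
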
  2. It seems all figure captions are located above the corresponding figures, instead of below. Therefore, the posted figures and captions are mixed up. For example, the caption for Figure 2 appears with Figure 1, but the correct caption for Figure 1 is not posted.

    ReplyDelete
  3. Let me ask a question: is this result a simulation? Also, when actually fabricating a color router, down to what pixel size can it be processed?

    ReplyDelete
  4. I can't see the whole paper so maybe this question is unnecessary but here it is. The illustrations show a one-dimensional effect. Is this really two-dimensional so the routing can only be done into lines of same-colored pixels or is it really one-dimensional so each line of pixels can have a different routing matrix? In fact, since only one set of four pixels is shown, maybe it is really zero-dimensional and it only works in isolation for a single set of four pixels and another set needs to be far away. Maybe the authors can comment.

    ReplyDelete
    Replies
    1. The paper is open access; you should be able to get a PDF of it:

      https://onlinelibrary.wiley.com/doi/pdf/10.1002/adpr.202000048

      Delete
    2. Thanks, I looked. It appears they don't discuss anything beyond a row of four pixels. Seems to me it would make more sense to build these in a 2x2 block. The distance the wavefronts have to span would be shorter and the blocks could be repeated and optically isolated.

      Delete
    3. With these being sub-wavelength sized, a linear cell that is smaller than the diffraction-limited point spread function is fine (a rough Airy-disk calculation follows this thread).

      Delete
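
To put "smaller than the diffraction-limited point spread function" in rough numbers, a minimal sketch using the standard first-null Airy-disk diameter 2.44·λ·N; the wavelength and f-numbers are assumed example values, not taken from the paper:

```python
wavelength_um = 0.55           # green light, assumed example value
for f_number in (2.8, 4.0):    # assumed example apertures
    airy_um = 2.44 * wavelength_um * f_number  # first-null Airy-disk diameter
    print(f"f/{f_number}: Airy diameter ~ {airy_um:.1f} um, "
          f"versus a sub-wavelength pixel pitch of < {wavelength_um} um")
```
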
  5. There is no free lunch (except maybe for the Foveon approach): it does not work for low f/#.
    "We find that the response remains above 70% and 60% of normal incidence performance for lens apertures down to f/4.0 and f/2.8, respectively."

    In general, it is hard to make diffractive elements perform over large ranges of wavelength and angle (see the angle sketch after this comment).

    ReplyDelete
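
For a sense of the angular range those apertures imply, a minimal sketch converting f-number to the marginal-ray half-angle with the thin-lens approximation tan θ = 1/(2N); the list of f-numbers is an assumed example set around the values quoted above:

```python
import math

def half_cone_angle_deg(f_number):
    # Marginal-ray half-angle of the light cone reaching the sensor (thin-lens approximation).
    return math.degrees(math.atan(1.0 / (2.0 * f_number)))

for n in (5.6, 4.0, 2.8, 2.0):
    print(f"f/{n}: half-angle ~ {half_cone_angle_deg(n):.1f} deg")
```
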
  6. Please correct me if I am wrong, but wouldn't the scene information get all mixed up with this mechanism? The diagram clearly shows that light intended for one pixel is sent to a pixel two pixels over. I see this posing MTF challenges, and AF performance might be adversely affected too.

    ReplyDelete
  7. The example shown in the diagram uses normally incident light on the pixels; I wonder what the crosstalk performance will be for light incident at oblique angles?

    ReplyDelete
  8. What happens to the photons that are not captured? If they are mostly absorbed it's one thing, but if they are reflected back out it can be a major issue. Even a small percentage of reflection off the sensor can result in disastrous flare in high dynamic range scenes.

    ReplyDelete
  9. Is there any image sensor that detects pests and insects in agricultural fields?

    ReplyDelete
  10. The way they did it may work for color separation, but the structure itself is very complicated to achieve in any current process. Another issue is that image sensor uniformity nowadays requires less than 0.5% error; I cannot imagine how they can reach that number using this method.

    ReplyDelete
    Replies
    1. Albert Theuwissen - Harvest Imaging, November 26, 2020 at 9:08 AM

      Maybe all your arguments are correct, but you would be surprised to see the very first attempts made with on-chip colour filters. The results were also very bad in comparison to what we have these days. Over time this technology can also improve further, hopefully ;-)

      Delete
  11. Does it violate conservation of étendue?

    ReplyDelete

All comments are moderated to avoid spam and personal attacks.