
Saturday, October 31, 2015

New CFA Said to Give Better Color Accuracy and Low Light Sensitivity

EETimes, Phys.org: University of Utah Electrical and Computer Engineering Associate Professor Rajesh Menon has developed a new camera color filter that is said to let in three times more light than a Bayer CFA. The new approach is described in an open-access paper in OSA Optica Vol. 2, Issue 11, pp. 933-939 (2015): "Ultra-high-sensitivity color imaging via a transparent diffractive-filter array and computational optics" by Peng Wang and Rajesh Menon.

“If you think about it, this [Bayer CFA] is a very inefficient way to get color because you’re absorbing two thirds of the light coming in,” Menon says. “But this is how it’s been done since the 1970s. So for the last 40 years, not much has changed in this technology.”

Menon’s solution is a color filter that lets all of the light pass through to the camera sensor, implemented with a combination of hardware and software. The hardware is a glass wafer with precisely designed microscopic ridges etched on one side; these bend the light in specific ways as it passes through, creating a series of color patterns, or codes. Software then reads the codes to determine what colors they represent.

Instead of just three colors, the new filter produces at least 25 codes, or colors, that reach the camera’s sensor, yielding photos that are much more accurate in color and show almost no digital grain.

“You get a lot more color information than a normal color camera. With a normal camera, you only see red, green or blue. We can do 25 or more,” Menon says. “It’s not only better under low-light conditions, but it’s a more accurate representation of color.” Ultimately, the new filter can also be cheaper to implement in manufacturing because it is simpler to build than current filters, which require three steps to produce a filter that reads red, blue and green light, Menon says.


Talking about small pixel results, the paper says: "As anticipated, the DFA, together with the regularization algorithm, works well for the 1.67μm sensor pixel except at the boundaries of abrupt color change, where crosstalk smears color accuracy. Scalar diffraction calculation estimates the lateral spread of the crosstalk (or spatial resolution) to be ∼13μm. This is approximately three image pixels in our configuration, since one DFA unit cell is 5μm×5μm.

However, in the areas of uniform color (areas #4 and 5), our reconstructions demonstrate negligible distortion and noise. The absolute error between reconstruction and true images averaged over the entire image space is well below 5%. For this object of 404×404 image pixels, it takes roughly 30s to complete reconstruction by regularization without implementing any parallel computation techniques on a Lenovo W540 laptop (Intel i7-4700MQ CPU at 2.40 GHz and 16.0 GB RAM) for simplicity."

Menon has since created a company, Lumos Imaging, to commercialize the new filter for use in smartphones and is now negotiating with several large electronics and camera companies to bring this technology to market. He says the first commercial products that use this filter could be out in three years.

13 comments:

  1. As the paper says, all the measurements are done for normally incident light. I wonder how well this works for low F-number optics? My guess is that it becomes difficult to reproduce a quality image. Also, there must be a lot of computation if it takes 30 seconds to reconstruct one image on a laptop. In that case, I would expect that under low light, SNR would deteriorate quite a bit, not unlike using a complementary CFA, and probably much worse than that. I think noise propagation in computational imaging is not something that most of the computer vision guys think about, and I don't think Menon has thought this through either. Still, there are probably some good niche applications of this interesting and clever technology.

  2. With the 30-degree CRAs common in mobile cameras, you would need a spatially varying reconstruction algorithm - not in itself that big of a deal, but I agree with Eric's comment on noise propagation in computational imaging. Noise propagation and information theory in general are commonly underappreciated until people go all the way to building hardware and then try to figure out why they aren't getting the image quality or resolution gains they expected - e.g., light field cameras, non-traditional Bayer CFA patterns, Foveon-like sensors, etc.

    Replies
    1. "you would need a spatially varying reconstruction algorithm - not in itself that big of a deal"
      I am hardly an expert on this, but I think it is more complicated than that. Just consider all the wavefront diffractions that will contribute to the center-pixel signal at a low F-number compared to normal incidence. The contributions will also depend on object distance. Perhaps I am totally wrong about this and it is easy... but right now I think it will be extra complicated, or maybe even intractable, if good image quality is the goal.

  3. «... a New Color Filter that is said to let in 3x (three times) more light than Bayer CFA... »
    Wow!... their QE will climb up to 250%?... or are they referring to old Bayer CFAs with very selective and opaque filters?...

    Replies
    1. I think nearly all photons reach the silicon using the diffractive pattern, hence 3x.

    2. I agree with Eric on QE.
      I am not sure noise propagation is the most important issue at this point, and the long 30 s reconstruction certainly adds further error accumulation.
      The next critical issue is the diffractive approach to filter arrays itself. The unavoidable higher diffraction orders of the DFA (1st, 2nd, etc.) mix into the 0th order and create cross-talk. As the authors mention, "the crosstalk between spatial pixels also increases with increasing d." The minimal d they used, 300 μm, is perhaps a hundred times larger than in a real CIS. So one can expect huge cross-talk in a typical image sensor with a DFA compared to a Bayer CFA, in both its optical and spectral components.
      Then oblique rays, e.g. at a CRA of ~30 deg., add further PRNU and polarization dependence. All of these optical errors are expected to be too large for practical use.

    3. I think signal losses from mosaic filters are often overstated. The factor of 3 may be seen in some cases (when looking at a nicely broadband gray patch), but in a practical coloured world much of the scene has limited spectral content that does not get reduced 3x when it goes through a colour filter - any given coloured patch did not have the broadband out-of-band content to begin with; it was already absorbed by the surface, which reflects only a narrow band of the illuminant.

      I very much share Eric's concern about noise propagation through the computation chain mixing uncorrelated noise from multiple pixels. I suspect that for low light scenes (which are arguably the only ones where improvements in light transmission are of practical interest), there may not be enough SNR to even reconstruct a coherent spatial distribution of intensities representing the scene. Mosaic capture may cost us some sensitivity due to cutting off the out-of-band signal energy, but at least we get the remaining signal recorded in a spatially-coherent pixel grid, so we have something to work on directly. You can filter that, or whatever.

      BTW, does anyone have a quick rule for uncorrelated noise mixing in the case of multiplication or division of random variables? Most of the time we only consider addition of variables and end up with a sqrt(2) increase in rms. How does that work out when we perform multiplication of random variables?
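      The rule of thumb I half-remember is that, to first order, the relative standard deviations add in quadrature for products and quotients of independent variables (just as absolute sigmas add in quadrature for sums), but I would be glad to be corrected. A quick Monte Carlo sanity check of that rule (toy numbers, assuming small relative noise):

      ```python
      import numpy as np

      # Numerical check: for independent variables with small relative noise, the
      # RELATIVE standard deviation of a product is ~ the quadrature sum of the
      # inputs' relative standard deviations (same for a quotient).
      rng = np.random.default_rng(1)
      n = 1_000_000

      mu_x, sig_x = 100.0, 3.0   # arbitrary toy values
      mu_y, sig_y = 50.0, 2.0

      x = rng.normal(mu_x, sig_x, n)
      y = rng.normal(mu_y, sig_y, n)

      prod = x * y
      measured_rel = prod.std() / prod.mean()
      predicted_rel = np.sqrt((sig_x / mu_x) ** 2 + (sig_y / mu_y) ** 2)

      print(f"measured  relative sigma of x*y: {measured_rel:.5f}")
      print(f"predicted relative sigma of x*y: {predicted_rel:.5f}")
      ```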

      I wonder how motion blur represents itself in this sort of a signal chain.

    4. I can't agree with your conclusion. I do agree that most of the time an RGBG kernel is looking at the same spectral scene. In the best scenario (I think), it is green, so ~100% of the photons go through two of the 4 elements in the kernel and ~0% go through the other 2, so photon utilization is only 50%. If the scene is white, then photon utilization is about ~33% across the kernel. Red or blue, ~25%. Mixed colors that are not white will, I think, fare worse than 50%, but maybe there is a combination that does better. Thus, I would stick with a rough estimate that about 2/3 of the photons incident on the sensor are lost in the CFA. True, it is a totally seat-of-the-pants argument, so I would welcome further analysis that reveals a better number.
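      A quick toy check of those per-scene numbers, assuming ideal filters (100% transmission in their own band, 0% elsewhere) and scene photons split equally among whatever bands the scene contains:

      ```python
      # Back-of-the-envelope photon utilization of an ideal RGGB Bayer kernel.
      kernel = ("R", "G", "G", "B")   # one Bayer super-pixel

      def utilization(scene_bands):
          """Fraction of photons incident on the 4-pixel kernel that reach the silicon."""
          per_band = 1.0 / len(scene_bands)   # share of scene photons in each band
          per_pixel = 1.0 / len(kernel)       # share of kernel light landing on each pixel
          return sum(per_pixel * per_band for pixel in kernel if pixel in scene_bands)

      print("pure green scene:", utilization(("G",)))           # 0.50
      print("white scene     :", utilization(("R", "G", "B")))  # ~0.33
      print("pure red scene  :", utilization(("R",)))           # 0.25
      ```

      Real filters overlap and transmit less than 100% in-band, so these are only bounding cases, but they bracket the rough "about 2/3 lost" figure above.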

    5. Roughly, it is right that 1/3 of the photons go to each color channel in the ideal case.
      But in a real CFA the transmission is limited to 80-90%, so the maximum photon utilization is 60-70% for the RGB pixels. Moreover, even with a good CFA, the B and R spectral components at a G pixel, the G and R components at a B pixel, and the B and G components at an R pixel are all lost. As I see it, the DFA is intended precisely to save those lost components by redirecting them to neighboring pixels. It is a good idea! Ideally it would give 100% utilization of photons. But not for a CMOS image sensor, because the diffraction orders and the distance d are very critical. If one nevertheless wants to put the benefit of the DFA in numbers, one should compare the three integrals under the QE curves of the RGB pixels to the integral, under a white scene, for pixels without a CFA. With the DFA, allegedly all photons are employed because they are diffracted, not absorbed. But the diffracted rays behind the DFA overlap: the higher spectral orders overlap with the 0th order, and this is inevitable. The smaller the distance d, the more severe the overlap.

  4. I too question the near-term practicality of this approach, but I lack the 3 or more doctorates necessary to formulate a better question (or theory), or to understand the answer (were one forthcoming).

    There seems to be so much error and complexity designed into this approach that I can only wonder how many other methods would be better, in terms of both the result and the time needed to obtain this seemingly questionable result.

    An alternative approach using thin-film interference filter technology is shown here: http://www.photonics.com/m/Article.aspx?AID=57179 .

    A method using one or two Prisms and a pair of precomputed masks might produce a result such as this: http://www.cs.dartmouth.edu/~wjarosz/publications/hostettler15dispersion.html (that seems demonstrative of what results the DFA might produce).

    I propose a method where the 'Blue' (and UV) and the 'Red' (and IR) are frequency-shifted to 'Green'.

    The human eye is 'RGBI' (Red = 559 nm, Green = 531 nm, Blue = 419 nm, and 'I' (night vision, rods) = 496 nm), but we also need to consider the gamut of the display device. Note: the eye's pigment wavelengths: http://thebrain.mcgill.ca/flash/a/a_02/a_02_m/a_02_m_vis/a_02_m_vis.html

    Were we to print the image we would want to allow for the Inks used ( http://m.epson.com/cgi-bin/Store/Landing/UltraChromeK3.jsp ).


    The method I suggest might be described as an "Al NP array" (derived from http://onlinelibrary.wiley.com/doi/10.1002/adom.201400325/abstract ).

    By having a triangular (or 4-pixel) array we can accept 'Green' as a central (unaltered) portion of the spectrum and use a very wide-bandwidth filter to clip the extremes: a 'tri-monochrome' approach.

    Shifting and wide-bandwidth filtering seem a better approach than the DFA, which seems to me like using a prism and a line scanner to represent each pixel.


    If I need a few doctorates to make my solution sound more intelligent, you'll be waiting a couple of decades or longer for my 2 cents.


    Disclaimer: Typed on a cellphone and proofread. Hope it is coherent despite its brevity. Smarter people than I (in this field) can explain my theory better than I can.

  5. I've attempted to refine this (image capture) so that it is as easy to explain as possible, easy (for a large company with well-paid, world-class experts) to do, at the lowest cost (a fortune), and with the best results possible.

    Take an existing image sensor (high quality is not an absolute requirement; aim for repeatability and/or correctability) and measure it against a known light source (try a calibrated sensor and the Sun) to find the error of each pixel.

    Calculate a MOC (Multivariate Optical Computer, https://en.wikipedia.org/wiki/Multivariate_optical_computer ) to remove all error and make the sensor as perfect as possible, instead of its more common use.

    Use wafer-level optics to apply the MOE (multivariate optical element, part of the MOC; see https://en.m.wikipedia.org/wiki/Multivariate_optical_element ) instead of using a CFA (or the DFA proposed in the article, or my prior suggestion, an "Al NP array").

    The per-pixel (or per-group-of-pixels) MOEs are used as optical beamsplitters with an exactly calculated spectral transmittance. The magnitudes of the total transmitted and reflected intensities are measured with a corrected optical detector. The MOEs could be electronically controlled (or just use 6 pixels) to 'flip' the colorspace between RGB (see the exact center frequencies in my prior comment) and CMY (or another preferred colorspace), or use 6 pixels and record 2 (or, with multiplexing, more) simultaneous colorspaces.

    These signals from each pixel should, as is, be so exact that the image appears exact on the monitor on which it is viewed (scientific term: yup, you can use your cheap monitor if you include its error in your calculation).

    That's quite simple and supposedly fairly accurate; greater accuracy could indeed be obtained if complex (and in a few cases trivial) techniques were employed to calculate any residuals and eliminate them.

    You would be able to calculate the bandwidths for each color exactly ahead of time (when creating the MOC), and to calculate some of the remaining (post-calculation) errors (e.g. quantization on the one hand, and heat drift or sensor aging on the other) quickly enough (per frame) to obtain a high frame rate, in the event that the virtually perfect RAW video were not exact enough.

    PS: I wish we could expand this comment box (easily, on mobile too) to view more of one's comment, to make it easier to see a paragraph on large-screen phones. (When I was in school, no one would have understood what the prior sentence meant, let alone believed the day would come that someone could even type it; such a concept was simply beyond us. How far we've come, so fast; pedal to the metal now.)

    Etc. - It might need a tweak, but it should be easy to recalculate and reprogram as improved algorithms are developed in the future, without a 'hardware upgrade'; so I can only hope not to have designed in a fault or (early) obsolescence.

  6. I would like to see this in action. Bayer has been around for a while; it's time to see if any new tech can boost performance.

  7. Any further information about this technology so far?

