Monday, February 10, 2014

Sigma Announces Quattro Foveon Sensor

Sigma announces the "Foveon X3 Quattro direct image sensor," in which the top blue-sensing layer is split into four pixels for each single green and red pixel beneath it:

The new 1:1:4 structure (bottom 1: middle 1: top 4) offers higher resolution
that is equivalent to 39 megapixels for conventional color filter array sensors.
In addition to the higher resolution, the sensor offers enhanced noise characteristics
and faster processing and writing to memory of high-volume image data.

The new split-blue approach is said to increase the sensor's resolution by 30%, making it equivalent to that of a 39MP Bayer sensor (a rough photosite-count sketch follows below).
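
As a rough sanity check on those numbers, here is a minimal back-of-the-envelope sketch. The 19.6MP/4.9MP layer counts are assumptions based on commonly quoted Quattro specifications, not figures from the announcement itself, and the 39MP Bayer equivalence is Sigma's claim rather than something derivable from the counts alone.

```python
# Back-of-the-envelope photosite arithmetic for the 1:1:4 Quattro layout.
# The 19.6MP / 4.9MP layer sizes are assumed (commonly quoted Quattro specs),
# not values taken from the press release above.

top_mp    = 19.6   # top (blue-dominant) layer, split 4x
middle_mp = 4.9    # middle (green-dominant) layer
bottom_mp = 4.9    # bottom (red-dominant) layer

total_photosites = top_mp + middle_mp + bottom_mp
print(f"Total photosites: {total_photosites:.1f} M")   # ~29.4 M

# Sigma's claim is that the result is "equivalent" to a 39MP Bayer sensor,
# i.e. roughly 2x the top-layer pixel count.
claimed_bayer_equiv_mp = 39.0
print(f"Claimed equivalence factor vs. top layer: "
      f"{claimed_bayer_equiv_mp / top_mp:.1f}x")       # ~2.0x
```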

19 comments:

  1. What is the "real" increase in resolution of the 1:1:4 with respect to the standard 1:1:1 pixel?

    ReplyDelete
  2. Very very interesting! I have considered something very similar myself. If, at the sampling frequency of red or green, there is a high degree of correlation between either the red or the green channel and the blue channel, then the higher-resolution blue image can be used to extrapolate a four-fold increase in image data for red and green, producing a 60MP, 100% fill-rate (RGB, 4:4:4) image (a rough sketch of this guided-upsampling idea appears at the end of this comment). Of course this will not work for subjects that reflect very little blue light, or for textures that alternate the ratio of red, green, and blue light at high frequency. Bottom line: the sensor should produce very high-resolution, detailed images for unsaturated subjects, but will likely resolve highly saturated (narrow-band) colors poorly when they are made up of wavelengths longer than spectral cyan.

    However, even at the comparatively low sampling frequency of 5MP for the two longer-wavelength primaries, sampling is at nearly 100% fill-factor, so the images will be sharper than those from an 8MP CFA sensor.

    I REALLY hope that they designed the sensor for fast readout. The killer app for Sigma's Foveon sensors is video. And not 360p video like they have had in previous cameras. They need to provide more than 12 readout channels for pushing that much data off sensor at video frame-rates. >=48-channels would be good.

    Also, I hope color accuracy is decent. Foveon should make the cyan notch filter proposed in their white paper mandatory, in order to get good-looking color, if not accurate color.
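
    To make the guided-upsampling idea from the first paragraph concrete, here is a minimal sketch (assuming numpy and scipy; the arrays are random stand-ins for sensor planes, and this is not Sigma's or Foveon's actual processing):

```python
# Minimal sketch: use the full-resolution blue plane to guide upsampling of
# the quarter-resolution red and green planes. Not Sigma/Foveon's pipeline.
import numpy as np
from scipy.ndimage import zoom, uniform_filter

def guided_upsample(chan_lo, guide_hi, eps=1e-6):
    """Upsample chan_lo to guide_hi's grid, then re-inject the guide's
    high-frequency detail scaled by the local chan/guide ratio."""
    chan_hi = zoom(chan_lo, 2, order=1)          # plain bilinear upsample
    guide_lo = uniform_filter(guide_hi, size=3)  # low-pass version of guide
    detail = guide_hi - guide_lo                 # guide's high frequencies
    ratio = chan_hi / (guide_lo + eps)           # local correlation estimate
    return np.clip(chan_hi + ratio * detail, 0.0, None)

# Random data standing in for the three sensor planes:
rng = np.random.default_rng(0)
blue_hi = rng.random((64, 64))                   # full-resolution top layer
red_lo = rng.random((32, 32))
green_lo = rng.random((32, 32))
red_hi = guided_upsample(red_lo, blue_hi)
green_hi = guided_upsample(green_lo, blue_hi)
print(red_hi.shape, green_hi.shape)              # (64, 64) (64, 64)
```

    As the comment notes, this kind of guidance breaks down where the blue plane carries little signal, so a real pipeline would need to fall back to plain interpolation there.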

    ReplyDelete
  3. Why split blue rather than red or green?

    ReplyDelete
  4. I guess the eye is so sensitive to both blue light intensity and blue spatial frequency that this is a major advance....

    ReplyDelete
  5. Can you explain what you mean by "real"?

    ReplyDelete
  6. The topmost layer is the most sensitive to light (i.e. it collects the most light) and the two bottom layers collect less, so the SNR of the top layer is much higher than that of the bottom ones. Also, the top layer is likely to be the one where the image is in focus, so the bottom layers are slightly out of focus. The bottom layers are also more problematic when the light is not perpendicular, so larger bottom pixels should reduce or eliminate the colour cast issue of the previous generation.

    Resolution should be basically identical, pixel for pixel, to the previous generation, as the top layer has always been the one mostly responsible for resolution. The other two layers, especially the bottom one, just play a supporting role.

    This is certainly a smarter design than previous Foveons and will still offer the same 10-15% resolution advantage over conventional sensors of the same pixel count. It will, however, still have the same problems Foveon has always had - weak colour separation, high noise (due to the weak colour separation and the lack of CDS), and metamerism issues. Battery life and speed of operation should be improved, though.

    ReplyDelete
    Replies
    1. Do you know for sure the top layer is the most sensitive to light? I guess it depends on layer thickness and the spectra of the scene.

      Why is the top layer in focus? Does Sigma do something other than scene-based focusing? In that case the focus is determined AFTER the image processing, which includes all layers. If they use some other focusing mechanism, I guess they calibrate relative to a captured image, which amounts to the same thing.

      I don't understand why the 2 bottom layers only play a secondary role in the original X3. This seems out of balance. If anything, the layer contributing most to luminance ought to be playing the dominant role. Is this the top layer as you claim? If it were, why would Foveon not have made the top layer a pinned photodiode-like structure, and all other layers 3T type pixels?

      I am sorry, but I really don't see this as a big improvement. I see it as a shift towards Bayer thinking. I never thought that low resolution was considered a major drawback of Foveon-type sensors, but of course it is a drawback. Other than that I agree with the last paragraph.

      Delete
  7. Anonymous #2 back. I don't think the topmost layer is much thicker than two microns. It has the same sensitivity per unit volume as the bottom two layers, or very close to it. However, it is thinner, and therefore less sensitive in terms of the total number of photons absorbed. The point is that it needs only absorb the higher-energy blue-spectrum photons. Herein lies the problem: the topmost layer also absorbs the longer wavelengths, all the way up to the IR cut filter, just fewer of them compared to the middle and bottom layers. So, to get accurate color, a cyan cut filter which shapes the topmost layer's absorption curve to be close to the sum of two or three linearly scaled cone absorption primaries is ideal. Indeed, Foveon's Richard Lyon and Paul Hubel have discussed such a setup and called what I'm calling a cyan cut filter a "color-optimum pre-filter". The paper is called "Eyeing the Camera: into the Next Century"; see p. 4. I sort of doubt this is what they've done, but I will be thrilled if it is.

    Again, I'm just hoping for high frame rates. Hopefully, "faster processing and writing to memory of high-volume image data" means video frame rates. 60fps would be marvelous (but unlikely, knowing Sigma and Foveon).

    ReplyDelete
  8. I think the top layer can absorb all of R, G and B (there is no CFA) rather than only blue, giving a kind of luminance signal by which resolution can be increased.

    ReplyDelete
    Replies
    1. Anonymous #2: I said it absorbs light across the whole spectrum, just not in those words. I said the topmost layer absorbs predominantly blue light but also absorbs all the longer wavelengths, limited only by the IR cut filter.

      Delete
  9. There are no layers as such, only a stack of seven alternating diffusions. The collection junctions for these are positioned approximately 0.2, 0.8 and 3.0 microns below the silicon surface. Only the top layer is (sort of) a pinned photodiode; the others are not, because there is no straightforward way to pin the intermediate diffusions. "Pinning" was most important for the blue layer due to its relatively high dark current.

    The filter Hubel proposed was tried on the F7 very early on but did not improve anything over the naked sensor. If anything, the noise was worse for a particular arriving light level. It turned out to be better to not throw anything away.

    With the correct transformation matrix, the color accuracy of the Foveon devices is in the same range as any of the high-end sensors - about 5-6 average delta-E error. With a silicon detector, it is very difficult to do better than this with three detection channels. However, due to the large off-diagonal terms in the matrix, the Foveon corrected image is noisier at any specific light level than high-end Bayer CMOS sensors (a small numerical illustration of this follows at the end of this comment).

    Foveon originally used the 1:1:4 structure in its cellphone sensors because the pixels were so small that it was practical to fabricate pixels with full resolution only in the top layer. They did a lot of work to optimize the processing to use the resolution of the blue channel as information for overall image resolution improvement. Sigma must have seen some benefit to this to have had Foveon make a sensor with partitioned blue pixels where these are not mandated by the pixel geometry.

    Anyone interested in more details of the structure or operation of the Foveon sensor is welcome to send me an e-mail - dave@alt-vision.com. We have been selling these sensors for over 10 years so I can answer most questions.
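
    To illustrate the point about large off-diagonal correction terms, here is a minimal sketch; both matrices below are invented for illustration and are not real Foveon or Bayer calibrations.

```python
# Why large off-diagonal color-correction terms cost noise: for independent,
# equal-variance per-channel noise, each output channel's noise is amplified
# by the root-sum-square of the corresponding matrix row.
# Both matrices are made up for illustration, not real calibrations.
import numpy as np

ccm_bayer_like  = np.array([[ 1.4, -0.3, -0.1],
                            [-0.2,  1.5, -0.3],
                            [ 0.0, -0.4,  1.4]])

ccm_foveon_like = np.array([[ 2.4, -1.6,  0.2],
                            [-0.9,  2.8, -0.9],
                            [ 0.3, -1.8,  2.5]])

def noise_gain(ccm):
    """Per-output-channel noise amplification factor."""
    return np.sqrt((ccm ** 2).sum(axis=1))

print("Bayer-like CCM noise gain :", np.round(noise_gain(ccm_bayer_like), 2))
print("Foveon-like CCM noise gain:", np.round(noise_gain(ccm_foveon_like), 2))
```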

    ReplyDelete
    Replies
    1. Regarding colour: since the colour separation is very weak, the colour accuracy goes down significantly as the number of photons goes down, right? What about metamerism? Considering the amount of trouble colours have caused over the years, and the weak colour separation compared with organic colour filters made for the job, it is hard to believe they can be competitive.

      Eric Fossum wrote the paper http://ericfossum.com/Publications/Papers/2011%20IISW%20Two%20Layer%20Photodetector.pdf on pixel structures similar to Foveon, and in Table II at the end the colour deviation figures for Foveon are very high compared to a regular CFA. Have I misunderstood something?

      Delete
    2. In the referenced work I only did a quick study of a 3-layer structure for the purpose of comparison to the 2-layer structures. It was not the exact Foveon structure. In that case, yes, the color reproduction using a 3x3 CCM was poorer than Bayer. I understand Foveon uses a more sophisticated color correction methodology that is also more computationally intensive, and achieves better color reproduction than what I obtained.
      If you look at color images taken by Sigma cameras in bright light you will find that the color is pretty good. Considering what must be required to get good color starting from the raw signals, it is a credit to the Foveon team that they did such a good job.

      Delete
    3. Average delta-E error is 5-6 because they're likely using Macbeth patches under one of the standard illuminants as a reference (as the ISO standard calls for). Error increases as the spectral bandwidth narrows, and varies depending upon what part of the visible spectrum is being imaged. The best treatment of color gamut and accuracy that I know of was presented at the SMPTE technical conference by Wayne Bretl: "Theoretical and Practical Limits to Wide Color Gamut Imaging in Objects, Reproducers, and Cameras". (A sketch of how such an average delta-E figure is computed appears at the end of this reply.)
      There is nothing about silicon as a photodetector that inherently limits its color accuracy, unless you restrict yourself to multi-layered photodetection, which is what Foveon does and is known for. Increasing the number of detection channels is the wrong way to go about increasing color accuracy. There is only one way to get truly accurate color, and no one is doing it.
      ~Anonymous #2
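
      For reference, a minimal sketch of how such an average delta-E figure can be computed, using the simple CIE76 metric (Euclidean distance in CIELAB); the patch values below are placeholders, not real Macbeth references or camera measurements.

```python
# Minimal average delta-E sketch (CIE76: Euclidean distance in CIELAB).
# The Lab values are placeholders, not real ColorChecker data.
import numpy as np

reference_lab = np.array([[37.0, 13.0, 14.0],    # placeholder "dark skin"
                          [66.0, 17.0, 18.0],    # placeholder "light skin"
                          [50.0, -4.0, -22.0]])  # placeholder "blue sky"

measured_lab  = np.array([[35.5, 15.0, 12.0],
                          [68.0, 14.5, 20.0],
                          [52.0, -2.0, -25.5]])

delta_e = np.linalg.norm(measured_lab - reference_lab, axis=1)
print("Per-patch delta-E:", np.round(delta_e, 2))
print("Average delta-E  :", round(float(delta_e.mean()), 2))
```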

      Delete
    4. Color accuracy is independent of light level (for a given illuminant); only the noise increases. Color rendering is usually tuned to desaturate colors so as to reduce noise, but that is not a sensor attribute.
      Perfect color rendering is only possible under the Luther-Ives condition: the spectral sensitivities must be a linear combination of the human visual responses (a minimal check of this condition is sketched below). However, color accuracy is seldom an issue if IR is properly filtered; noise amplification from a color rendering that has to compensate for poor color separation is much more of an issue.
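
      A minimal sketch of such a check, assuming the camera's spectral sensitivities and the CIE color matching functions are sampled on a common wavelength grid (the arrays below are random placeholders): fit the best 3x3 linear transform and look at the residual; a Luther-Ives-compliant camera has a (near-)zero residual.

```python
# Minimal Luther-Ives check: are the camera's three spectral sensitivities a
# linear combination of the CIE color matching functions? Both arrays here
# are random placeholders; real data would share a wavelength grid
# (e.g. 380-730 nm in 10 nm steps -> 36 samples).
import numpy as np

n_wavelengths = 36
rng = np.random.default_rng(1)
cmf_xyz    = rng.random((n_wavelengths, 3))  # placeholder CIE 1931 CMFs
camera_rgb = rng.random((n_wavelengths, 3))  # placeholder camera sensitivities

# Least-squares fit: camera_rgb ~= cmf_xyz @ M for some 3x3 matrix M.
M, _, _, _ = np.linalg.lstsq(cmf_xyz, camera_rgb, rcond=None)

relative_error = (np.linalg.norm(cmf_xyz @ M - camera_rgb)
                  / np.linalg.norm(camera_rgb))
print("Best-fit 3x3 transform:\n", np.round(M, 3))
print("Relative fit error:", round(float(relative_error), 3))  # ~0 => condition met
```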

      Delete
    5. I'm glad someone understands this: accurate color rendition requires meeting the Luther-Ives condition. I can't count how many times I have read that the CIE XYZ CMFs are cone absorption curves and/or ideal camera absorption primaries. That's plain wrong, and it is a very widespread misconception, even among professed experts.
      ~Anonymous #2

      Delete
    6. Found on this website:
      http://www.kweii.com/site/color_theory/cri/foveon.html
      It claims that, if coupled with a proper (quite complex) external filter, Foveon can have much more precise color representation than a conventional camera, because it can match the Luther-Ives condition with a smaller error. There are no considerations about noise amplification: would it be larger for Foveon in this case?

      Delete
  10. The resolution statement is highly questionable: just because they have one color channel with 20MP does not imply that the sensor is equivalent to a Bayer sensor that also has a 20MP color channel.
    For a given number of photosites, the Bayer sensor probably has the best resolution; frequencies between half Nyquist and Nyquist can be challenging, but some algorithms can do a good job, and the "chroma" resolution is usually downsampled by JPEG compression anyway.
    This architecture should shine in low light because it can capture more photons than a Bayer sensor, which filters many of them away in order to get color information. But Foveon's poor color separation means more noise after color rendering than with a Bayer. As long as they cannot fix the color separation while keeping good QE, this is just another "nice idea", and a Bayer with the same number of photosites is superior.

    ReplyDelete
  11. I found this Dick Merrill et al. patent that seems to be the IP related to this blog post:
    US 8,039,916 B2
    It is a pretty interesting patent, esp. in terms of the evolution of thinking on the 3-layer stacked structure. Actually, I wish I had found this earlier and referenced it in the PPD review paper.
    Nice job Dick. RIP.

    ReplyDelete
