Thursday, March 12, 2015

Unispectral Pushes Sequential Color Imaging into Camera Phones

Unispectral, an Israeli startup originating from Tel Aviv University, proposes to move sequential color imaging into camera modules. The claim is that replacing the Bayer filter with a sequential design increases camera resolution by a factor of 4 and low-light sensitivity by a factor of 2. These claims are presented in the Optics Express paper "Sequential filtering for color image acquisition" by Ariel Raz and David Mendlovic. (David Mendlovic used to manage Tessera's camera module design and now serves as CEO of Corephotonics.)

A Vimeo video shows the new company's promise:



An older project page at the Tel Aviv University site says that they use a tunable filter for the sequential imaging and that "Comparing with a Bayer scheme, for green the sensor provides 2 times more resolution, and for red and blue 4 times more resolution."
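The quoted resolution claim follows from simple sample counting: in an RGGB Bayer mosaic, half the pixels sample green and a quarter each sample red and blue, while a sequential filter lets every pixel sample every color. A toy count (the 8x8 sensor size is a made-up illustrative value):

```python
n = 8  # hypothetical sensor side length, in pixels
total = n * n

# RGGB Bayer mosaic: green on half the pixels, red and blue on a quarter each
bayer = {"R": total // 4, "G": total // 2, "B": total // 4}

# Sequential filter: every pixel samples each color in turn
sequential = {c: total for c in "RGB"}

gains = {c: sequential[c] / bayer[c] for c in "RGB"}
print(gains)  # → {'R': 4.0, 'G': 2.0, 'B': 4.0}
```

This reproduces the "2x for green, 4x for red and blue" figures, ignoring demosaicing interpolation and any motion artifacts between subfields.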

Thanks to WRWWCTB for the pointer!

9 comments:

  1. This is a nice concept, but does it have any mobile feasibility?
    The optical path required is not trivial to fit into a mobile-phone thickness.

  2. This is an etalon-based color filter; color purity will be one issue, and speed will be another. It is a similar idea to Qualcomm's Mirasol display, and I cannot believe these will meet the requirements of today's camera applications. Also, IMEC has a stationary solution using a similar idea, and I have some papers discussing this.

  3. 70% water
    40% ethanol
    it's over 100% already

  4. I don't think the target application is regular-use (mobile) cameras. Probably more specialized niches like combined VIS+NIR or some other multi-/hyper-spectral application.
    Also note that the whole existing image processing pipeline needs to be uprooted and changed. By the time this is done (if ever), traditional sensors will have long since surpassed the pitched performance improvements. This is (sigh) very similar to what happened to Foveon.
    What's offered here is by no means as revolutionary as Invisage or SiOnyx (if those are ever realized), so people's willingness to invest in a new process is IMHO highly questionable.

  5. Using sequential color filters is an ancient (filter wheel) and well-understood way of generating color images. I am not sure about their use of the ambiguous figure of merit "sensitivity." Is it the light level giving noise-equivalent signal (i.e., SNR = 1), or what? And why is it 2x? (It is not defined in the paper.) Nominally, each pixel sees the same number of photons in the same total exposure period: either 1/3 due to the CFA passband, or 1/3 due to the color subfield exposure time. Perhaps I am missing something. Anyway, LCTF sequential color has been looked at in the past and rejected for several reasons. Aside from the solid-state nature of this approach, I am not sure what other advantages might exist over an LCTF.

    Replies
    1. Maybe it comes from being able to control how much time the sensor spends at each color, so rather than the fixed Bayer 1/3-1/3-1/3 they could make it adaptive. In low-light conditions, most of the light is red anyway, so by making the exposure longer for the colors that matter they could perhaps gain in SNR. Not sure where the 2x comes from, though.

  6. It looks like great marketing buzz to attract VCs' attention... time will tell whether this turns out to be true or just hot air. But at first look, it doesn't look consistent :)

  7. The drink analysis gives it away... that capability does not exist in remote sensing under those conditions, and the details in particular are incorrect and impossible.

    Which casts severe doubt on the other figures in the presentation.

  8. Unispectral's technology, using a scanning spectral Fabry-Perot filter, is very similar to that developed by VTT in Finland (see below); it remains to be seen whether it would allow image acquisition at video rates:
    http://webextra.vtt.fi/news/2014/26022014_hyperspektrikamera.jsp?lang=en

    For comparison, we made trials using IMEC's imager, which has no scanning or moving filters; see here:
    http://phys.org/news/2015-02-imec-snapshot-hyperspectral-image-sensors.html
    and can deliver true real-time hyperspectral video:
    https://vimeo.com/77218620

    We are working on a proposal to use hyperspectral imaging to identify objects by purpose-engineered tags, and have evaluated several hyperspectral technologies.

    www.thewhollysee.com


All comments are moderated to avoid spam.