Saturday, November 15, 2014

"Active Pixel Color Sampling" Story Goes On

SonyAlpha Rumors publishes more details on the allegedly forthcoming Sony IMX189AEG "Active Pixel Color Sampling" sensor. The new material explains how the technology works: the RGB-colored microlens layer is not rigidly attached to the sensor surface but rather is able to slide along it. Each pixel has one sensing PD and three storage nodes that capture the color information as the microlens layer slides:

A lot of the data still looks illogical, such as an over-9V saturation level while the supply voltage is only 4.2V:


  1. Seems a little late in the year for April 1.

  2. Aside from the mechanics of moving the CFA, the general idea is sort of OK. 20+ years ago there was some excitement about liquid crystal tunable filters replacing the CFA in image sensors (or filter wheels, for that matter). If you could dither the color at a high rate and not get a lot of in-between colors (so you need a discard port for signals while the color is changing, i.e. 4 ports for each PPD), it would do OK. Not so good for flash. The ports could "pump" charge to a GS storage diode, and maybe an LCTF could choose "black" for standby to improve shutter efficiency.

    1. There is a new twist on that with a disappearing mask over the phase AF pixels, so that the AF pixels become normal ones in capture mode:

    2. There's also an interesting Pentax patent in which the image sensor is shifted during CDAF, using the sensor IS mechanism to obtain a phase measurement. I can see many advantages if this can work reliably/accurately in practice.

  3. This is not looking any less of a hoax.

    There are so many problems with this:

    1. Terrible graphics. Poor English in the graphics. No explanatory text. This sort of explanation belongs on Sony websites, not in a datasheet, scattered across drawings.

    2. "Type 1.5", not "Type 1.5-inch", as pretty much every CIS datasheet writer would put it.

    3. The diagonal measurement for Type 1.5-inch is incorrect (it should be around 28 mm), and no physical H or V dimensions are given.

    4. It uses older Sony trademarks rather than Pregius (the current Sony global-shutter CIS trademark) or a new one for this "APCS technology".

    It looks like someone has taken a machine vision datasheet and modified it to make the sensor bigger, with a faster frame rate and more DR, then added some nonsensical numbers.

  4. I would wonder about the thermal expansion of the chip vs. the CFA.

    The CFA can absorb quite a bit of light, and the chip has its own heat sources.

    For a temperature delta of 10 degrees, I get a 4% mismatch at the sensor edge using the thermal properties of silicon, which sounds acceptable. But the microlens material may behave differently.
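    The mismatch estimate above can be sanity-checked with a few lines of Python. The silicon CTE, Type 1.5 half-diagonal, and pixel pitch below are my assumptions, not figures from the (alleged) datasheet:

```python
# Back-of-envelope check of the thermal-mismatch estimate above.
# Assumed numbers: silicon CTE ~2.6 ppm/K, a Type 1.5 half-diagonal
# of ~14 mm, and a 10 um pixel pitch.
CTE_SILICON = 2.6e-6   # 1/K, linear thermal expansion coefficient
DELTA_T     = 10.0     # K, temperature delta between chip and CFA
HALF_DIAG   = 14e-3    # m, distance from sensor center to corner
PIXEL_PITCH = 10e-6    # m, assumed pixel pitch

# If the CFA expands differently from the chip (worst case: not at all),
# the alignment error grows linearly from the center outward.
edge_shift = CTE_SILICON * DELTA_T * HALF_DIAG   # m, shift at the corner
mismatch   = edge_shift / PIXEL_PITCH            # fraction of a pixel

print(f"edge shift: {edge_shift*1e6:.2f} um "
      f"({mismatch:.1%} of a pixel)")            # ~0.36 um, ~3.6% of a pixel
```

    With these assumptions the corner pixels end up misaligned by roughly 4% of a pixel, consistent with the estimate in the comment.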

  5. For fun, I estimated the energy needed to accelerate the CFA to the required speed (about 5 m/s when the CFA is stationary 90% of the time) at these relatively high frame rates:

    So assuming
    - the CFA is stationary during the actual exposure (say 90% of the time)
    - 50 kSteps per second (frame rate of 16 kHz x 3 colors)
    - 0.1 gram of moving mass
    - no losses in the linear actuator (piezo, I guess)
    - the kinetic energy is lost every time it decelerates
    - negligible time spent accelerating/decelerating (optimistic, keeps the formula high-school level)
    then wasting the kinetic energy (0.5*m*v^2) at a frequency of 50 kHz gives me 62.5 W. This is obviously not acceptable: the energy is not available, and it would melt stuff.

    If we absolutely had to make the engineering work, we would replace the high-frequency start-stop-start-stop motion with a, say, 12x longer stroke and accept that we have to capture the signals while the CFA "flies" by. That lowers the lower bound of the energy requirement to about 50 mW (the speed squared gives 100x, and the frequency goes down another 12x). BUT this gives a lot of signal-to-noise issues. I also vote hoax.
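    Both power figures above can be reproduced with a short sketch under the same assumptions; the 10x speed reduction for the long-stroke case is inferred from the "speed squared gives 100x" remark:

```python
# Sketch of the kinetic-energy estimate above: the CFA comes to a full
# stop after every step, and the kinetic energy of each step is simply
# thrown away (no energy recovery, lossless actuator).
m       = 0.1e-3    # kg, moving mass of the CFA
v       = 5.0       # m/s, speed to cover a 10 um step in 10% of 20 us
f_steps = 50e3      # steps/s (~16.67 kHz frames x 3 colors)

e_step       = 0.5 * m * v**2       # J discarded per stop
p_stop_start = e_step * f_steps     # W
print(f"start-stop motion: {p_stop_start:.1f} W")   # 62.5 W

# Long-stroke variant: ~12x longer sweep, so ~10x lower speed and 12x
# fewer direction reversals, reading out while the CFA "flies" by.
v2, f2  = v / 10, f_steps / 12
p_sweep = 0.5 * m * v2**2 * f2
print(f"long-stroke motion: {p_sweep*1e3:.0f} mW")  # ~52 mW
```

    The start-stop model lands exactly on the 62.5 W quoted above, and the long-stroke variant on roughly 50 mW.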

    1. Did you calculate the energy cost of a sensor-based anti-shake system?
      The IBIS energy cost should be much higher, since
      1: it moves the whole sensor instead of just the CFA
      2: it keeps moving during 100% of the exposure time
      but IBIS works.
      So something is wrong in your assumptions. I believe one of them is
      "50 kSteps per second (frame rate of 16 kHz x 3 colors)"
      Why does the CFA movement have to keep up with the highest theoretical internal sampling rate?
      Under normal conditions, 60 fps continuous video shooting, you only need to move the CFA 3 times during the exposure time; that's 180 Hz instead of 50 kHz. Even if you triple that for smoother color rendition to cope with subject movement, that's still 1% of your estimate, making it 0.625 W of power consumption.

    2. The datasheet claims the "IMX189AEG" can reach frame rates of up to 36 kHz, and for each frame the CFA needs to be moved 3 times. I know that is an absurd frame rate.

      But I calculated using a frame rate of 50/3 kHz to show that this mechanical approach cannot realistically move the filter in 10-micron steps within 10% of 20 microseconds. At 60 Hz there would be no problem at all (I get 3 microwatts), because the power actually scales with f^3. This also explains why sensor-based stabilization works fine, even if the moving mass is 10 or 100x larger. And it shows that a moving CFA at least starts to become credible if you manage to reduce the speed 10x.
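      The f^3 scaling can be made explicit with a minimal sketch, using the same assumed 10 um step and 10% motion duty cycle as above:

```python
# Power vs. frame rate for the moving CFA: with a fixed 10 um step done
# in 10% of the step period, the required speed grows linearly with the
# step rate, so power = (energy/step) x (steps/s) scales with f^3.
m         = 0.1e-3   # kg, moving CFA mass
step_dist = 10e-6    # m, one color step
duty      = 0.10     # fraction of the step period spent moving

def cfa_power(frame_rate):
    steps_per_s = 3 * frame_rate            # 3 color steps per frame
    move_time   = duty / steps_per_s        # s available to move per step
    v           = step_dist / move_time     # m/s required speed
    return 0.5 * m * v**2 * steps_per_s     # W, energy discarded per stop

print(f"{cfa_power(50e3/3):.1f} W")    # 62.5 W at ~16.67 kHz frames
print(f"{cfa_power(60)*1e6:.0f} uW")   # ~3 uW at 60 fps
```

      Dropping the frame rate from ~16.67 kHz to 60 Hz cuts the power by (60 / 16667)^3, from 62.5 W to about 3 uW, matching both numbers in the thread.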

      If you don't trust the physics: try shaking a 6 kg bowling ball at, say, 10 cm peak-to-peak amplitude at 5 Hz. Now that is 30 W: you need to accelerate the mass (F = m*a), and the energy required even doubles because you need to decelerate it again. By comparison, you can roll a bowling ball at 1 m/s on a smooth surface (ice bowling, anyone?), and it will only slow down due to friction (not included in the shaking calculations).
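      The bowling-ball figure follows from the same energy-discard model, assuming the ball covers its 10 cm stroke at a constant average speed and dumps its kinetic energy at each of the 10 turnarounds per second:

```python
# The bowling-ball analogy in numbers: shake 6 kg over a 10 cm
# peak-to-peak stroke at 5 Hz, throwing the kinetic energy away at
# every turnaround (same discard model as the CFA estimate).
m_ball = 6.0     # kg
stroke = 0.10    # m, peak-to-peak travel per half-cycle
f_osc  = 5.0     # Hz, so 10 direction reversals per second

v_avg   = stroke * 2 * f_osc                      # m/s (0.1 m per 0.1 s)
p_shake = 0.5 * m_ball * v_avg**2 * (2 * f_osc)   # W
print(f"{p_shake:.0f} W")                         # 30 W
```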

    3. My point is: although the alleged sensor spec can reach a 36 kHz frame rate, it can simply use a stationary CFA during 36 kHz sampling, making it behave as a conventional Bayer sensor.

      For any real photography needs, it can switch to the CFA-shifting mode, with negligible power consumption even in video mode.

  6. Who wants to have moving parts in their camera right above the active area?

  7. Any idea what "576 Mbps serial output" means? Is it the total sensor output rate, i.e. 576/16 = 36 Mpix/s? :)

    1. This is obviously per channel... The more important question: any idea what "12960 ch sub-LVDS" means? Maybe some inter-die connection (TSV)? Or a typo? Or someone had a late April 1st?

  8. What's the gap between the moving color filter/microlens layer and the silicon? Is the microlens still effective in this case? How can it support a large CRA?

  9. > "A lot of data still looks illogical, such as over 9V saturation level, while the supply voltage is only 4.2V:"

    If you mean 2 * 4.2 < 9 then this Overview of LVDS Technology might explain how that is possible:

    The 'movement' could be accomplished by an LCD electronic shutter (like in 3D glasses, or welding and fighter-pilot helmets). Instead of a simple on/off (binary) shutter state, they may have developed a means to electronically move the LCD from position to position, or more simply to shutter each filter individually. I'm not saying "that makes it easy", only "that makes it possible".

    PS: We really enjoy the Articles here.

    1. @ If you mean 2 * 4.2 < 9 then this Overview of LVDS Technology might explain how that is possible:

      Unfortunately, this is not directly applicable to the pixel design.


All comments are moderated to avoid spam and personal attacks.