
Friday, May 28, 2021

Canon Article about its 1MP SPAD Sensor

Canon publishes a Featured Technology article about its 1MP SPAD sensor:

"Until recently, it was considered difficult to create a high-pixel-count SPAD sensor. On each pixel, the sensing site (surface area available for detecting incoming light as signals) was already small. Making the pixels smaller so that more pixels could be incorporated in the image sensor would cause the sensing sites to become even smaller, in turn resulting in very little light entering the sensor, which would also be a big problem.

Specifically, on conventional SPAD sensors, structural demands made it necessary to leave some space in between the different sensing sites on neighboring pixels. The aperture ratio, which indicates the proportion of light that enters each pixel, would therefore shrink along with the pixel size, making it difficult to detect the signal charge.

However, Canon incorporated a proprietary structural design that used technologies cultivated through production of commercial-use CMOS sensors. This design successfully kept the aperture rate at 100% regardless of the pixel size, making it possible to capture all light that entered without any leakage, even if the number of pixels was increased. The result was the achievement of an unprecedented 1,000,000-pixel SPAD sensor."
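To make the aperture-ratio argument above concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not from Canon's article; the 2 µm dead-zone width is an assumed, purely illustrative number) of how a conventional SPAD's fill factor shrinks with pixel pitch when a fixed gap must be left between neighboring sensing sites:

```python
# A minimal, hedged sketch (not from Canon's article): how a conventional
# SPAD pixel's fill factor shrinks with pixel pitch when a fixed dead zone
# (guard ring / isolation) of width gap_um must surround each sensing site.
# The 2 um gap is an illustrative assumption, not a real device parameter.

def fill_factor(pitch_um: float, gap_um: float = 2.0) -> float:
    """Fraction of the pixel area that can actually detect light."""
    active = max(pitch_um - gap_um, 0.0)
    return (active / pitch_um) ** 2

for pitch in (20.0, 10.0, 5.0):
    print(f"{pitch:4.1f} um pitch -> fill factor {fill_factor(pitch):.0%}")
# 20 um -> 81%, 10 um -> 64%, 5 um -> 36%: with a fixed gap, shrinking the
# pixel rapidly costs sensing area, which is the problem Canon says its
# 100%-aperture structure avoids.
```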


"The SPAD sensor that Canon has developed is also equipped with a global shutter that can capture videos of fast-moving subjects while keeping their shapes accurate and distortion-free. Unlike the rolling shutter method that exposes by activating a sensor’s consecutive rows of pixels one after another, the SPAD sensor controls exposure on all the pixels at the same time, reducing exposure time to as short as 3.8 nanoseconds and achieving an ultra-high frame rate of up to 24,000 frames-per-second (FPS) in 1-bit output. This enables the sensor to capture slow motion videos of phenomena that occur in extremely short time frames and were previously impossible to capture."
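As a quick sanity check on the quoted numbers (my own arithmetic, not Canon's), the 1-bit, 24,000 fps, 1 Mpixel mode implies roughly the following frame period and raw output data rate:

```python
# Quick arithmetic on the figures quoted in Canon's text (1 Mpixel, 1-bit
# output, up to 24,000 fps). This is my own back-of-the-envelope math, not
# datasheet data.

PIXELS = 1_000_000      # 1 Mpixel SPAD array
FPS = 24_000            # quoted maximum frame rate in 1-bit output mode
BITS_PER_PIXEL = 1      # binary frames: photon detected / not detected

frame_period_us = 1e6 / FPS                           # time between frames
raw_rate_gbps = PIXELS * BITS_PER_PIXEL * FPS / 1e9   # raw off-chip data rate

print(f"frame period : {frame_period_us:.1f} us")     # ~41.7 us per frame
print(f"raw data rate: {raw_rate_gbps:.0f} Gbit/s")   # ~24 Gbit/s of 1-bit frames
# The quoted 3.8 ns is the minimum exposure (global shutter) time within a
# frame, not the frame period itself.
```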

6 comments:

  1. It is embarrassing to market this as Canon work. The chip was done at EPFL, by EPFL and measured at EPFL. The chip never touched Japanese ground.

    1. I think your comment disrespects Kazu Morimoto, who did the majority of the work that is reported. He was a Canon employee on educational leave at EPFL, and fully leveraged Canon resources to realize this chip. While I served on his PhD committee, I don't recall where the chip was fabricated, but I believe the cost and arrangements were taken care of by Canon. Perhaps it was fabricated on Japanese soil, but definitely NOT at EPFL. Of course Kazu was well mentored at EPFL by his advisor Edoardo Charbon, and it was certainly joint work of Canon and EPFL. It would likely not exist without EPFL, but the work was done by a Canon employee. It would have been better if Canon had acknowledged that fruitful collaboration with EPFL, but as we know from several large companies (particularly in Japan), they become very self-centric in these press releases.

    2. It seems Canon has acknowledged EPFL in past articles
      https://global.canon/en/news/2020/20200624.html
      https://www.canon-europe.com/view/travelling-light-megapixel-spad/

  2. The question I care about most is how many "effective" pixels it can provide after grouping a number of pixels for the necessary coincidence detection in the presence of ambient light (e.g. sunlight). The iPad Pro uses Sony's 200*150-pixel SPAD for its LiDAR sensor while only producing 24*24 effective pixels (the seemingly richer point cloud is a combination of interpolation from the NIR sensor images and help from hand-movement "scanning"). That's 6*8 native SPAD pixels for one effective pixel! If this continues to hold, the 1MP SPAD is merely a ~150*150 sensor (a quick check of this arithmetic appears after this thread).

    1. You seem to be confusing a SPAD depth sensor with a SPAD high-dynamic-range sensor. Please review the difference. As for the resolution question: longer range matters more than resolution for depth sensing, and iToF, despite having higher-resolution sensors, cannot achieve it.

    2. While pointing out that the sensor targets HDR may be correct, this could have been said more politely... Why are there so many unnecessarily aggressive comments? With regard to iToF: iToF can certainly achieve longer range by binning, similarly to dToF. A key advantage dToF has over iToF is histogramming, with which multipath and multi-camera interference can be overcome, but here too methods exist for iToF. So simply saying that iToF can't reach long-range depth sensing is an oversimplification at best.

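The effective-resolution argument in the second comment is easy to check numerically. The sketch below simply reproduces the commenter's arithmetic (the 200*150 native and 24*24 effective figures are the commenter's claims about the iPad Pro LiDAR, not verified here, and the replies argue this binning applies to depth sensing rather than to Canon's HDR-oriented sensor):

```python
# Reproducing the commenter's effective-resolution arithmetic. The 200x150
# native and 24x24 effective figures for the iPad Pro LiDAR are the
# commenter's claims and are not verified here.

native_w, native_h = 200, 150   # claimed native SPAD array (Sony, iPad Pro LiDAR)
eff_w, eff_h = 24, 24           # claimed effective depth points

bin_x = native_w / eff_w        # ~8.3 native pixels per effective pixel (x)
bin_y = native_h / eff_h        # ~6.3 native pixels per effective pixel (y)
print(f"implied grouping ~ {bin_x:.1f} x {bin_y:.1f} native pixels per point")

# Applying the same grouping to a 1000x1000 array:
eff_total = (1000 / bin_x) * (1000 / bin_y)
side = eff_total ** 0.5
print(f"1 Mpixel SPAD -> ~{eff_total:,.0f} effective points (~{side:.0f} x {side:.0f})")
# ~19,200 points, roughly 140x140, in the ballpark of the commenter's ~150*150.
```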
