
Saturday, April 24, 2021

Time-Based SPAD HDR Imaging Claimed to be Better than Dull Photon Counting

University of Wisconsin-Madison, USA, and Politecnico di Milano, Italy, publish Arxiv.org paper "Passive Inter-Photon Imaging" by Atul Ingle, Trevor Seets, Mauro Buttafava, Shantanu Gupta, Alberto Tosi, Mohit Gupta, and Andreas Velten.

"Digital camera pixels measure image intensities by converting incident light energy into an analog electrical current, and then digitizing it into a fixed-width binary representation. This direct measurement method, while conceptually simple, suffers from limited dynamic range and poor performance under extreme illumination -- electronic noise dominates under low illumination, and pixel full-well capacity results in saturation under bright illumination. We propose a novel intensity cue based on measuring inter-photon timing, defined as the time delay between detection of successive photons. Based on the statistics of inter-photon times measured by a time-resolved single-photon sensor, we develop theory and algorithms for a scene brightness estimator which works over extreme dynamic range; we experimentally demonstrate imaging scenes with a dynamic range of over ten million to one. The proposed techniques, aided by the emergence of single-photon sensors such as single-photon avalanche diodes (SPADs) with picosecond timing resolution, will have implications for a wide range of imaging applications: robotics, consumer photography, astronomy, microscopy and biomedical imaging."


8 comments:

  1. Interesting work! But it would be good if the authors gave more consideration to sensor and system power requirements, pixel pitch, optics (can you get a dynamic range of tens of millions to one through a lens system without optical artifacts?) and application. One might also ask: with dozens of HDR approaches proposed in the past, why hasn't hardware-based HDR caught on more? So as interesting as this approach is, academically speaking, I think it overreaches a bit on practical usefulness at this time.

  2. Hi Eric,

    Thanks for the feedback!

    We discuss some practical limitations and speculate about future pixel implementations in Sections 4, 5 and Supplementary Note 7 in the paper.

    Unfortunately, we do not have definitive answers about power requirements and pixel pitch right now; these are important topics for future research.

    We do expect that at very high dynamic ranges the optics will become the limiting factor and will have to be improved, e.g., with better coatings. In our experimental results we see some lens flare artifacts.

    It is our understanding that the HDR method used in some of today's CMOS sensors is in fact a simple hardware adaptation (e.g., neighboring pixels have different exposure times or different ND filters). More sophisticated hardware-based HDR methods like log cameras are commercially available, but have a rather small application range. Recent advances in SPAD arrays with in-pixel time-to-digital converters and histogramming circuits give us some confidence that the method we propose here could be implemented in practice.

    I agree with you that this paper is still an early stage academic project right now. Our main goal here is to present the idea of flux measurement using inter-photon timing and analyze its theoretical extreme dynamic range performance.

    Thanks,
    Atul

    Replies
    1. Atul, maybe you can explain, in simple terms, the advantage of your approach. Particularly, if I measure the # of photons that arrive in some interval T, then I can deduce both the average time between photons, and come up with the Poisson arrival statistics (time-domain) for those photons, which should be equivalent to your approach of measuring an ensemble of arrival times directly.

      I guess another way of saying the same thing is: what is the disadvantage of working with the integral instead of the pulse train? Is it just that you can count the number of photons exactly?

      The best argument I can see for your approach is that if you already have a SPAD with arrival timing hardware on the chip, then you could use it for HDR imaging applications at little extra cost (power, pixel pitch) compared to normal SPAD arrival-time operation. Curious what you and the team think.

    2. The inter-photon time computed from just the number of photons in a given interval T (as you suggested) is indeed a good approximation of the average inter-photon time at low-to-moderate flux levels. However, the key difference is at high flux, for the following reason:

      A dead-time-limited SPAD pixel has a non-linear relationship between the number of incident and detected photons. So, if we only rely on discrete photon counts, at extremely high flux, we suffer from quantization error --- a large range of incident flux values may map to the same photon count.

      On the other hand, if we can measure the inter-photon timing directly, we can distinguish between two similar flux values --- at a higher flux value we expect to measure a smaller average inter-photon time even if the resulting photon counts are the same.

      This is shown pictorially in the Figure 3 screenshot in the blog post above.

    3. Thanks for the further information, Atul. It seems your approach helps with the specific dead-time problem with SPADs under high-flux conditions. Integrating devices like CIS or CIS-QIS don't have this problem, so aside from FWC concerns for (highest flux) × (min. integration time), CIS and CIS-QIS would perform better than PF-SPAD and equal to IP-SPAD (assuming one used a range of shutter times).

      I don't know enough about SPADs to resolve this question: how can you measure inter-photon arrival timing but not count the number of arrivals? It seems that if you can do the former, then in principle you should be able to do the latter. Maybe it is a limitation of today's on-chip counting circuits for SPADs?

    4. Our method relies on the *average* inter-photon time so we need both pieces of information --- photon timestamps and the total photon counts.

      Although our demonstration uses a SPAD, conceptually, what we are proposing is more general than a specific technology. Any real world image sensor will have certain physical limits to how many photons it can capture in a fixed exposure time (e.g. full well capacity for a CIS, or dead-time for a SPAD, or number of spatio-temporal jots in a QIS). The key takeaway is the following: knowing the inter-photon timing in addition to photon counts can increase dynamic range.

  3. Let's consider a more concrete example to convince you of the usefulness of timing information in passive imaging.

    Suppose our image sensor pixel detects N photons in a fixed exposure time T while imaging a scene point with true brightness B. Let's also assume that N is very close to the sensor's saturation limit.*

    Now suppose the scene point brightness is increased to B+ε, which corresponds to a slightly higher expected photon count of N+δ in the same exposure time T.** If ε is small, δ is much smaller than 1 (since we assume the pixel is close to saturation). So, in practice, the pixel will still measure N photons (with high probability), giving the same estimated brightness of B instead of the true brightness of B+ε.

    The reason for this limitation is that the measurement of photon counts is inherently quantized. So the image sensor cannot distinguish between small changes in brightness levels when close to saturation.

    With inter-photon imaging, we measure the photon timing information --- a continuous-valued floating-point measurement --- in addition to the photon counts. The average inter-photon times will be slightly smaller for the higher brightness level, and we can distinguish them even if the two produce the same photon counts N.

    Our paper discusses one possible implementation of this inter-photon imaging idea using SPAD pixels.

    --------------
    * I am deliberately being vague here about whether this is a CIS, QIS or a PF-SPAD. The only thing that matters is that the pixel has a saturation limit.
    ** The exact value of δ will depend on B, ε and the pixel's response curve.
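
    To put rough numbers on this, here is a tiny sketch assuming, purely for illustration, a dead-time-limited response curve N(B) = T*B/(1 + B*tau); any saturating response gives the same qualitative picture, and every numerical value below is an arbitrary assumption.

      # Purely illustrative numbers for the B vs. B+epsilon argument above.
      T, tau = 10e-3, 100e-9   # exposure (s) and assumed saturation parameter (s)
      B, eps = 1e11, 1e9       # true brightness and a 1% increase (photons/s)

      def expected_count(b):
          """Assumed saturating pixel response: N(b) = T*b / (1 + b*tau)."""
          return T * b / (1.0 + b * tau)

      def mean_interphoton_time(b):
          """Average time between detected photons for brightness b in this model."""
          return 1.0 / b + tau

      delta = expected_count(B + eps) - expected_count(B)
      shift = mean_interphoton_time(B) - mean_interphoton_time(B + eps)
      print(f"delta in expected counts: {delta:.3f}  (much less than 1 photon)")
      print(f"shift in mean inter-photon time: {shift * 1e12:.3f} ps  (continuous-valued)")

    Resolving such a small timing shift in practice relies on averaging over the many inter-photon samples collected during the exposure.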

  4. From a theoretical standpoint, if we treat photon inter-arrival times as exponentially distributed, T~Exp(Phi), then the number of photons arriving per integration time t is N~Poisson(Phi*t). The Fisher information of the exponential distribution, as a function of Phi, is different from that of the Poisson distribution. So a sample of arrival times carries different information about the flux than the number of counts does. It may be possible to show that the Fisher information for the timing data is larger than that of the counts, which would potentially lead to another way to justify this.
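
    For reference, the per-sample Fisher information of an Exp(Phi) inter-arrival time is 1/Phi^2, while that of a Poisson(Phi*t) count is t/Phi; how the totals compare then depends on how many inter-arrival samples an exposure yields and on detector effects such as dead time. A quick Monte Carlo sanity check of the two expressions, using the fact that the Fisher information equals the variance of the score (the flux, exposure and trial count below are arbitrary illustrative values):

      import numpy as np

      rng = np.random.default_rng(0)
      phi, t, trials = 1e5, 1e-3, 200_000   # flux (photons/s), exposure (s), Monte Carlo trials

      # One Exp(phi) inter-arrival time x: score = 1/phi - x, so I(phi) = Var(score) = 1/phi^2.
      x = rng.exponential(1.0 / phi, trials)
      print(np.var(1.0 / phi - x), 1.0 / phi**2)   # Monte Carlo estimate vs. analytic value

      # Count over exposure t: N ~ Poisson(phi*t), score = N/phi - t, so I(phi) = t/phi.
      N = rng.poisson(phi * t, trials)
      print(np.var(N / phi - t), t / phi)          # Monte Carlo estimate vs. analytic value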


All comments are moderated to avoid spam and personal attacks.