Sunday, October 17, 2021

IEDM 2021: Canon Presents 3.2MP SPAD Imager for Low-Light Sensing

IEDM publishes a few pictures from Canon's IEDM paper #20.2, “3.2 Megapixel 3D-Stacked Charge Focusing SPAD for Low-Light Imaging and Depth Sensing,” K. Morimoto/J. Iwata et al.

3D Backside-Illuminated SPAD Image Sensors: Unlike the CMOS image sensors found in smartphones, which measure the amount of light reaching a sensor’s pixels in a given timeframe, single photon avalanche diode (SPAD) image sensors detect each photon that reaches the pixel. Each photon is converted into an electric charge, and the electrons that result are multiplied in avalanche fashion until they form an output signal. SPAD image sensors hold great promise for high-performance, low-light imaging applications, for depth sensing, and for fully digital system architectures.

However, until now their performance has been limited by tradeoffs between photon detection efficiency and pixel size, and by poor signal-to-noise ratios. Recently a charge-focusing approach was proposed to overcome these issues, but it had not yet been implemented. In a late-news paper, Canon researchers will discuss how they did so, with the industry’s first 3D-stacked backside-illuminated (BSI) charge-focusing SPADs. The devices featured the largest array size ever reported for a SPAD image sensor (3.2 megapixels) and demonstrated a photon detection efficiency of 24.4% and timing jitter below 100 ps at 940 nm.
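The distinction drawn above between photon counting and conventional intensity integration comes down to detecting individual photons, each with some probability given by the PDE. A minimal Monte Carlo sketch using the 24.4% PDE quoted above (illustrative only, not Canon's method):

```python
import random

def detected_counts(n_photons: int, pde: float, rng: random.Random) -> int:
    """Count how many incident photons trigger an avalanche, given the
    photon detection efficiency (PDE): each photon is detected independently."""
    return sum(1 for _ in range(n_photons) if rng.random() < pde)

rng = random.Random(0)
# 1000 photons reaching one pixel during an exposure; PDE of 24.4%
counts = detected_counts(1000, 0.244, rng)
print(counts)  # roughly 244, with shot-noise spread
```

Because each detection is a discrete digital event, the readout adds no analog read noise, which is what makes fully digital architectures attractive for low-light imaging.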

From the images below:
(a) is a full-resolution color intensity image captured by the 3.2-megapixel SPAD image sensor at a high light level.

(b) is a monochrome intensity image captured by the device under a scene illumination of 2 mlux (without post-processing).

(c) is a monochrome intensity image captured by the device under a scene illumination of 0.3 mlux (without post-processing).


Saturday, October 16, 2021

2021 Walter Kosonocky Award

 ST reports at its Facebook page:

"ST’s Francois Roy succeeds in solid-state image sensors

The International Image Sensor Society recently gave François Roy the Walter Kosonocky Award for his paper entitled Fully Depleted Trench-Pinned Photo Gate for CMOS Image Sensor Applications. The Photo Gate pixel concept improves image quality and the manufacturing process.

“It is a great honor for ST, the ST Imaging teams, my PhD students and myself to receive this prestigious award and I thank them all,” said François.

At ST we are proud of our inventors. We nurture strong competences and encourage our distinguished senior experts to coach young engineers."

HDR in iToF Imaging

Lucid Vision Labs shows two nice demos of the importance of HDR in ToF imaging. The demos are based on its Helios2+ camera with the Sony IMX556 iToF sensor:

2014 Imaging Papers

IEEE Sensors keeps publishing video presentations of 2014 papers:

Author: Refael Whyte, Lee Streeter, Michael Cree, Adrian Dorrington
Affiliation: University of Waikato, New Zealand

Abstract: Time-of-Flight (ToF) range cameras measure distance for each pixel by illuminating the entire scene with amplitude modulated light and measuring the change in phase between the emitted light and reflected light. One of the most significant causes of distance accuracy errors is multi-path interference, where multiple propagation paths exist from the light source to the same pixel. These multiple propagation paths can be caused by inter-reflections, subsurface scattering, edge effects and volumetric scattering.  Several techniques have been proposed to mitigate multi-path interference. In this paper a review of current techniques for resolving measurement errors due to multi-path interference is presented, as currently there is no quantitative comparison between techniques and evaluation of technique parameters. The results will help with the selection of a multi-path interference restoration method for specific time-of-flight camera applications.
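The phase-to-distance relation this abstract relies on is simple: the round-trip delay of the amplitude-modulated light shows up as a phase shift of the modulation, which wraps at 2π. A hedged sketch with illustrative numbers (the modulation frequency and phase are not from the paper):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_rad: float, f_mod_hz: float) -> float:
    """Distance from the measured phase shift of amplitude-modulated light.
    Light travels to the target and back, hence the factor of 2 (i.e. 4*pi)."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def ambiguity_range(f_mod_hz: float) -> float:
    """Maximum unambiguous distance: the phase wraps at 2*pi."""
    return C / (2 * f_mod_hz)

# 30 MHz modulation, quarter-turn phase shift
print(round(tof_distance(math.pi / 2, 30e6), 3))  # 1.249 (meters)
print(round(ambiguity_range(30e6), 3))            # 4.997 (meters)
```

Multipath interference corrupts exactly this measurement: when several propagation paths reach one pixel, their phasors add, and the combined phase no longer corresponds to a single distance.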


Author: Mohammad Habib, Farhan Quaiyum, Syed Islam, Nicole McFarlane
Affiliation: University of Tennessee, Knoxville, United States

Abstract: Perimeter-gated single photon avalanche diodes (PGSPAD) in standard CMOS processes have increased breakdown voltages and improved dark count rates. These devices use a polysilicon gate to reduce the premature breakdown of the device. When coupled with a scintillation material, these devices could be instrumental in radiation detection. This work characterizes the variation in PGSPAD noise (dark count rate) and breakdown voltage as a function of applied gate voltages for varying device shape, size, and junction type.


Author: Min-Woong Seo, Taishi Takasawa, Keita Yasutomi, Keiichiro Kagawa, Shoji Kawahito
Affiliation: Shizuoka University, Japan

Abstract: A low-noise high-sensitivity CMOS image sensor for scientific use is developed and evaluated. The prototype sensor contains 1024(H) × 1024(V) pixels with high performance column-parallel ADCs. The measured maximum quantum efficiency (QE) is 69 % at 660 nm and long-wavelength sensitivity is also enhanced with a large sensing area and the optimized process. In addition, dark current is 0.96 pA/cm2 at 292 K, temporal random noise in a readout circuitry is 1.17 electrons RMS, and the conversion gain is 124 uV/e-. The implemented CMOS imager using 0.11-um CIS technology has a very high sensitivity of 87 V/lx*sec that is suitable for scientific and industrial applications such as medical imaging, bioimaging, surveillance cameras and so on.

Friday, October 15, 2021

IEDM 2021: Samsung Presents 0.8um Color Routing Pixel, Sony 6um SPAD Achieves 20.2% PDE at 940nm

IEDM 2021 presents many image sensor papers in its program:

  • 20-1 A Back Illuminated 6 μm SPAD Pixel Array with High PDE and Timing Jitter Performance,
    S. Shimada, Y. Otake, S. Yoshida, S. Endo, R. Nakamura, H. Tsugawa, T. Ogita, T. Ogasahara, K. Yokochi, Y. Inoue, K. Takabayashi, H. Maeda, K. Yamamoto, M. Ono, S. Matsumoto, H. Hiyama, and T. Wakano.
    Sony
    This paper presents a 6μm pitch silicon SPAD pixel array using 3D-stacked technology. A PDE of 20.2% and timing jitter FWHM of 137ps at λ=940nm with 3V excess bias were achieved. This state-of-the-art performance was enabled by the implementation of a pyramid surface structure and pixel potential profile optimization.
  • 30-1 Highly Efficient Color Separation and Focusing in the Sub-micron CMOS Image Sensor,
    S. Yun, S. Roh, S. Lee, H. Park, M. Lim, S. Ahn, and H. Choo.
    Samsung Advanced Institute of Technology
    We report a nanoscale metaphotonic color-routing (MPCR) structure that can significantly improve the low-light performance of a sub-micron CMOS image sensor. Fabricated on Samsung's commercial 0.8μm pixel sensor, the MPCR structure demonstrates increased quantum efficiency (+20%), a luminance SNR improvement (+1.22 dB at 5 lux), a comparably low color error, and good angular tolerance.
  • 20-2 3.2 Megapixel 3D-Stacked Charge Focusing SPAD for Low-Light Imaging and Depth Sensing (Late News),
    K. Morimoto, J. Iwata, M. Shinohara, H. Sekine, A. Abdelghafar, H. Tsuchiya, Y. Kuroda, K. Tojima, W. Endo, Y. Maehashi, Y. Ota, T. Sasago, S. Maekawa, S. Hikosaka, T. Kanou, A. Kato, T. Tezuka, S. Yoshizaki, T. Ogawa, K. Uehira, A. Ehara, F. Inui, Y. Matsuno, K. Sakurai, T. Ichikawa.
    Canon Inc.
    We present a new generation of scalable photon counting image sensors for low-light imaging and depth sensing, featuring read-noise-free operation. The newly proposed charge-focusing SPAD is employed in a prototype 3.2 megapixel 3D backside-illuminated image sensor, demonstrating best-in-class pixel performance with the largest array size in APD-based image sensors.
  • 23-4 1.62µm Global Shutter Quantum Dot Image Sensor Optimized for Near and Shortwave Infrared,
    J. S. Steckel, E. Josse, A. G. Pattantyus-Abraham, M. Bidaud, B. Mortini, H. Bilgen, O. Arnaud, S. Allegret-Maret, F. Saguin, L. Mazet, S. Lhostis, T. Berger, K. Haxaire, L. L. Chapelon, L. Parmigiani, P. Gouraud, M. Brihoum, P. Bar, M. Guillermet, S. Favreau, R. Duru, J. Fantuz, S. Ricq, D. Ney, I. Hammad, D. Roy, A. Arnaud, B. Vianne, G. Nayak, N. Virollet, V. Farys, P. Malinge, A. Tournier, F. Lalanne, A. Crocherie, J. Galvier, S. Rabary, O. Noblanc, H. Wehbe-Alause, S. Acharya, A. Singh, J. Meitzner, D. Aher, H. Yang, J. Romero, B. Chen, C. Hsu, K. C. Cheng, Y. Chang, M. Sarmiento, C. Grange, E. Mazaleyrat, K. Rochereau,
    STMicroelectronics
    We have developed a 1.62µm pixel pitch global shutter sensor optimized for imaging in the NIR and SWIR. This breakthrough was made possible through the use of our colloidal quantum dot thin film technology. We have scaled up this new platform technology to our 300mm manufacturing toolset.
  • 30-2 Automotive 8.3 MP CMOS Image Sensor with 150 dB Dynamic Range and Light Flicker Mitigation (Invited),
    M. Innocent, S. Velichko, D. Lloyd, J. Beck, A. Hernandez, B. Vanhoff, C. Silsby, A. Oberoi, G. Singh, S. Gurindagunta, R. Mahadevappa, M. Suryadevara, M. Rahman, and V. Korobov,
    ON Semiconductor
    The new 8.3 MP image sensor for automotive applications has a 2.1 µm pixel with overflow and triple gain readout. In comparison to the earlier 3 µm pixel, the flicker-free range increased to 110 dB and the total range to 150 dB. SNR in transitions stays above 25 dB up to 125°C.
  • 30-3 A 2.9μm Pixel CMOS Image Sensor for Security Cameras with high FWC and 97 dB Single Exposure Dynamic Range,
    T. Uchida, K. Yamashita, A. Masagaki, T. Kawamura, C. Tokumitsu, S. Iwabuchi, Onizawa, M. Ohura, H. Ansai, K. Izukashi, S. Yoshida, T. Tanikuni, S. Hiyama, H. Hirano, S. Miyazawa, Y. Tateshita,
    Sony
    We developed a new photodiode structure for CMOS image sensors with a pixel size of 2.9μm. It adds the following two structures: one forms a strong electric field P/N junction on the full-depth deep-trench isolation side wall, and the other is a dual-vertical-gate structure.
  • 30-4 3D Sequential Process Integration for CMOS Image Sensor,
    K. Nakazawa, J. Yamamoto, S. Mori, S. Okamoto, A. Shimizu, K. Baba, N. Fujii, M. Uehara, K. Hiramatsu, H. Kumano, A. Matsumoto, K. Zaitsu, H. Ohnuma, K. Tatani, T. Hirano, and H. Iwamoto,
    Sony
    We developed a new structure with pixel transistors stacked over the photodiode, fabricated by 3D sequential process integration. With this technology, we successfully increased the AMP size and demonstrated a backside-illuminated CMOS image sensor of 6752 x 4928 pixels at 0.7um pitch to prove its functionality and integrity.
  • 35-3 Computational Imaging with Vision Sensors embedding In-pixel Processing (Invited),
    J.N.P. Martel, G. Wetzstein,
    Stanford University
    Emerging vision sensors embedding in-pixel processing capabilities enable new ways to capture visual information. We review some of our work in designing new systems and algorithms using such vision sensors with applications in video-compressive imaging, high-dynamic range imaging, high-speed tracking, hyperspectral or light-field imaging.

Thursday, October 14, 2021

Prophesee CEO on Future Event-Driven Sensor Improvements

IEEE Spectrum publishes an interview with Prophesee CEO Luca Verre. There is an interesting part about the company's next generation event-driven sensor:

"For the next generation, we are working along three axes. One axis is around the reduction of the pixel pitch. Together with Sony, we made great progress by shrinking the pixel pitch from the 15 micrometers of Generation 3 down to 4.86 micrometers with generation 4. But, of course, there is still some large room for improvement by using a more advanced technology node or by using the now-maturing stacking technology of double and triple stacks. [The sensor is a photodiode chip stacked onto a CMOS chip.] You have the photodiode process, which is 90 nanometers, and then the intelligent part, the CMOS part, was developed on 40 nanometers, which is not necessarily a very aggressive node. Going for more aggressive nodes like 28 or 22 nm, the pixel pitch will shrink very much.

The benefits are clear: It's a benefit in terms of cost; it's a benefit in terms of reducing the optical format for the camera module, which means also reduction of cost at the system level; plus it allows integration in devices that require tighter space constraints. And then of course, the other related benefit is the fact that with the equivalent silicon surface, you can put more pixels in, so the resolution increases. The event-based technology is not necessarily following the same race that we are still seeing in the conventional [color camera chips]; we are not shooting for tens of millions of pixels. It's not necessary for machine vision, unless you consider some very niche exotic applications.

The second axis is around the further integration of processing capability. There is an opportunity to embed more processing capabilities inside the sensor to make the sensor even smarter than it is today. Today it's a smart sensor in the sense that it's processing the changes [in a scene]. It's also formatting these changes to make them more compatible with the conventional [system-on-chip] platform. But you can even push this reasoning further and think of doing some of the local processing inside the sensor [that's now done in the SoC processor].

The third one is related to power consumption. The sensor, by design, is actually low-power, but if we want to reach an extreme level of low power, there's still a way of optimizing it. If you look at the IMX636 gen 4, power is not necessarily optimized. In fact, what is being optimized more is the throughput. It's the capability to actually react to many changes in the scene and be able to correctly timestamp them at extremely high time precision. So in extreme situations where the scenes change a lot, the sensor has a power consumption that is equivalent to a conventional image sensor, although the time precision is much higher. You can argue that in those situations you are running at the equivalent of 1,000 frames per second or even beyond. So it's normal that you consume as much as a 10 or 100 frame-per-second sensor. [A lower power] sensor could be very appealing, especially for consumer devices or wearable devices where we know that there are functionalities related to eye tracking, attention monitoring, eye lock, that are becoming very relevant."

Hynix Aims to Capture Larger Chunk of CIS Market

BusinessKorea: "I am confident that CMOS image sensors (CISs) will be a pillar of SK Hynix's growth along with DRAMs and NAND flashes," said Song Chang-rok, VP of SK Hynix's CIS business, in an interview with SK Hynix Newsroom on Oct. 12. "The next goal is to join the leaders’ group."

Although SK Hynix is a latecomer, it aims to strengthen its R&D capabilities for image sensors and improve productivity early to join the leaders of the high-pixel count sensor market.

Currently, Sony and Samsung Electronics are the leaders of the CIS market, with a combined market share of about 80%. SK Hynix, OmniVision, and GalaxyCore are competing for the remaining 20%.

“Technology gaps become meaningless when the market undergoes big changes,” Song said. “Competition factors will change from process technologies such as miniaturization to peripheral technologies such as information sensors and intelligence sensors.”

The CIS market is expected to grow 7.3% annually from US$19.9 billion in 2021 to US$26.3 billion in 2025, Gartner said in June. During the same period, the overall semiconductor market is expected to grow by 4.0% annually and the memory market by 4.1%.
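As a quick sanity check on these figures, the implied compound annual growth rate can be reproduced in a few lines (the small mismatch with the quoted 7.3% comes from rounding in the published endpoints):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of growth years."""
    return (end / start) ** (1 / years) - 1

# CIS market: $19.9B (2021) -> $26.3B (2025), i.e. four growth years
print(round(100 * cagr(19.9, 26.3, 4), 1))  # 7.2, close to the quoted 7.3%
```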

A few more quotes from Song Chang-rok's article (via Google Translate):

"As the memory semiconductor market grows and technology advances, new fabs are being built and new processes and equipment are introduced. Idle assets and prior art generated during this process can be applied to the CIS business. CIS requires a lower level of refinement compared to memory, but the equipment and processes required for production are similar.

In addition, the CIS business is important in that it serves as a bridgehead for SK hynix to expand into the non-memory market. We will respond effectively to market growth and create a successful story.

As a latecomer, we have experienced some trial and error, but we have continued to grow steadily. At first, customers doubted the possibility that SK hynix could do CIS business, but now it is recognized as a major supplier in the low-pixel area of 13MP (megapixel) or less. In order to expand into the 32MP or higher high-pixel market that can create high added value, we are strengthening our R&D capabilities and striving to secure productivity.

Above all, SK hynix has a great advantage in securing 'Pixel Shrink' technology, which determines the reliability of CIS. Cell miniaturization know-how has been accumulated for a long time in the DRAM field, and proven equipment is deployed on the production line. When our competitors go through multiple stages, we can find shortcuts. We will strengthen our competitiveness by making good use of these advantages.

Analog-driven CIS cannot continue to reduce pixel size like DRAM. When the limit of miniaturization is reached, overcoming it requires new innovations in the surrounding technology rather than the process technology. In the future, CIS will evolve into an information sensor or intelligence sensor, not just a visual sensor. Accordingly, the paradigm of competition also changes.

The CIS business has different characteristics from the memory semiconductor business, such as a multi-variety, small-volume production system and a complex industrial ecosystem, but there were some inefficiencies from using the existing memory semiconductor processes and systems as-is. We focused on speeding up decision-making and improving work efficiency by adapting them for the CIS business."

Yole on Camera Module Market

Yole Developpement reports on CCM market:

"After the outbreak of the pandemic in 2020, activity across society and industry, including the Compact Camera Module (CCM) industry, was temporarily suspended, but swift action was taken, and the CCM industry quickly recovered.

The multi-camera approach in mobile not only increases photography functions (such as macro, telephoto, etc.) but also greatly improves the photographic effects. As a result, the multiple-camera strategy is applicable in most smartphones, causing the number of mobile CCMs to increase from 4.9B to 5.4B units from 2019 to 2020, a Year-over-Year (YoY) growth of 10.4%.

From imaging to sensing, the 3D camera could be in the front or rear (the front also includes optical fingerprint recognition), and it will positively affect the development direction of the multi-camera approach in mobile phones.

The automotive market started with rear imaging and has now developed to 360° surround-view, going from one camera to at least four or even more. Autonomous driving is also increasing the need for more cameras. In addition, cars also need in-cabin cameras, as well as cameras to replace rearview mirrors. These are the next wave of the CCM market.

In the consumer sector, products are becoming intelligent – connected to everything – allowing vision to play a more significant role in applications such as robots and home surveillance cameras. These will need more cameras.

Yole Développement expects the revenue of the global camera module market to expand from $34B in 2020 to $59B in 2026, at a 9.8% CAGR."

Wednesday, October 13, 2021

Assorted Videos: Cordy, Teledyne e2v, SCD, Emberion, Isorg, FLIR

 Zhuhai Cordy Electronic Technology publishes a video of its image sensor testing machine:

Teledyne e2v publishes a promotional video about its Emerald 36M and 67M sensors:


SCD publishes a video from its 5MP MSIR imager, said to be the world's highest-resolution MWIR sensor:


QinetiQ publishes an interview with Emberion and a short Q&A session:

Isorg posts a demo of its large area fingerprint sensor integrated into a smartphone display:


Autosens publishes a short interview with Teledyne FLIR on automotive use case for thermal cameras:

MIPI A-PHY Unveils PAM16 in its Roadmap

MIPI Alliance announces the completion of the development of the next version of the MIPI A-PHY SerDes interface, which will double the maximum available downlink data rate from 16 Gbps to 32 Gbps to support evolving requirements of automotive displays and sensors (cameras, lidars and radars). The enhanced version, v1.1, will also double the data rate available for uplink control traffic and introduce options for implementing A-PHY’s lower speed gears over lower-cost legacy cables, providing additional flexibility for manufacturers to implement A-PHY.

Tuesday, October 12, 2021

BAE Systems Unveils Low-Light Image Sensor Enabling Night Vision in Overcast Starlight Conditions

BusinessWire: BAE Systems unveils its BSI “Hawkeye” HWK1411 ultra low-light image sensor, said to enable market-leading night vision capabilities with reduced size, weight, and power. The 1.6MP sensor is designed for battery-powered soldier systems, unmanned platforms, and targeting and surveillance applications.

The sensor achieves imaging in overcast starlight conditions at 120 fps, which is said to be a truly groundbreaking result for a CMOS sensor.

ST to Present its Quantum Dot SWIR Sensor at IEDM 2021

IEDM Press Kit shows a few figures from STMicro's presentation of its quantum dot NIR/SWIR imager paper:

Paper #23.4, “1.62µm Global Shutter Quantum Dot Image Sensor Optimized for Near and Shortwave Infrared,” J.S. Steckel et al, STMicroelectronics

Record Quantum Efficiency for NIR/SWIR Sensors: STMicroelectronics researchers will report a 1.62µm pixel-pitch global shutter sensor for imaging in the near-infrared (NIR) and shortwave infrared (SWIR) regions of the light spectrum. It demonstrated record optical performance: an unprecedented quantum efficiency of >50% and a shutter efficiency of >99.94%. The breakthrough was made possible by use of a novel colloidal PbS quantum dot thin-film technology, and the devices were fabricated on a 300mm manufacturing toolset.

  • The top photo is of a qualification wafer showing (a) elementary quantum film (QF) test structures; (b) pixel matrix test chips; and (c) full image sensor products.
  • Going from left to right in the middle set of images/drawings are a QF photodiode array integrated on top of a CMOS readout IC; the QF photodiode cross-section; and a graphical description of the device stack.
  • At the bottom is an outdoor image taken with the 940nm NIR QF sensor (left) and with a high-end smartphone camera (right). The NIR image shows a significant difference in contrast, and the ability to clearly identify the black electrical wires hidden in the tree leaves, vs. the visible image.

Sheba Microsystems Introduces a Pixel-shift Technology for Smartphones

BusinessWire: Sheba Microsystems releases ShebaSR, said to be the world’s first miniature actuator that can improve the resolution of smartphone camera images by up to 9 times.

Image sensors for mobile phones are much smaller than those in professional cameras. The actuator needs to reliably shift the image sensor by sub-micron increments. The electromagnetic actuators that work for DSLR and mirrorless cameras are too bulky to fit in the tight spaces of mobile devices, and they lack the speed and accuracy to provide more than 4x the image resolution. Therefore, integrating pixel-shift technology remained difficult – until now.

“By leveraging its expertise in the MEMS actuation field, Sheba today is bringing a new DSLR camera feature, long desired by smartphone camera users, into the smartphone world. ShebaSR™ can precisely shift the image sensor by a distance equal to 1/3 that of a pixel, with a repeatability of <20 nm, and an extremely high-speed response; making it not only match but surpass the most advanced DSLR pixel shifting capability using only simple super resolution algorithms,” said Faez Ba-Tis, CEO of Sheba Microsystems. “This technology is ready for market entry as the actuator’s fabrication process has already been scaled up for mass production at a prominent MEMS foundry, the MEMS driver is readily available in the market, and it has been qualified for the standard reliability tests of smartphones.”

ShebaSR is also remarkably rugged. The product can be subjected to shocks, drops, tumbles, as well as temperature and humidity fluctuations without deterioration in performance.
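Pixel-shift super-resolution of the kind described above can be illustrated with a toy simulation: nine exposures offset by 1/3-pixel steps are interleaved onto a grid 3x finer in each axis. This is a sketch of the general technique, not Sheba's algorithm:

```python
import numpy as np

def capture(scene: np.ndarray, dy: int, dx: int, factor: int = 3) -> np.ndarray:
    """Simulate one low-res exposure: shift the fine-grid scene by a
    sub-pixel offset, then average each factor x factor block into one pixel."""
    shifted = np.roll(scene, shift=(-dy, -dx), axis=(0, 1))
    h, w = shifted.shape
    return shifted.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def super_resolve(frames: dict, factor: int = 3) -> np.ndarray:
    """Interleave the shifted low-res frames back onto the fine grid."""
    h, w = next(iter(frames.values())).shape
    out = np.zeros((h * factor, w * factor))
    for (dy, dx), frame in frames.items():
        out[dy::factor, dx::factor] = frame
    return out

rng = np.random.default_rng(0)
scene = rng.random((12, 12))                # "ground truth" on the fine grid
frames = {(dy, dx): capture(scene, dy, dx)  # 9 exposures, 1/3-pixel steps
          for dy in range(3) for dx in range(3)}
hi = super_resolve(frames)
print(hi.shape)  # (12, 12): 3x the 4x4 native resolution in each axis
```

The interleaved result is a box-filtered estimate of the fine-grid scene; real implementations follow this with a deconvolution or demosaic-like step, which is presumably part of the "simple super resolution algorithms" mentioned above.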

Sheba publishes a demo of its technology:


Sunday, October 10, 2021

Sony Introduces Cloud Service to Process Data from its AI-Enabled Sensors

In a push to build a recurring revenue model, Sony Semiconductor Solutions announces that its AITRIOS edge AI sensing platform will launch in Japan, the U.S. and Europe starting from late 2021.

In May 2020, Sony announced an intelligent vision sensor, the IMX500, the world’s first image sensor to be equipped with AI processing functionality. Adding to the IMX500 intelligent vision sensor, AITRIOS offers partners a variety of features that help them craft solutions. Platform partners can include developers building AI that runs on cameras, developers building AI-driven sensing applications, camera manufacturers/module integrators who produce AI cameras, and system integrators who build systems that integrate these AI cameras and sensing applications.

Saturday, October 09, 2021

Artilux Cooperates with Conti on Ge-on-Si SWIR LiDAR, Announces 12-inch Wafer Production Readiness

Artilux announces that "through collaborations with industry-leading customers such as Continental, Artilux will play a key role in delivering affordable ADAS and self-driving systems that operate in the SWIR spectrum to provide safe sensing, imaging, and ranging applications from neighborhoods to highways."

The company also announces "the world’s first single-chip CMOS-based GeSi imager for a compact dual-mode (2D/3D) SWIR (Shortwave Infrared) sensing & imaging system. The high-resolution GeSi imager is fabricated on TSMC's 12-inch CMOS production line, and is ready for commercialization at scale, enabling an ever-growing ecosystem to mark the beginning of SWIR sensing and imaging for consumer markets.

Artilux’s GeSi imager meets the expectation from stakeholders in the ecosystem for achieving compact form-factor, low power consumption, safety (lead-free), cost-competitiveness and mass production-ready."


IDTechEx Compares SWIR Imaging Technologies

PRNewswire: IDTechEx publishes a comparison of SWIR technologies:

"Arguably the most compelling attribute of imaging in the SWIR region is the reduced optical scattering of longer wavelengths of light. Reduced scattering means that SWIR cameras on vehicles and drones can see through fog and dust clouds, greatly improving visibility and hence safety.

Another benefit of SWIR imaging is distinguishing visually similar materials, which may have similar absorption (and thus reflection) spectra in the visible region but significant differences in the SWIR region. This ability is highly valuable for applications such as quality control in industrial processes, where it enables unwanted items such as rocks and metals to be spotted in food production for example, and sorting recycling.

Other benefits include thermal imaging for items with temperatures between 200 and 500 C, and the ability to see through materials that are opaque to visible light but transparent to SWIR. Silicon is a great example of this, with SWIR imaging used to check the quality of wafer attachment.

...at present short-wave infra-red (SWIR, 1000-2000 nm) imaging is dominated by expensive InGaAs (Indium Gallium Arsenide) sensors that can absorb light up to around 1800 nm. These can cost upwards of $10,000 due to the expense of producing the InGaAs layer via vapor deposition, low manufacturing yields, and limited pixel density that increases material consumption for a given resolution relative to standard silicon photodetectors.

The combination of commercially desirable attributes and an expensive incumbent technology creates a clear opportunity for a disruptive, low-cost alternative. As such, multiple competing approaches for SWIR imaging are being developed."

Friday, October 08, 2021

Masterclass on New Developments In CMOS Image Sensors

Albert Theuwissen gives a "Masterclass on Developments In CMOS Image Sensors Since IS Europe 2021 Online" on October 14, 2021. 

The world of CMOS image sensors is changing at a pace we have never seen before.  New applications, new technologies, and new features are constantly added to the large portfolio of CMOS devices on the market.  In this way the performance as well as the possibilities of the devices are constantly improved.

During the Image Sensors Europe 2019 and 2020 conferences, a masterclass was organised around the recent developments in the CIS world and, due to popular demand, Albert Theuwissen and Smithers will re-run the 3-hour online masterclass to review the year and report on the latest developments.

Several subjects will be discussed: small pixels, new colour filters, new ToF image sensors, new stacked devices, global shutter devices, high-dynamic-range techniques, etc.  Some topics that are not yet announced at the moment of writing this abstract will be included as well, because one can be sure: further development of CIS technology will not stop on short notice!  Exciting times are still ahead for imaging engineers and the imaging community.

A look back at the developments over the last 12 months will be given, including critical comments on the published papers and data sheets.  In many cases the information made available to the general public contains a lot of rubbish and data that makes no sense.  The speaker will analyse the figures and numbers published and will compare them to data available from other companies.  A real, unbiased benchmark of performance data will be given in the workshop.

Old ToF Presentations

IEEE Sensors keeps publishing old videos from its archives. Here is a recent bunch of such publications:

"Introduction to Time-of-Flight Imaging" by Edoardo Charbon from Technische Universiteit Delft, Netherlands

"In this paper, the most important architectures used in time-of-flight (TOF) imaging will be described, starting with global shutter image sensors, ultra-fast CCDs with 100ns frames, on to TOF-specific architectures, such as direct, or pulse-based TOF and indirect, or phase-based TOF, implemented both in CCD and CMOS processes with optimized optical stacks."

"A Fast Global Shutter Image Sensor Based on the VOD Mechanism" by Erez Tadmor, Idan Bakish, Shlomo Felzenshtein, Eli Larry, Giora Yahav, David Cohen from Microsoft

"In this paper the physical principles that allow a fast (ns scale) global shutter operation using the vertical overflow drain mechanism are explained and characterized. Several new measurement methodologies are developed in order to quantify the fast shutter temporal and spatial behaviour. Measurement results that highlight different physical properties of the shutter mechanism are surveyed, and the results are studied and analyzed. Process and device simulations of a pinned photodiode with a VOD mechanism are presented in order to give insights regarding the physical origins of the measured phenomena."

"Resolving Multipath Interference in Kinect: an Inverse Problem Approach" by Ayush Bhandari, Micha Feigin, Shahram Izadi, Christoph Rhemann, Mirko Schmidt, Ramesh Raskar from MIT and Microsoft

"Multipath interference (MPI) is one of the major sources of both depth and amplitude measurement errors in Time-of-Flight (ToF) cameras. This problem has received a lot of attention in the recent past. In this work, we discuss the MPI problem within the framework of inverse problems based on the Fredholm integral and multi-frequency measurements. As compared to previous approaches that consider up to two interfering paths, our model considers the general case of K interfering paths. In the theoretical setting, we show that for the case of K interfering paths of light, 2K + 1 frequency measurements suffice to compute the depth and amplitude images corresponding to each of the K optical paths. Our algorithm is practical, providing a deterministic solution. Also, for the first time, we demonstrate the effectiveness of our model on an off-the-shelf Microsoft Kinect. Theoretical findings and practical demonstration warrant future research and further experimentation."
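For the single-path case (K = 1), the idea behind the multi-frequency model is easy to see: each complex ToF measurement is an amplitude times a phasor encoding the round-trip delay, and the ratio of two measurements at different frequencies isolates that delay. A sketch with illustrative values (not the authors' Fredholm-based solver, which handles the general K-path case):

```python
import cmath

C = 3e8  # speed of light, m/s

def measurement(amplitude: float, depth_m: float, f_hz: float) -> complex:
    """Ideal single-path complex ToF measurement: amplitude times a
    phasor whose phase encodes the round-trip delay."""
    tau = 2 * depth_m / C
    return amplitude * cmath.exp(-1j * 2 * cmath.pi * f_hz * tau)

# Two frequencies; for a single return path the ratio of measurements
# isolates the delay phasor, from which depth follows.
f1, f2 = 20e6, 30e6
m1 = measurement(0.8, 2.5, f1)
m2 = measurement(0.8, 2.5, f2)
ratio = m2 / m1                              # exp(-i*2*pi*(f2-f1)*tau)
tau = -cmath.phase(ratio) / (2 * cmath.pi * (f2 - f1))
print(round(C * tau / 2, 3))                 # 2.5 (meters); amplitude is abs(m1)
```

With K interfering paths the single ratio no longer suffices, which is why the paper needs 2K + 1 frequency measurements and a spectral-estimation-style inversion.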


"Depth-Range Extension with Folding Technique for SPAD-Based TOF LIDAR Systems" by Daniele Perenzoni, Leonardo Gasparini, Nicola Massari, David Stoppa from Fondazione Bruno Kessler, Italy

"Direct Time-of-Flight cameras can be implemented combining Single-Photon Avalanche Diodes (SPAD) with Time-to-Digital Converters (TDCs). Such a technology exhibits an intrinsic distance range limitation due to the TDC range, which is fixed by the TDC resolution and number of bits. Up to now it was not possible to extend this range without redesigning the TDC. In this work we propose a method to overcome this limitation thanks to a folding measuring technique. In our implementation the TDC is based on a ring-oscillator; the TDC starts when a photon is detected and stops when an external Stop signal is provided. As the number of bits is limited, the TDC restarts from 0 after reaching the final value (folding). In this paper, we show that it is possible to arbitrarily extend the maximum detectable distance through repeated measurements by modifying the timing of the “stop” signal in multiples of the TDC’s maximum range."
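The folding arithmetic described in the abstract is modular: the TDC code is the measured interval modulo the TDC range, and the full interval is recovered once the fold index is known (in the paper, the index is inferred from the repeated measurements with the stop signal shifted in multiples of the range). A minimal sketch of that arithmetic:

```python
def folded_code(t_arrival: int, t_stop: int, tdc_range: int) -> int:
    """TDC count from photon arrival to the stop signal, modulo the TDC
    range (the counter wraps to 0 after reaching its final value)."""
    return (t_stop - t_arrival) % tdc_range

def unfold(code: int, fold_index: int, tdc_range: int) -> int:
    """Recover the full interval once the fold index is known."""
    return code + fold_index * tdc_range

R = 256                        # e.g. an 8-bit TDC
t_stop, t_arrival = 1000, 100  # true interval: 900 counts = 3 folds + 132
code = folded_code(t_arrival, t_stop, R)
print(code, unfold(code, 900 // R, R))  # 132 900
```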


"A Low-Power Pixel-Level Circuit for High Dynamic Range Time-of-Flight Camera" by Nicola Massari, David Stoppa, Lucio Pancheri from Fondazione Bruno Kessler, and Università degli Studi di Trento, Italy.

"Design of a low-power pixel-level circuit for extending the dynamic range of a ToF camera."

Thursday, October 07, 2021

Samsung Introduces 17nm FinFET Process for Stacked CIS, Aims at 600MP Resolution, 8K 240fps Video, 120dB HDR

Samsung introduces the 17LPV process at its Foundry Forum 2021.  It's a combination of 28nm BEOL and FinFET. One of the main applications of the new process is logic dies in high-resolution, high-speed, HDR CIS: