Saturday, October 16, 2021

2021 Walter Kosonocky Award

ST reports on its Facebook page:

"ST’s Francois Roy succeeds in solid-state image sensors

The International Image Sensor Society recently gave François Roy the Walter Kosonocky Award for his paper entitled Fully Depleted Trench-Pinned Photo Gate for CMOS Image Sensor Applications. The Photo Gate pixel concept improves image quality and the manufacturing process.

"It is a great honor for ST, the ST Imaging teams, my PhD students and myself to receive this prestigious award and I thank them all," said François.

At ST we are proud of our inventors. We nurture strong competences and encourage our distinguished senior experts to coach young engineers."

HDR in iToF Imaging

Lucid Vision Labs shows two nice demos of the importance of HDR in ToF imaging. The demos are based on its Helios2+ camera with the Sony IMX556 iToF sensor:

2014 Imaging Papers

IEEE Sensors keeps publishing video presentations of 2014 papers:

Author: Refael Whyte, Lee Streeter, Michael Cree, Adrian Dorrington
Affiliation: University of Waikato, New Zealand

Abstract: Time-of-Flight (ToF) range cameras measure distance for each pixel by illuminating the entire scene with amplitude modulated light and measuring the change in phase between the emitted light and reflected light. One of the most significant causes of distance accuracy errors is multi-path interference, where multiple propagation paths exist from the light source to the same pixel. These multiple propagation paths can be caused by inter-reflections, subsurface scattering, edge effects and volumetric scattering.  Several techniques have been proposed to mitigate multi-path interference. In this paper a review of current techniques for resolving measurement errors due to multi-path interference is presented, as currently there is no quantitative comparison between techniques and evaluation of technique parameters. The results will help with the selection of a multi-path interference restoration method for specific time-of-flight camera applications.
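The phase-to-distance conversion the abstract describes can be sketched in a few lines (a minimal single-frequency illustration, not the authors' code; the four-sample convention and the example modulation frequency below are assumptions):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_phase(a0, a1, a2, a3):
    """Recover the phase shift from four amplitude samples taken at
    0, 90, 180 and 270 degrees of the modulation period, assuming
    a_i = offset + amp * cos(phi - i*pi/2)."""
    return math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)

def tof_distance(phase, f_mod):
    """Distance from phase shift: d = c * phi / (4 * pi * f_mod)."""
    return C * phase / (4 * math.pi * f_mod)

def ambiguity_range(f_mod):
    """Maximum unambiguous range: the phase wraps every c / (2 * f_mod)."""
    return C / (2 * f_mod)
```

With a 30 MHz modulation frequency, for instance, the ambiguity range is about 5 m. Multi-path interference adds a second reflected phasor to the four samples, which biases the recovered phase and hence the distance; the techniques reviewed in the paper aim to undo exactly this bias.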


Author: Mohammad Habib, Farhan Quaiyum, Syed Islam, Nicole McFarlane
Affiliation: University of Tennessee, Knoxville, United States

Abstract: Perimeter-gated single photon avalanche diodes (PGSPAD) in standard CMOS processes have increased breakdown voltages and improved dark count rates. These devices use a polysilicon gate to reduce the premature breakdown of the device. When coupled with a scintillation material, these devices could be instrumental in radiation detection. This work characterizes the variation in PGSPAD noise (dark count rate) and breakdown voltage as a function of applied gate voltages for varying device shape, size, and junction type.


Author: Min-Woong Seo, Taishi Takasawa, Keita Yasutomi, Keiichiro Kagawa, Shoji Kawahito
Affiliation: Shizuoka University, Japan

Abstract: A low-noise high-sensitivity CMOS image sensor for scientific use is developed and evaluated. The prototype sensor contains 1024(H) × 1024(V) pixels with high performance column-parallel ADCs. The measured maximum quantum efficiency (QE) is 69% at 660 nm and long-wavelength sensitivity is also enhanced with a large sensing area and the optimized process. In addition, dark current is 0.96 pA/cm² at 292 K, temporal random noise in a readout circuitry is 1.17 electrons RMS, and the conversion gain is 124 µV/e-. The implemented CMOS imager using 0.11-µm CIS technology has a very high sensitivity of 87 V/lx·sec that is suitable for scientific and industrial applications such as medical imaging, bioimaging, surveillance cameras and so on.

Friday, October 15, 2021

IEDM 2021: Samsung Presents 0.8um Color Routing Pixel, Sony 6um SPAD Achieves 20.2% PDE at 940nm

IEDM 2021 presents many image sensor papers in its program:

  • 20-1 A Back Illuminated 6 μm SPAD Pixel Array with High PDE and Timing Jitter Performance,
    S. Shimada, Y. Otake, S. Yoshida, S. Endo, R. Nakamura, H. Tsugawa, T. Ogita, T. Ogasahara, K. Yokochi, Y. Inoue, K. Takabayashi, H. Maeda, K. Yamamoto, M. Ono, S. Matsumoto, H. Hiyama, and T. Wakano.
    Sony
    This paper presents a 6μm pitch silicon SPAD pixel array using 3D-stacked technology. A PDE of 20.2% and a timing jitter FWHM of 137ps at λ=940nm with 3V excess bias were achieved. This state-of-the-art performance was enabled by the implementation of a pyramid surface structure and pixel potential profile optimization.
  • 30-1 Highly Efficient Color Separation and Focusing in the Sub-micron CMOS Image Sensor,
    S. Yun, S. Roh, S. Lee, H. Park, M. Lim, S. Ahn, and H. Choo.
    Samsung Advanced Institute of Technology
    We report a nanoscale metaphotonic color-routing (MPCR) structure that can significantly improve the low-light performance of a sub-micron CMOS image sensor. Fabricated on Samsung's commercial 0.8μm pixel sensor, the MPCR structure demonstrates increased quantum efficiency (+20%), a luminance SNR improvement (+1.22 dB @ 5 lux), comparably low color error, and good angular tolerance.
  • 20-2 3.2 Megapixel 3D-Stacked Charge Focusing SPAD for Low-Light Imaging and Depth Sensing (Late News),
    K. Morimoto, J. Iwata, M. Shinohara, H. Sekine, A. Abdelghafar, H. Tsuchiya, Y. Kuroda, K. Tojima, W. Endo, Y. Maehashi, Y. Ota, T. Sasago, S. Maekawa, S. Hikosaka, T. Kanou, A. Kato, T. Tezuka, S. Yoshizaki, T. Ogawa, K. Uehira, A. Ehara, F. Inui, Y. Matsuno, K. Sakurai, T. Ichikawa.
    Canon Inc.
    We present a new generation of scalable photon counting image sensors for low-light imaging and depth sensing, featuring read-noise-free operation. The newly proposed charge-focusing SPAD is employed in a prototype 3.2 megapixel 3D backside-illuminated image sensor, demonstrating best-in-class pixel performance with the largest array size among APD-based image sensors.
  • 23-4 1.62µm Global Shutter Quantum Dot Image Sensor Optimized for Near and Shortwave Infrared,
    J. S. Steckel, E. Josse, A. G. Pattantyus-Abraham, M. Bidaud, B. Mortini, H. Bilgen, O. Arnaud, S. Allegret-Maret, F. Saguin, L. Mazet, S. Lhostis, T. Berger, K. Haxaire, L. L. Chapelon, L. Parmigiani, P. Gouraud, M. Brihoum, P. Bar, M. Guillermet, S. Favreau, R. Duru, J. Fantuz, S. Ricq, D. Ney, I. Hammad, D. Roy, A. Arnaud , B. Vianne, G. Nayak, N. Virollet, V. Farys, P. Malinge, A. Tournier, F. Lalanne, A. Crocherie, J. Galvier, S. Rabary, O. Noblanc, H. Wehbe-Alause , S. Acharya, A. Singh, J. Meitzner, D. Aher, H. Yang, J. Romero, B. Chen, C.Hsu, K. C. Cheng, Y. Chang, M. Sarmiento, C. Grange, E. Mazaleyrat, K. Rochereau,
    STMicroelectronics
    We have developed a 1.62µm pixel pitch global shutter sensor optimized for imaging in the NIR and SWIR. This breakthrough was made possible through the use of our colloidal quantum dot thin-film technology. We have scaled up this new platform technology to our 300mm manufacturing toolset.
  • 30-2 Automotive 8.3 MP CMOS Image Sensor with 150 dB Dynamic Range and Light Flicker Mitigation (Invited),
    M. Innocent, S. Velichko, D. Lloyd, J. Beck, A. Hernandez, B. Vanhoff, C. Silsby, A. Oberoi, G. Singh, S. Gurindagunta, R. Mahadevappa, M. Suryadevara, M. Rahman, and V. Korobov,
    ON Semiconductor
    The new 8.3 MP image sensor for automotive applications has a 2.1 µm pixel with overflow and triple gain readout. In comparison to the earlier 3 µm pixel, the flicker-free range increased to 110 dB and the total range to 150 dB. SNR in transitions stays above 25 dB up to 125°C.
  • 30-3 A 2.9μm Pixel CMOS Image Sensor for Security Cameras with high FWC and 97 dB Single Exposure Dynamic Range,
    T. Uchida, K. Yamashita, A. Masagaki, T. Kawamura, C. Tokumitsu, S. Iwabuchi, Onizawa, M. Ohura, H. Ansai, K. Izukashi, S. Yoshida, T. Tanikuni, S. Hiyama, H. Hirano, S. Miyazawa, Y. Tateshita,
    Sony
    We developed a new photodiode structure for CMOS image sensors with a pixel size of 2.9μm. It adds the following two structures: one forms a strong electric field P/N junction on the full-depth deep-trench isolation side wall, and the other is a dual-vertical-gate structure.
  • 30-4 3D Sequential Process Integration for CMOS Image Sensor,
    K. Nakazawa, J. Yamamoto, S. Mori, S. Okamoto, A. Shimizu, K. Baba, N. Fujii, M. Uehara, K. Hiramatsu, H. Kumano, A. Matsumoto, K. Zaitsu, H. Ohnuma, K. Tatani, T. Hirano, and H. Iwamoto,
    Sony
    We developed a new structure of pixel transistors stacked over the photodiode, fabricated by 3D sequential process integration. With this technology, we successfully increased the AMP size and demonstrated a backside-illuminated CMOS image sensor of 6752 x 4928 pixels at 0.7um pitch to prove its functionality and integrity.
  • 35-3 Computational Imaging with Vision Sensors embedding In-pixel Processing (Invited),
    J.N.P. Martel, G. Wetzstein,
    Stanford University
    Emerging vision sensors embedding in-pixel processing capabilities enable new ways to capture visual information. We review some of our work in designing new systems and algorithms using such vision sensors with applications in video-compressive imaging, high-dynamic range imaging, high-speed tracking, hyperspectral or light-field imaging.

Thursday, October 14, 2021

Prophesee CEO on Future Event-Driven Sensor Improvements

IEEE Spectrum publishes an interview with Prophesee CEO Luca Verre. There is an interesting part about the company's next generation event-driven sensor:

"For the next generation, we are working along three axes. One axis is around the reduction of the pixel pitch. Together with Sony, we made great progress by shrinking the pixel pitch from the 15 micrometers of Generation 3 down to 4.86 micrometers with generation 4. But, of course, there is still some large room for improvement by using a more advanced technology node or by using the now-maturing stacking technology of double and triple stacks. [The sensor is a photodiode chip stacked onto a CMOS chip.] You have the photodiode process, which is 90 nanometers, and then the intelligent part, the CMOS part, was developed on 40 nanometers, which is not necessarily a very aggressive node. Going for more aggressive nodes like 28 or 22 nm, the pixel pitch will shrink very much.

The benefits are clear: It's a benefit in terms of cost; it's a benefit in terms of reducing the optical format for the camera module, which means also reduction of cost at the system level; plus it allows integration in devices that require tighter space constraints. And then of course, the other related benefit is the fact that with the equivalent silicon surface, you can put more pixels in, so the resolution increases. The event-based technology is not following necessarily the same race that we are still seeing in the conventional [color camera chips]; we are not shooting for tens of millions of pixels. It's not necessary for machine vision, unless you consider some very niche exotic applications.

The second axis is around the further integration of processing capability. There is an opportunity to embed more processing capabilities inside the sensor to make the sensor even smarter than it is today. Today it's a smart sensor in the sense that it's processing the changes [in a scene]. It's also formatting these changes to make them more compatible with the conventional [system-on-chip] platform. But you can even push this reasoning further and think of doing some of the local processing inside the sensor [that's now done in the SoC processor].

The third one is related to power consumption. The sensor, by design, is actually low-power, but if we want to reach an extreme level of low power, there's still a way of optimizing it. If you look at the IMX636 gen 4, power is not necessarily optimized. In fact, what is being optimized more is the throughput. It's the capability to actually react to many changes in the scene and be able to correctly timestamp them at extremely high time precision. So in extreme situations where the scenes change a lot, the sensor has a power consumption that is equivalent to conventional image sensor, although the time precision is much higher. You can argue that in those situations you are running at the equivalent of 1000 frames per second or even beyond. So it's normal that you consume as much as a 10 or 100 frame-per-second sensor. [A lower power] sensor could be very appealing, especially for consumer devices or wearable devices where we know that there are functionalities related to eye tracking, attention monitoring, eye lock, that are becoming very relevant."

Hynix Aims to Capture Larger Chunk of CIS Market

BusinessKorea: "I am confident that CMOS image sensors (CISs) will be a pillar of SK Hynix's growth along with DRAMs and NAND flashes," said Song Chang-rok, VP of SK Hynix's CIS business, in an interview with SK Hynix Newsroom on Oct. 12. "The next goal is to join the leaders’ group."

Although SK Hynix is a latecomer, it aims to strengthen its R&D capabilities for image sensors and improve productivity early to join the leaders of the high-pixel count sensor market.

Currently, Sony and Samsung Electronics are the leaders of the CIS market, with a combined market share of about 80%. SK Hynix, OmniVision, and GalaxyCore are competing for the remaining 20%.

“Technology gaps become meaningless when the market undergoes big changes,” Song said. “Competition factors will change from process technologies such as miniaturization to peripheral technologies such as information sensors and intelligence sensors.”

The CIS market is expected to grow 7.3% annually from US$19.9 billion in 2021 to US$26.3 billion in 2025, Gartner said in June. During the same period, the overall semiconductor market is expected to grow by 4.0% annually and the memory market by 4.1%.
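The quoted annual rate follows from the endpoint figures via the standard CAGR formula (a quick sanity check of the numbers, not taken from the Gartner report; the small gap to the quoted 7.3% is presumably rounding in the reported endpoints):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

# $19.9B in 2021 -> $26.3B in 2025 spans four growth years
cis = cagr(19.9, 26.3, 4)
print(f"Implied CIS market CAGR 2021-2025: {cis:.1%}")  # ~7.2%
```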

A few more quotes from Song Chang-rok's article (in Google translation):

"As the memory semiconductor market grows and technology advances, new fabs are being built and new processes and equipment are introduced. Idle assets and prior art generated during this process can be applied to the CIS business. CIS requires a lower level of refinement compared to memory, but the equipment and processes required for production are similar.

In addition, the CIS business is important in that it serves as a bridgehead for SK hynix to expand into the non-memory market. We will respond effectively to market growth and create a successful story.

As a latecomer, we have experienced some trial and error, but we have continued to grow steadily. At first, customers doubted the possibility that SK hynix could do CIS business, but now it is recognized as a major supplier in the low-pixel area of 13MP (megapixel) or less. In order to expand into the 32MP or higher high-pixel market that can create high added value, we are strengthening our R&D capabilities and striving to secure productivity.

Above all, SK hynix has a great advantage in securing 'Pixel Shrink' technology, which determines the reliability of CIS. Cell miniaturization know-how has been accumulated for a long time in the DRAM field, and proven equipment is deployed on the production line. When our competitors go through multiple stages, we can find shortcuts. We will strengthen our competitiveness by making good use of these advantages.

Analog driven CIS cannot continue to reduce pixel size like DRAM. When the limit of miniaturization is reached, overcoming it requires new innovations in the surrounding technology rather than the process technology. In the future, CIS will evolve into an information sensor or intelligence sensor, not just a visual sensor. Accordingly, the paradigm of competition also changes.

The CIS business has different characteristics from the memory semiconductor business, such as a multi-variety small-volume production system and a complex industrial ecosystem, but there have been some inefficiencies by using the existing memory semiconductor process or system as it is. We focused on improving the speed of decision-making and improving work efficiency by improving it for the CIS business."

Yole on Camera Module Market

Yole Developpement reports on CCM market:

"After the outbreak of the pandemic in 2020, society and industry, including the Compact Camera Module (CCM) industry, were temporarily suspended, but swift action was taken and the CCM industry quickly recovered.

The multi-camera approach in mobile not only increases photography functions (such as macro, telephoto, etc.) but also greatly improves the photographic effects. As a result, the multi-camera strategy is used in most smartphones, with the number of mobile CCMs increasing from 4.9B to 5.4B units from 2019 to 2020, a year-over-year (YoY) growth of 10.4%.

From imaging to sensing, the 3D camera could be in the front or rear (the front also includes optical fingerprint recognition), and it will positively affect the development direction of the multi-camera approach in mobile phones.

The automotive market started with rear imaging and has now developed to 360° surround-view, going from one camera to at least four or even more. Autonomous driving is also increasing the need for more cameras. In addition, cars also need in-cabin cameras, as well as cameras to replace rearview mirrors. These are the next wave of the CCM market.

In the consumer sector, products are becoming intelligent – connected to everything – allowing vision to play a more significant role in applications such as robots and home surveillance cameras. These will need more cameras.

Yole Développement expects the revenue of the global camera module market to expand from $34B in 2020 to $59B in 2026, at a 9.8% CAGR."

Wednesday, October 13, 2021

Assorted Videos: Cordy, Teledyne e2v, SCD, Emberion, Isorg, FLIR

Zhuhai Cordy Electronic Technology publishes a video of its image sensor testing machine:

Teledyne e2v publishes a promotional video about its Emerald 36M and 67M sensors:


SCD publishes a video of its 5MP MSIR imager, said to be the world's highest-resolution MWIR sensor:


QinetiQ publishes an interview with Emberion and a short Q&A session:

Isorg posts a demo of its large area fingerprint sensor integrated into a smartphone display:


Autosens publishes a short interview with Teledyne FLIR on automotive use case for thermal cameras:

MIPI A-PHY Unveils PAM16 in its Roadmap

MIPI Alliance announces the completion of development of the next version of the MIPI A-PHY SerDes interface, which will double the maximum available downlink data rate from 16 Gbps to 32 Gbps to support the evolving requirements of automotive displays and sensors (cameras, lidars, and radars). The enhanced version, v1.1, will also double the data rate available for uplink control traffic and introduce options for implementing A-PHY's lower speed gears over lower-cost legacy cables, providing additional flexibility for manufacturers to implement A-PHY.

Tuesday, October 12, 2021

BAE Systems Unveils Low-Light Image Sensor Enabling Night Vision in Overcast Starlight Conditions

BusinessWire: BAE Systems unveils its BSI “Hawkeye” HWK1411 ultra-low-light image sensor, said to enable market-leading night vision capabilities with reduced size, weight, and power. The 1.6MP sensor is designed for battery-powered soldier systems, unmanned platforms, and targeting and surveillance applications.

The sensor achieves imaging in overcast starlight conditions at 120 fps, which is said to be a truly groundbreaking result for a CMOS sensor.

ST to Present its Quantum Dot SWIR Sensor at IEDM 2021

The IEDM Press Kit shows a few figures from the STMicro presentation of its quantum dot NIR/SWIR imager paper:

Paper #23.4, “1.62µm Global Shutter Quantum Dot Image Sensor Optimized for Near and Shortwave Infrared,” J.S. Steckel et al, STMicroelectronics

Record Quantum Efficiency for NIR/SWIR Sensors: STMicroelectronics researchers will report a 1.62µm pixel-pitch global shutter sensor for imaging in the near-infrared (NIR) and shortwave infrared (SWIR) regions of the light spectrum. It demonstrated record optical performance: an unprecedented quantum efficiency of >50% and a shutter efficiency of >99.94%. The breakthrough was made possible by use of a novel colloidal PbS quantum dot thin-film technology, and the devices were fabricated on a 300mm manufacturing toolset.

  • The top photo is of a qualification wafer showing (a) elementary quantum film (QF) test structures; (b) pixel matrix test chips; and (c) full image sensor products.
  • Going from left to right in the middle set of images/drawings are a QF photodiode array integrated on top of a CMOS readout IC; the QF photodiode cross-section; and a graphical description of the device stack.
  • At the bottom is an outdoor image taken with the 940nm NIR QF sensor (left) and with a high-end smartphone camera (right). The NIR image shows a significant difference in contrast, and the ability to clearly identify the black electrical wires hidden in the tree leaves, vs. the visible image.