Thursday, January 27, 2022

EET-China: For the First Time Sony Outsources Pixel Layer Manufacturing to TSMC for iPhone 14 Pro Sensor

EET-China and Yahoo-Japan report: "Sony will expand the outsourcing of CMOS image sensor chip manufacturing, of which the pixel layer chip is the first to be manufactured by TSMC.

It is reported that Sony plans to use the 40nm process at TSMC's Fab 14B in Nanke (Southern Taiwan Science Park) for its 48-megapixel pixel layer chip, and will later upgrade and expand to TSMC's mature 28nm specialty process, as well as to the JASM joint-venture fab in Kumamoto, Japan.

In addition, the logic layer chip at the core of Sony's ISP will also be handed over to TSMC for mass production, using the 22nm process of Fab 15A in the Central Taiwan Science Park, but the later-stage color filter and microlens processing will still be completed in Sony's own fab in Japan.

Regarding Sony's change in attitude, the industry believes that this is mainly to meet the demand for the iPhone 14 equipped with a 48-megapixel CMOS image sensor for the first time."

iToF: Comparison of Different Multipath Resolve Methods

IEEE Sensors publishes a video presentation "Multi-Layer ToF: Comparison of Different Multipath Resolve Methods for Indirect 3D Time-of-Flight" by Jonas Gutknecht and Teddy Loeliger from ZHAW School of Engineering, Switzerland.

Abstract: Multipath Interferences (MPI) represent a significant source of error for many 3D indirect time-of-flight (iToF) applications. Several approaches for separating the individual signal paths in case of MPI are described in literature. However, a direct comparison of these approaches is not possible due to the different parameters used in these measurements. In this article, three approaches for MPI separation are compared using the same measurement and simulation data. Besides the known procedures based on the Prony method and the Orthogonal Matching Pursuit (OMP) algorithm, the Particle Swarm Optimization (PSO) algorithm is applied to this problem. For real measurement data, the OMP algorithm has achieved the most reliable results and reduced the mean absolute distance error up to 96% for the tested measurement setups. However, the OMP algorithm limits the minimal distance between two objects with the setup used to approximately 2.7 m. This limitation cannot be significantly reduced even with a considerably higher modulation bandwidth.
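The abstract compares Prony-, OMP-, and PSO-based separation on common data. As a rough illustration of the OMP idea only, here is a minimal numpy sketch on simulated multi-frequency iToF measurements; the modulation frequencies, distance grid, and path amplitudes are all invented for illustration and are not the paper's setup:

```python
import numpy as np

C = 3e8                                       # speed of light, m/s
freqs = np.array([20e6, 40e6, 60e6, 80e6])    # hypothetical modulation frequencies

def atom(d):
    # complex iToF response of a unit-amplitude return at distance d (meters)
    return np.exp(-2j * np.pi * freqs * 2 * d / C)

grid = np.arange(0.5, 7.0, 0.05)              # candidate distances (dictionary)
D = np.stack([atom(d) for d in grid], axis=1)

def omp(y, D, k):
    """Greedy Orthogonal Matching Pursuit: pick k atoms that best explain y."""
    residual, idx = y.copy(), []
    for _ in range(k):
        # select the atom most correlated with the current residual
        idx.append(int(np.argmax(np.abs(D.conj().T @ residual))))
        A = D[:, idx]
        # re-fit amplitudes of all selected atoms, then update the residual
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    return sorted(float(grid[i]) for i in idx)

# simulated measurement: direct path at 2.0 m plus a weaker bounce at 6.0 m
y = 1.0 * atom(2.0) + 0.4 * atom(6.0)
paths = omp(y, D, 2)
print(paths)
```

With exact atoms present in the dictionary, the greedy selection recovers both path distances; real sensors add noise, finite bandwidth, and the minimum-separation limit the abstract reports.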

Wednesday, January 26, 2022

3D Thermal Imaging Startup Owl Autonomous Imaging Raises $15M in Series-A Round

PRNewswire: Owl Autonomous Imaging (Owl AI), a developer of patented monocular 3D thermal imaging and ranging solutions for automotive active safety systems, today announced $15M in Series A funding.

Owl has developed a patented 3D Thermal Ranging camera, the world's only solid-state camera delivering HD thermal video with high precision ranging for safe autonomous vehicle operation.

Tuesday, January 25, 2022

Facebook Proposes Image Sensing for More Accurate Voice Recognition

Meta (Facebook) publishes a research post "AI that understands speech by looking as well as hearing:"

"People use AI for a wide range of speech recognition and understanding tasks, from enabling smart speakers to developing tools for people who are hard of hearing or who have speech impairments. But oftentimes these speech understanding systems don’t work well in the everyday situations when we need them most: Where multiple people are speaking simultaneously or when there’s lots of background noise. Even sophisticated noise-suppression techniques are often no match for, say, the sound of the ocean during a family beach trip or the background chatter of a bustling street market.

To help us build these more versatile and robust speech recognition tools, we are announcing Audio-Visual Hidden Unit BERT (AV-HuBERT), a state-of-the-art self-supervised framework for understanding speech that learns by both seeing and hearing people speak. It is the first system to jointly model speech and lip movements from unlabeled data — raw video that has not already been transcribed. Using the same amount of transcriptions, AV-HuBERT is 75 percent more accurate than the best audio-visual speech recognition systems (which use both sound and images of the speaker to understand what the person is saying)."

Sony Holds “Sense the Wonder Day”

Sony Semiconductor Solutions Corporation (SSS) held "Sense the Wonder Day," an event to share with a wide range of stakeholders, including employees, the concept behind the company's new corporate slogan, "Sense the Wonder."

At the event, SSS President and CEO Terushi Shimizu introduced SSS as "a company driven by technology and the curiosity of each individual," and explained that SSS's technology "will create the social infrastructure of the future, and will no doubt lead to a 'sensing society' in which image sensors play an active role in all aspects of life." In addition, he said, "The imaging and sensing technologies we create will allow us to uncover new knowledge that makes us question the common sense of the world and discover new richness hidden in our daily lives."

Thesis on SPAD Quenching

University of Paris-Saclay publishes a PhD thesis "Modeling and simulation of the electrical behavior and the quenching efficiency of Single-Photon Avalanche Diodes" by Yassine Oussaiti.

"Single-photon avalanche diodes (SPADs) emerged as the most convenient photodetectors for many photon-counting applications, taking advantage of their high detection efficiencies and fast timing responses. Over the past years, their design rules have been evolving to reach more aggressive performances. Usually, trade-offs are required to meet the different constraints. To face these technological challenges, the development of reliable models to describe the device operation and predict the relevant figures-of-merit is compulsory. Evidently, the numerical solvers must be both physics-based and computationally efficient. This Ph.D. work aims to improve the modeling of silicon SPADs, focusing on the avalanche build-up and the quenching efficiency. After a state-of-the-art overview, we investigate various device architectures and potential technological improvements using TCAD methods. We highlight the role of calibrated models and scalability laws in predicting the electrical response. Furthermore, we present a Verilog-A model accounting for the temporal current build-up in SPADs. The important parameters of this model are fitted on TCAD mixed-mode predictions. Importantly, the resulting SPICE simulations of the quenching compare favorably with measurements, allowing a pixel designer to optimize circuits. Since standard TCAD tools are based on deterministic models, the stochastic description of carriers is limited. Hence, Monte Carlo algorithms are used to simulate the statistical behavior of these photodiodes, with a particular attention on the photon detection efficiency and timing jitter. The good agreement between simulation results and experiments confirms the method's accuracy, and demonstrates its ability to assist the development of new generation SPADs."

Monday, January 24, 2022

Thesis on Parasitic Light Sensitivity in Global Shutter Pixels

Toulouse University publishes a PhD thesis "Developing a method for modeling, characterizing and mitigating parasitic light sensitivity in global shutter CMOS image sensors" by Federico Pace.

"Though being treated as a figure of merit, there is no standard metric for measuring Parasitic Light Sensitivity in Global Shutter CMOS Image Sensors. Some measurement techniques have been presented in literature [Mey+11], though they may not apply for a general characterization of each pixel in the array. Chapter 4 presents a development of a standard metric for measuring Parasitic Light Sensitivity in Global Shutter CMOS Image Sensors that can be applied to the large variety of Global Shutter CMOS Image Sensors on the market.

The metric relies on Quantum Efficiency (QE) measurements, which are widely known in the image sensor community and well standardized. The metric allows per-pixel characterization at different wavelengths and impinging angles, thus allowing a more complete characterization of the Parasitic Light Sensitivity in Global Shutter CMOS Image Sensors."
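Since the proposed metric is built on QE measurements, a natural reading is that the per-pixel PLS is the ratio of the storage-node QE (response while the shutter is closed) to the photodiode QE, measured per wavelength and angle. A minimal sketch under that assumption, with invented QE numbers; note that dB conventions for shutter rejection vary between vendors:

```python
import numpy as np

# Hypothetical per-pixel QE measurements (fractions) from a QE setup:
# qe_pd:  QE through the normal photodiode integration path
# qe_mem: QE of the in-pixel storage node while the global shutter is closed
wavelengths = np.array([450, 550, 650, 850])          # nm
qe_pd  = np.array([0.55, 0.65, 0.60, 0.30])
qe_mem = np.array([5.5e-6, 6.5e-6, 1.2e-5, 3.0e-5])

pls = qe_mem / qe_pd                   # dimensionless PLS ratio per wavelength
rejection_db = -10 * np.log10(pls)     # one common dB convention (assumption)

for wl, p, r in zip(wavelengths, pls, rejection_db):
    print(f"{wl} nm: PLS = 1/{1/p:,.0f}  ({r:.1f} dB)")
```

Repeating the measurement over angle of incidence would yield the per-pixel, per-wavelength, per-angle characterization the thesis describes.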

Sunday, January 23, 2022

LiDAR with Entangled Photons

EPFL and Glasgow University publish an Optics Express paper "Light detection and ranging with entangled photons" by Jiuxuan Zhao, Ashley Lyons, Arin Can Ulku, Hugo Defienne, Daniele Faccio, and Edoardo Charbon.

"Single-photon light detection and ranging (LiDAR) is a key technology for depth imaging through complex environments. Despite recent advances, an open challenge is the ability to isolate the LiDAR signal from other spurious sources including background light and jamming signals. Here we show that a time-resolved coincidence scheme can address these challenges by exploiting spatio-temporal correlations between entangled photon pairs. We demonstrate that a photon-pair-based LiDAR can distill desired depth information in the presence of both synchronous and asynchronous spurious signals without prior knowledge of the scene and the target object. This result enables the development of robust and secure quantum LiDAR systems and paves the way to time-resolved quantum imaging applications."
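The time-resolved coincidence idea can be illustrated with a toy simulation: signal photons stay time-correlated with their heralding idler photons, while background and jamming events show no fixed delay, so a histogram of detection-minus-herald delays peaks at the true time of flight. All rates, jitters, and windows below are invented and not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
TOF = 20e-9        # true round-trip time of flight (20 ns, i.e. a 3 m target)
WINDOW = 1e-9      # histogram bin / coincidence window

# idler photons detected locally at the source (heralding timestamps)
idler = np.sort(rng.uniform(0, 1e-3, 5000))
# returning signal photons: herald time + TOF + detector timing jitter
signal = idler + TOF + rng.normal(0, 0.1e-9, idler.size)
# uncorrelated background / jamming events swamp the return channel
background = rng.uniform(0, 1e-3, 50000)
detected = np.sort(np.concatenate([signal, background]))

# for each detected event, delay to the nearest earlier idler photon
i = np.searchsorted(idler, detected) - 1
valid = i >= 0
delays = detected[valid] - idler[i[valid]]

# correlated pairs pile up in one delay bin; uncorrelated events spread out
hist, edges = np.histogram(delays, bins=np.arange(0, 50e-9, WINDOW))
tof_est = edges[np.argmax(hist)]
print(f"estimated TOF ~ {tof_est * 1e9:.0f} ns")
```

Even with 10x more background events than signal photons, the coincidence peak stands well clear of the uniform background floor, which is the robustness the paper exploits against asynchronous spurious sources.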

Saturday, January 22, 2022

Polarization Event Camera

AIT Austrian Institute of Technology, ETH Zurich, Western Sydney University, and University of Illinois at Urbana-Champaign publish a pre-print paper "Bio-inspired Polarization Event Camera" by Germain Haessig, Damien Joubert, Justin Haque, Yingkai Chen, Moritz Milde, Tobi Delbruck, and Viktor Gruev.

"The stomatopod (mantis shrimp) visual system has recently provided a blueprint for the design of paradigm-shifting polarization and multispectral imaging sensors, enabling solutions to challenging medical and remote sensing problems. However, these bioinspired sensors lack the high dynamic range (HDR) and asynchronous polarization vision capabilities of the stomatopod visual system, limiting temporal resolution to ~12 ms and dynamic range to ~ 72 dB. Here we present a novel stomatopod-inspired polarization camera which mimics the sustained and transient biological visual pathways to save power and sample data beyond the maximum Nyquist frame rate. This bio-inspired sensor simultaneously captures both synchronous intensity frames and asynchronous polarization brightness change information with sub-millisecond latencies over a million-fold range of illumination. Our PDAVIS camera is comprised of 346x260 pixels, organized in 2-by-2 macropixels, which filter the incoming light with four linear polarization filters offset by 45 degrees. Polarization information is reconstructed using both low cost and latency event-based algorithms and more accurate but slower deep neural networks. Our sensor is used to image HDR polarization scenes which vary at high speeds and to observe dynamical properties of single collagen fibers in bovine tendon under rapid cyclical loads."
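The four linear polarizer orientations in each 2-by-2 macropixel are sufficient to recover the linear Stokes parameters, which is the basis of the "low cost and latency" reconstruction mentioned above. A minimal numpy sketch of that standard reconstruction; the input intensities here are synthetic, generated via Malus' law for fully polarized light at 30 degrees:

```python
import numpy as np

def stokes_from_macropixel(i0, i45, i90, i135):
    """Linear Stokes parameters from a 2x2 macropixel with 0/45/90/135 deg filters."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear pol.
    aop = 0.5 * np.arctan2(s2, s1)                         # angle of polarization
    return s0, dolp, aop

# fully polarized light at 30 degrees, via Malus' law: I(theta) = cos^2(theta - phi)
phi = np.deg2rad(30)
ivals = [np.cos(np.deg2rad(a) - phi)**2 for a in (0, 45, 90, 135)]
s0, dolp, aop = stokes_from_macropixel(*ivals)
print(s0, dolp, np.rad2deg(aop))
```

The same arithmetic applies per event or per frame; the deep-network variants in the paper trade this closed form for better accuracy under noise and spatial aliasing.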

Friday, January 21, 2022

SWIR Startup Trieye Collaborates with Automotive Tier 1 Supplier Hitachi Astemo

PRNewswire: TriEye announces a collaboration with Hitachi Astemo, a Tier 1 automotive supplier of world-class products. TriEye's SEDAR (Spectrum Enhanced Detection And Ranging) has also received significant recognition, having been named a CES 2022 Innovation Award Honoree in the Vehicle Intelligence category.

"We believe that TriEye's SEDAR can provide autonomous vehicles with ranging and accurate detection capabilities that are needed to increase the safety and operability under all visibility conditions," says John Nunneley, SVP Design Engineering, Hitachi Astemo Americas, Inc.

SeeDevice Focuses on SWIR Sensing and Joins John Deere's 2022 Startup Collaborator Program

GlobeNewswire: Deere & Company announces the companies that will be part of the 2022 cohort of its Startup Collaborator program, including SeeDevice. The program launched in 2019 to deepen the company's interaction with startups whose technology could add value for John Deere customers.

SeeDevice is said to be a pioneer in CMOS-based SWIR image sensor technology, the first of its kind, based on quantum tunneling and plasmonic phenomena in a standard logic CMOS process. A fabless quantum image sensor licensing company, SeeDevice will collaborate with John Deere to implement its Quantum Photo-Detection (QPD) CMOS SWIR image sensor technology for agricultural and industrial applications and solutions. SeeDevice's unique technology enables broad-spectrum detection from a single CMOS pixel, covering spectral wavelengths from the visible and near-infrared (NIR, ~400nm - 1,100nm) up to the short-wave infrared (SWIR, ~1,600nm), manufactured on a normal logic CMOS process.

"We're very honored to be invited to Deere's Start-up Collaborator program. The feasibility of a single-sensor solution from visible to SWIR wavelengths opens the doors to new industrial use-cases previously not possible due to the limitations of performance, cost, power, and size. To our knowledge, it is the first in the industry to achieve this level of performance, so we're excited to be working with John Deere to enhance next-generation image sensing devices with quantum sensing," said Thomas Kim, CEO and Founder of SeeDevice. 

SeeDevice has redesigned its website emphasizing the SWIR sensitivity of its image sensors:

Thursday, January 20, 2022

Omnivision Unveils its New Logo

Omnivision publishes short videos explaining its new logo:

UV Sensors in SOI Process

Tower publishes an MDPI paper "Embedded UV Sensors in CMOS SOI Technology" by Michael Yampolsky, Evgeny Pikhay, and Yakov Roizin.

"We report on ultraviolet (UV) sensors employing high voltage PIN lateral photodiode strings integrated into the production RF SOI (silicon on insulator) CMOS platform. The sensors were optimized for applications that require measurements of short wavelength ultraviolet (UVC) radiation under strong visible and near-infrared lights, such as UV used for sterilization purposes, e.g., COVID-19 disinfection. Responsivity above 0.1 A/W in the UVC range was achieved, and improved blindness to visible and infrared (IR) light demonstrated by implementing back-end dielectric layers transparent to the UV, in combination with differential sensing circuits with polysilicon UV filters. Degradation of the developed sensors under short wavelength UV was investigated and design and operation regimes allowing decreased degradation were discussed. Compared with other embedded solutions, the current design is implemented in a mass-production CMOS SOI technology, without additional masks, and has high sensitivity in UVC."
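The differential scheme in the abstract can be sketched in a few lines: one diode string sits under a UV-transparent back-end stack and sees UVC plus visible/IR, the other sits under a polysilicon UV filter and sees visible/IR only, and subtraction rejects the common-mode term. The function name, gain term, and current values below are invented for illustration:

```python
def uvc_estimate(i_open, i_filtered, k=1.0):
    """Differential UVC read-out (sketch): diode A has a UV-transparent
    back-end stack (sees UVC + VIS/IR); diode B sits under a polysilicon
    UV filter (sees VIS/IR only). Subtracting the k-scaled B current
    rejects the common-mode visible/IR signal; k compensates any
    responsivity mismatch between the two strings."""
    return i_open - k * i_filtered

# example: 10 nA total on the open diode, 4 nA visible/IR on the filtered one
print(uvc_estimate(10.0, 4.0))  # -> 6.0 nA attributed to UVC
```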

Wednesday, January 19, 2022

Nanostructure Modifiers for Pixel Spectral Response

University of California – Davis and W&WSens publish a paper "Reconstruction-based spectroscopy using CMOS image sensors with random photon-trapping nanostructure per sensor" by Ahasan Ahamed, Cesar Bartolo-Perez, Ahmed Sulaiman Mayet, Soroush Ghandiparsi, Lisa McPhillips, Shih-Yuan Wang, M. Saif Islam.

"Emerging applications in biomedical and communication fields have boosted the research in the miniaturization of spectrometers. Recently, reconstruction-based spectrometers have gained popularity for their compact size, easy maneuverability, and versatile utilities. These devices exploit the superior computational capabilities of recent computers to reconstruct hyperspectral images using detectors with distinct responsivity to different wavelengths. In this paper, we propose a CMOS compatible reconstruction-based on-chip spectrometer pixels capable of spectrally resolving the visible spectrum with 1 nm spectral resolution maintaining high accuracy (>95 %) and low footprint (8 um x 8 um), all without the use of any additional filters. A single spectrometer pixel is formed by an array of silicon photodiodes, each having a distinct absorption spectrum due to their integrated nanostructures, this allows us to computationally reconstruct the hyperspectral image. To achieve distinct responsivity, we utilize random photon-trapping nanostructures per photodiode with different dimensions and shapes that modify the coupling of light at different wavelengths. This also reduces the spectrometer pixel footprint (comparable to conventional camera pixels), thus improving spatial resolution. Moreover, deep trench isolation (DTI) reduces the crosstalk between adjacent photodiodes. This miniaturized spectrometer can be utilized for real-time in-situ biomedical applications such as Fluorescence Lifetime Imaging Microscopy (FLIM), pulse oximetry, disease diagnostics, and surgical guidance."
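The reconstruction step amounts to inverting a linear model m = R s, where R collects the distinct responsivity curves of the nanostructured photodiodes, m is the vector of diode readings, and s is the unknown spectrum. A minimal numpy sketch with synthetic responsivities; real devices are typically underdetermined and rely on regularized or compressive recovery, but an overdetermined noiseless case shows the idea:

```python
import numpy as np

rng = np.random.default_rng(1)
n_wl, n_det = 31, 40                 # wavelength bins, photodiodes (overdetermined here)
wl = np.linspace(400, 700, n_wl)     # nm grid, 10 nm spacing

# each photon-trapping nanostructure gives its diode a distinct responsivity curve
R = rng.random((n_det, n_wl))

# unknown input spectrum: a narrow emission line centered at 550 nm
s_true = np.exp(-0.5 * ((wl - 550.0) / 15.0) ** 2)
m = R @ s_true                       # the photodiode readings for one scene point

# least-squares reconstruction of the spectrum from the readings
s_hat, *_ = np.linalg.lstsq(R, m, rcond=None)
print(wl[np.argmax(s_hat)])          # peak recovered at 550 nm
```

With more wavelength bins than diodes, as in the paper's 1 nm resolution claim, the same model is solved with regularization or sparsity priors instead of plain least squares.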

Image Sensor Facts for Kids

Kiddle, an encyclopedia for kids, publishes a page about image sensors:

Tuesday, January 18, 2022

Recent Videos: EnliTech, IPVM, Scantinel, Infiray, Guide Sensmart, Omron, Ibeo

EnliTech presents its CIS wafer testing solutions:

IPVM publishes "Intro to Surveillance Cameras:"

Scantinel presents its FMCW LiDAR:

Infiray presents a bright future for thermal cameras in ADAS applications:

Guide Sensmart presents the world's first smartphone thermal camera with AF:

Omron publishes a webinar about its QVGA ToF sensor capable of 100klux ambient light operation:

Ibeo publishes a webinar about its SPAD-based automotive "Digital LiDAR:"

Bankrupt HiDM is Acquired by Rongxin Semiconductor

JW Insights reports that Rongxin Semiconductor acquired the bankrupt HiDM (Huaian Imaging Device Manufacturing Corporation) in Huaian, Jiangsu province, through an auction. Rongxin Semiconductor was founded in April 2021 in Ningbo, Zhejiang province. Rongxin paid RMB1.666 billion ($262.1 million) for HiDM's assets.

As a private investor, Rongxin's rescue of HiDM represents a new type of solution for the failed mega semiconductor projects that have occurred over the last several years. It is also regarded as a new force in improving China's foundry capacity.

Rongxin mainly focuses on 12-inch production lines for CIS and other chips at the 90nm-55nm nodes. The company's WLCSP TSV line focuses on advanced packaging and testing of CIS products.

Rongxin has formed strategic cooperation partnerships with several companies, including OmniVision. The company is currently completing fab construction and hiring personnel: it needs a total of about 1,500 employees, including 70 in management, 650 technical staff, and 780 production staff.

Monday, January 17, 2022

EI 2022 Course on Signal Processing for Photon-Limited Imaging

Stanley Chan from Purdue University publishes slides for his 2022 Electronic Imaging short course "Signal Processing for Photon-Limited Imaging." A few of the 81 slides:

Actlight DPD Presentation

Actlight CEO Serguei Okhonin presented "Dynamic Photodiodes: Unique Light-Sensing Technology with Tunable Sensitivity" at the Photonics Spectra Conference held online last week. Conference registration is free of charge. A few slides from the presentation:

Sunday, January 16, 2022

"Electrostatic Doping" for In-Sensor Computing

Harvard University, KIST, Pusan University, and Samsung Advanced Institute of Technology publish a pre-print paper "In-sensor optoelectronic computing using electrostatically doped silicon" by Houk Jang, Henry Hinton, Woo-Bin Jung, Min-Hyun Lee, Changhyun Kim, Min Park, Seoung-Ki Lee, Seongjun Park, and Donhee Ham.

"Complementary metal-oxide-semiconductor (CMOS) image sensors are a visual outpost of many machines that interact with the world. While they presently separate image capture in front-end silicon photodiode arrays from image processing in digital back-ends, efforts to process images within the photodiode array itself are rapidly emerging, in hopes of minimizing the data transfer between sensing and computing, and the associated overhead in energy and bandwidth. Electrical modulation, or programming, of photocurrents is requisite for such in-sensor computing, which was indeed demonstrated with electrostatically doped, but non-silicon, photodiodes. CMOS image sensors are currently incapable of in-sensor computing, as their chemically doped photodiodes cannot produce electrically tunable photocurrents. Here we report in-sensor computing with an array of electrostatically doped silicon p-i-n photodiodes, which is amenable to seamless integration with the rest of the CMOS image sensor electronics. This silicon-based approach could more rapidly bring in-sensor computing to the real world due to its compatibility with the mainstream CMOS electronics industry. Our wafer-scale production of thousands of silicon photodiodes using standard fabrication emphasizes this compatibility. We then demonstrate in-sensor processing of optical images using a variety of convolutional filters electrically programmed into a 3 × 3 network of these photodiodes."
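The in-sensor convolution described in the abstract can be emulated in software: the electrically programmed responsivities play the role of the kernel weights, and one exposure of the 3 × 3 photodiode network sums the weighted photocurrents into a single output value. A hypothetical sketch (the edge-detection kernel and test image are invented, not the paper's):

```python
import numpy as np

def in_sensor_convolve(image, weights):
    """Emulate a 3x3 photodiode network whose programmable responsivities
    encode a convolution kernel: the summed photocurrent of one exposure
    equals the dot product of the kernel with the illuminated 3x3 patch."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for r in range(h - 2):
        for c in range(w - 2):
            patch = image[r:r+3, c:c+3]
            # one "exposure": each current scales as responsivity * intensity,
            # and the shared node sums them into a single analog output
            out[r, c] = np.sum(weights * patch)
    return out

# edge-detection kernel programmed as responsivities (can be negative in the
# paper's scheme thanks to the tunable sign of the p-i-n photocurrent)
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)

img = np.zeros((6, 6))
img[2:4, 2:4] = 1.0            # bright square on a dark background
print(in_sensor_convolve(img, edge_kernel))
```

In the device itself, no digital multiply-accumulate happens at all; the optical input and the programmed responsivities perform the products, and Kirchhoff current summation performs the additions.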