Thursday, January 27, 2022

EET-China: Sony Outsources Pixel Layer Manufacturing to TSMC for the First Time for iPhone 14 Pro Sensor

EET-China and Yahoo-Japan report: "Sony will expand the outsourcing of CMOS image sensor chip manufacturing, with the pixel layer chip being the first to be manufactured by TSMC.

It is reported that Sony plans to use the 40nm process at TSMC's Fab 14B in the Southern Taiwan Science Park (Nanke) for its 48-megapixel pixel layer chip, and will in the future upgrade and expand to TSMC's mature 28nm specialty process, as well as to the joint-venture fab JASM in Kumamoto, Japan.

In addition, the logic layer chip at the core of Sony's ISP will also be handed over to TSMC for mass production, using the 22nm process of TSMC's Fab 15A in the Central Taiwan Science Park; however, the back-end color filter and microlens process steps will still be completed within Sony's own factory in Japan.

Regarding Sony's change in attitude, the industry believes that this is mainly to meet demand for the iPhone 14, which is expected to be equipped with a 48-megapixel CMOS image sensor for the first time."

iToF: Comparison of Different Multipath Resolve Methods

IEEE Sensors publishes a video presentation "Multi-Layer ToF: Comparison of Different Multipath Resolve Methods for Indirect 3D Time-of-Flight" by Jonas Gutknecht and Teddy Loeliger from ZHAW School of Engineering, Switzerland.

Abstract: Multipath Interference (MPI) represents a significant source of error for many 3D indirect time-of-flight (iToF) applications. Several approaches for separating the individual signal paths in the case of MPI are described in the literature. However, a direct comparison of these approaches is not possible due to the different parameters used in these measurements. In this article, three approaches for MPI separation are compared using the same measurement and simulation data. Besides the known procedures based on the Prony method and the Orthogonal Matching Pursuit (OMP) algorithm, the Particle Swarm Optimization (PSO) algorithm is applied to this problem. For real measurement data, the OMP algorithm achieved the most reliable results and reduced the mean absolute distance error by up to 96% for the tested measurement setups. However, the OMP algorithm limits the minimal resolvable distance between two objects to approximately 2.7 m with the setup used. This limitation cannot be significantly reduced even with a considerably higher modulation bandwidth.
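The OMP-based separation compared in the abstract can be illustrated with a small sketch: with K interfering paths, the complex iToF response at modulation frequency f is a sum of K phasors, and OMP greedily picks the sparse set of candidate distances that best explains the multi-frequency measurement. The frequency list, distance grid, and two-path scene below are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def omp(D, y, k):
    """Orthogonal Matching Pursuit: select k dictionary atoms explaining y."""
    residual = y.copy()
    support = []
    amps = np.zeros(0)
    for _ in range(k):
        # atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.conj().T @ residual)))
        support.append(idx)
        # least-squares re-fit of all selected atoms
        amps, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ amps
    return support, amps

# dictionary of single-path phase responses over a grid of candidate distances
freqs = np.arange(20e6, 320e6, 20e6)      # 15 modulation frequencies (assumed)
dists = np.arange(0.5, 10.0, 0.05)        # candidate path distances (m)
D = np.exp(-2j * np.pi * np.outer(freqs, 2 * dists / C))

# synthetic two-path measurement: direct return at 3 m plus an MPI return at 7 m
y = 1.0 * D[:, np.argmin(np.abs(dists - 3.0))] \
  + 0.4 * D[:, np.argmin(np.abs(dists - 7.0))]

support, amps = omp(D, y, k=2)
print(sorted(round(dists[i], 2) for i in support))  # recovered path distances
```

The least-squares re-fit on every iteration is what distinguishes OMP from plain matching pursuit: the residual stays orthogonal to all atoms already selected.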

Wednesday, January 26, 2022

3D Thermal Imaging Startup Owl Autonomous Imaging Raises $15M in Series-A Round

PRNewswire: Owl Autonomous Imaging (Owl AI), a developer of patented monocular 3D thermal imaging and ranging solutions for automotive active safety systems, today announced $15M in Series A funding.

Owl has developed a patented 3D Thermal Ranging camera, said to be the world's only solid-state camera delivering HD thermal video with high-precision ranging for safe autonomous vehicle operation.

Tuesday, January 25, 2022

Facebook Proposes Image Sensing for More Accurate Voice Recognition

Meta (Facebook) publishes a research post "AI that understands speech by looking as well as hearing:"

"People use AI for a wide range of speech recognition and understanding tasks, from enabling smart speakers to developing tools for people who are hard of hearing or who have speech impairments. But oftentimes these speech understanding systems don’t work well in the everyday situations when we need them most: Where multiple people are speaking simultaneously or when there’s lots of background noise. Even sophisticated noise-suppression techniques are often no match for, say, the sound of the ocean during a family beach trip or the background chatter of a bustling street market.

To help us build these more versatile and robust speech recognition tools, we are announcing Audio-Visual Hidden Unit BERT (AV-HuBERT), a state-of-the-art self-supervised framework for understanding speech that learns by both seeing and hearing people speak. It is the first system to jointly model speech and lip movements from unlabeled data — raw video that has not already been transcribed. Using the same amount of transcriptions, AV-HuBERT is 75 percent more accurate than the best audio-visual speech recognition systems (which use both sound and images of the speaker to understand what the person is saying)."

Sony Holds “Sense the Wonder Day”

Sony Semiconductor Solutions Corporation (SSS) held "Sense the Wonder Day," an event to share with a wide range of stakeholders, including employees, the concept behind the company's new corporate slogan, "Sense the Wonder."

At the event, SSS President and CEO Terushi Shimizu introduced SSS as "a company driven by technology and the curiosity of each individual," and explained that SSS's technology "will create the social infrastructure of the future, and will no doubt lead to a 'sensing society' in which image sensors play an active role in all aspects of life." In addition, he said, "The imaging and sensing technologies we create will allow us to uncover new knowledge that makes us question the common sense of the world and discover new richness hidden in our daily lives."

Thesis on SPAD Quenching

University of Paris-Saclay publishes a PhD thesis "Modeling and simulation of the electrical behavior and the quenching efficiency of Single-Photon Avalanche Diodes" by Yassine Oussaiti.

"Single-photon avalanche diodes (SPADs) emerged as the most convenient photodetectors for many photon-counting applications, taking advantage of their high detection efficiencies and fast timing responses. Over the past years, their design rules have been evolving to reach more aggressive performances. Usually, trade-offs are required to meet the different constraints.To face these technological challenges, the development of reliable models to describe the device operation and predict the relevant figures-of-merit is compulsory. Evidently, the numerical solvers must be both physics-based and computationally efficient.This Ph.D. work aims to improve the modeling of silicon SPADs, focusing on the avalanche build-up and the quenching efficiency. After a state-of-the-art overview, we investigate various device architectures and potential technological improvements using TCAD methods. We highlight the role of calibrated models and scalability laws in predicting the electrical response.Furthermore, we present a Verilog-A model accounting for the temporal current build-up in SPADs. The important parameters of this model are fitted on TCAD mixed-mode predictions. Importantly, the resulting SPICE simulations of the quenching compare favorably with measurements, allowing a pixel designer to optimize circuits.Since standard TCAD tools are based on deterministic models, the stochastic description of carriers is limited. Hence, Monte Carlo algorithms are used to simulate the statistical behavior of these photodiodes, with a particular attention on the photon detection efficiency and timing jitter. The good agreement between simulation results and experiments confirms the method's accuracy, and demonstrates its ability to assist the development of new generation SPADs."

Monday, January 24, 2022

Thesis on Parasitic Light Sensitivity in Global Shutter Pixels

Toulouse University publishes a PhD thesis "Developing a method for modeling, characterizing and mitigating parasitic light sensitivity in global shutter CMOS image sensors" by Federico Pace.

"Though being treated as a figure of merit, there is no standard metric for measuring Parasitic Light Sensitivity in Global Shutter CMOS Image Sensors. Some measurement techniques have been presented in literature [Mey+11], though they may not apply for a general characterization of each pixel in the array. Chapter 4 presents a development of a standard metric for measuring Parasitic Light Sensitivity in Global Shutter CMOS Image Sensors that can be applied to the large variety of Global Shutter CMOS Image Sensors on the market.

The metric relies on Quantum Efficiency (QE) measurements, which are widely known in the image sensor community and well standardized. The metric allows per-pixel characterization at different wavelengths and impinging angles, thus allowing a more complete characterization of the Parasitic Light Sensitivity in Global Shutter CMOS Image Sensors."
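One common convention expresses Parasitic Light Sensitivity as the ratio of the sensitivity of the storage node while the shutter is closed to the normal photodiode sensitivity, quoted in dB. A QE-based metric like the one described could plausibly take this shape; the function and numbers below are illustrative assumptions, not the thesis's exact definition.

```python
import numpy as np

def pls_db(qe_shutter_closed, qe_shutter_open):
    """Parasitic Light Sensitivity in dB (one common convention).

    qe_shutter_closed: QE measured with the global shutter closed, i.e.
    light leaking into the in-pixel storage node; qe_shutter_open: normal
    QE of the photodiode. Both may be arrays over wavelength/angle,
    enabling the per-pixel, per-wavelength, per-angle maps the thesis
    describes.
    """
    ratio = np.asarray(qe_shutter_closed) / np.asarray(qe_shutter_open)
    return 10 * np.log10(ratio)

# illustrative numbers: 0.001% leakage QE vs 60% QE at one wavelength/angle
print(pls_db(1e-5, 0.6))   # ≈ -47.8 dB
```

The more negative the figure, the better the global shutter rejects light arriving outside the exposure window.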

Sunday, January 23, 2022

LiDAR with Entangled Photons

EPFL and Glasgow University publish an Optics Express paper "Light detection and ranging with entangled photons" by Jiuxuan Zhao, Ashley Lyons, Arin Can Ulku, Hugo Defienne, Daniele Faccio, and Edoardo Charbon.

"Single-photon light detection and ranging (LiDAR) is a key technology for depth imaging through complex environments. Despite recent advances, an open challenge is the ability to isolate the LiDAR signal from other spurious sources including background light and jamming signals. Here we show that a time-resolved coincidence scheme can address these challenges by exploiting spatio-temporal correlations between entangled photon pairs. We demonstrate that a photon-pair-based LiDAR can distill desired depth information in the presence of both synchronous and asynchronous spurious signals without prior knowledge of the scene and the target object. This result enables the development of robust and secure quantum LiDAR systems and paves the way to time-resolved quantum imaging applications."

Saturday, January 22, 2022

Polarization Event Camera

AIT Austrian Institute of Technology, ETH Zurich, Western Sydney University, and University of Illinois at Urbana-Champaign publish a pre-print paper "Bio-inspired Polarization Event Camera" by Germain Haessig, Damien Joubert, Justin Haque, Yingkai Chen, Moritz Milde, Tobi Delbruck, and Viktor Gruev.

"The stomatopod (mantis shrimp) visual system has recently provided a blueprint for the design of paradigm-shifting polarization and multispectral imaging sensors, enabling solutions to challenging medical and remote sensing problems. However, these bioinspired sensors lack the high dynamic range (HDR) and asynchronous polarization vision capabilities of the stomatopod visual system, limiting temporal resolution to ~12 ms and dynamic range to ~ 72 dB. Here we present a novel stomatopod-inspired polarization camera which mimics the sustained and transient biological visual pathways to save power and sample data beyond the maximum Nyquist frame rate. This bio-inspired sensor simultaneously captures both synchronous intensity frames and asynchronous polarization brightness change information with sub-millisecond latencies over a million-fold range of illumination. Our PDAVIS camera is comprised of 346x260 pixels, organized in 2-by-2 macropixels, which filter the incoming light with four linear polarization filters offset by 45 degrees. Polarization information is reconstructed using both low cost and latency event-based algorithms and more accurate but slower deep neural networks. Our sensor is used to image HDR polarization scenes which vary at high speeds and to observe dynamical properties of single collagen fibers in bovine tendon under rapid cyclical loads."

Friday, January 21, 2022

SWIR Startup Trieye Collaborates with Automotive Tier 1 Supplier Hitachi Astemo

PRNewswire: TriEye announces a collaboration with Hitachi Astemo, a Tier 1 automotive supplier of world-class products. TriEye's SEDAR (Spectrum Enhanced Detection And Ranging) has also received significant recognition, having been named a CES 2022 Innovation Award Honoree in the Vehicle Intelligence category.

"We believe that TriEye's SEDAR can provide autonomous vehicles with ranging and accurate detection capabilities that are needed to increase the safety and operability under all visibility conditions," says John Nunneley, SVP Design Engineering, Hitachi Astemo Americas, Inc.

SeeDevice Focuses on SWIR Sensing and Joins John Deere's 2022 Startup Collaborator Program

GlobeNewswire: Deere & Company announces the companies that will be part of the 2022 cohort of its Startup Collaborator program, including SeeDevice. The program launched in 2019 to enhance and deepen the company's interaction with startup companies whose technology could add value for John Deere customers.

SeeDevice is said to be a pioneer in CMOS-based SWIR image sensor technology, the first of its kind, based on quantum tunneling and plasmonic phenomena in a standard logic CMOS process. A fabless quantum image sensor licensing company, SeeDevice will collaborate with John Deere to apply its Quantum Photo-Detection (QPD) CMOS SWIR image sensor technology to agricultural and industrial applications and solutions. SeeDevice's unique technology is capable of broad-spectrum detection from a single CMOS pixel, covering spectral wavelengths from visible and near-infrared (NIR, ~400nm - 1,100nm) up to short-wave infrared (SWIR, ~1,600nm), manufactured on a normal logic CMOS process.

"We're very honored to be invited to Deere's Start-up Collaborator program. The feasibility of a single-sensor solution from visible to SWIR wavelengths opens the doors to new industrial use-cases previously not possible due to the limitations of performance, cost, power, and size. To our knowledge, it is the first in the industry to achieve this level of performance, so we're excited to be working with John Deere to enhance next-generation image sensing devices with quantum sensing," said Thomas Kim, CEO and Founder of SeeDevice. 

SeeDevice has redesigned its website emphasizing the SWIR sensitivity of its image sensors: