Tuesday, May 11, 2021

Hynix Article on CIS Progress

EDN publishes an SK Hynix article on image sensor progress that shows the company's iToF sensor, among many other things:

Monday, May 10, 2021

Omnivision to Announce 0.61um Pixel Sensor

SparrowNews quotes Chinese sources saying that Omnivision is about to announce the OV60A sensor with 0.61um pixels.

The 1/2.8-inch 60MP OV60A is the world’s first 0.61um image sensor for mobile phone front and rear cameras. The four-in-one CFA allows near-pixel merging to deliver 15MP images with 4x the sensitivity, providing the equivalent performance of 1.22 microns for preview and native 4K video, with the additional pixels needed for EIS.

The sensor also supports a low-power mode for “always-on” sensing, which saves battery life when used in conjunction with the phone’s artificial intelligence features. The “always-on” low power modes include ambient light sensing for wake-up and low power streaming mode. The sensor also supports dual I/O voltage rails (1.8V and 1.2V) and a CPHY interface.
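As a back-of-the-envelope illustration of the binning arithmetic quoted above, here is a minimal sketch (not Omnivision's actual pipeline; all numbers are made up) of how four-in-one merging trades resolution for sensitivity:

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of same-color pixels into one binned pixel."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

rng = np.random.default_rng(0)
raw = rng.poisson(lam=25.0, size=(8, 8)).astype(float)  # toy raw patch
binned = bin_2x2(raw)                                   # quarter resolution

# Four 0.61um pixels merge into one 1.22um-equivalent pixel: ~4x signal,
# while Poisson shot noise grows only ~2x, i.e. roughly a 2x SNR gain.
print(raw.shape, "->", binned.shape, "; mean gain:", binned.mean() / raw.mean())
```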


Here is a timeline for the renewed pixel size race:

Sunday, May 09, 2021

Artilux Dramatically Reduces Ge-on-Si PD Dark Current

Artilux whitepaper presents the company's latest progress in dark current reduction:

"In this white paper, we proudly announce Artilux Halcyon GeSi Technology, which reduces dark current and DCR by more than 3 orders of magnitude compared to what was commonly known in past literature. Moreover, this breakthrough can be adopted in a wide variety of photodetectors with customized pixel arrays. With such unprecedented performance and attribute, we expect Artilux Halcyon GeSi Technology will soon be applied to multiple growing market segments ranging from NIR and SWIR image sensors, hyperspectral image sensor, 3D and 4D LiDAR sensors and beyond by working with our partners. These markets are estimated to have strong growth with double-digit CAGR (compound annual growth rate) between 2021 to 2025.

To provide a fair comparison to past literature, we fabricated a series of normal incidence photodetectors at various sizes and measured their dark currents. The resulting data with the use of Artilux Halcyon GeSi Technology are shown in Fig. 1.

To evaluate the noise performance of these photodetectors when being used in linear mode or in Geiger-mode photodetection, it’s standard to define the so-called bulk dark current density (unit: mA/cm2) and surface dark current density (unit: μA/cm) and extract them from the data shown in Fig. 1. In past literature, these two numbers were reported roughly in the order of 10 mA/cm2 and 10 μA/cm, respectively. With the use of Artilux Halcyon GeSi Technology, these two numbers can be drastically reduced to roughly a few μA/cm2 and a few nA/cm, respectively, which translates into more than 3 orders of magnitude improvement!

Halcyon GeSi Technology can be further combined with Artilux's proprietary scaling design. For a SWIR image sensor with less than 5 μm pixel pitch at the low bias voltage typical for this application, the expected performance in pilot run is on the order of a few to tens of fA dark current (uncooled). For a direct ToF (time of flight) 3D sensor with slightly larger pixel pitch and around 15V breakdown voltage, the expected performance in pilot run is on the order of tens to hundreds of kHz DCR (uncooled). We will continue to improve these figures in future Artilux products."
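For readers who want to reproduce the bulk/surface decomposition described above, here is a minimal sketch of the standard extraction, assuming circular test photodiodes and synthetic (non-Artilux) numbers:

```python
import numpy as np

# Model: I_dark = J_bulk * Area + J_surf * Perimeter. Dividing by the
# perimeter gives a line, I/P = J_bulk * (A/P) + J_surf, so a linear fit
# over devices of several sizes yields both densities.
diam_um = np.array([10.0, 20.0, 50.0, 100.0])        # assumed PD diameters
area_cm2 = np.pi * (diam_um / 2 * 1e-4) ** 2
perim_cm = np.pi * diam_um * 1e-4

i_dark = 3e-6 * area_cm2 + 2e-9 * perim_cm           # synthetic dark currents

slope, intercept = np.polyfit(area_cm2 / perim_cm, i_dark / perim_cm, 1)
print(f"J_bulk ~ {slope:.1e} A/cm^2, J_surf ~ {intercept:.1e} A/cm")
# Recovers ~3 uA/cm^2 and ~2 nA/cm, the "few uA/cm^2 / few nA/cm"
# regime the white paper reports.
```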

Apparently, the data on the graph shows the previous generation Ge-on-Si dark current:

Saturday, May 08, 2021

A Low Dark Current 160 dB DR Logarithmic Pixel

MDPI paper "A Low Dark Current 160 dB Logarithmic Pixel with Low Voltage Photodiode Biasing" is written by 2 authors with 4 affiliations: Alessandro Michel Brunetti and Bhaskar Choubey from the University of Oxford, Universität Siegen, Fraunhofer Institute for Microelectronic Circuits and Systems, and Absensing.

"A typical logarithmic pixels suffer from poor performance under low light conditions due to a leakage current, usually referred to as the dark current. In this paper, we propose a logarithmic pixel design capable of reducing the dark current through low-voltage photodiode biasing, without introducing any process modifications. The proposed pixel combines a high dynamic range with a significant improvement in the dark response compared to a standard logarithmic pixel. The reported experimental results show this architecture to achieve an almost 35 dB improvement at the expense of three additional transistors, thereby achieving an unprecedented dynamic range higher than 160 dB."

Friday, May 07, 2021

Goodix Engineers Become Finalists for the European Inventor Award 2021

ChinaDaily: The European Patent Office announces the finalists for the European Inventor Award 2021. Goodix engineers Bo Pi and Yi He have been nominated for the world's first fingerprint sensor able to check both fingerprint patterns and the presence of blood flow, covered by patent EP3072083.

Based on their combined expertise - Pi as a physicist and technologist with extensive electrical sensing knowledge and He as a former optoelectronics professor with experience in fibre optics and optical devices - the pair made two key discoveries that would later form the basis of their invention. First, that infrared light sensors - typically used by doctors for medical diagnoses - could be used to measure a finger pulse. Second, that a finger pressed against a sensor forces blood out of the capillaries. These findings led to the development of a new kind of optical sensor capable of capturing these changes while simultaneously tracing a map of the user's fingerprint. The combination of these multiple technologies makes the world's first integrated Live Finger Detection (LFD) sensor developed by Pi and He almost impossible to deceive, setting a new benchmark for smartphone security.

Today, Pi is Chief Technology Officer at Goodix while He is R&D Director.


Chronoptics on iToF Camera Design Challenges

Chronoptics CTO Refael Whyte publishes a nice article "Indirect Time-of-Flight Depth Camera Systems Design" about different trade-offs and challenges in ToF cameras. A few quotes:

"The table below compares two image sensors the [Melexis] MLX75027 and [Espros] EPC635, both of which have publicly available datasheets.


The MLX75027 has 32 times more pixels than the EPC635, but that comes at a higher price. The application of the depth data dictates the image sensor resolution required.

The pixel size, demodulation contrast and quantum efficiency are all metrics relating to the efficiency of capture of reflected photons. The bigger the pixel active area, the bigger the surface area that incoming photons can be collected over. The pixel’s active area is the fill factor multiplied by its size. Both the MLX75027 and EPC635 are back side illuminated (BSI), meaning 100% fill factor. The quantum efficiency is the ratio of electrons generated over the number of arriving photons. The higher the quantum efficiency, the more photons are captured. The demodulation contrast is a measure of the number of captured photons that are used in the depth measurement.

Illumination sources should be designed to IEC 60825-1:2014, the specification for eye safety. The other aspect of eye safety design is having no single point of failure that makes the illumination source non-eye safe. For example, if the diffuser cracks and exposes the laser elements, is it still eye safe? If not, the crack needs to be detected and the laser turned off, or two barriers used in case one fails. Indium tin oxide (ITO) can be used as a coating, as it is electrically conductive and optically transparent; the impedance will change if the surface is damaged. Or a photodiode in the laser can be used to detect changes in the back reflection indicating damage. The same considerations around power supplies shorting and other failure modes need to be considered."
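To make the photon-capture discussion concrete, here is a rough sketch that folds the listed metrics (pixel area, QE, demodulation contrast) into a single effective-signal figure; all parameter values are placeholders, not datasheet numbers:

```python
def effective_signal_electrons(flux_ph_um2_s: float, pitch_um: float,
                               qe: float, demod_contrast: float,
                               t_int_s: float) -> float:
    """Electrons contributing to the depth measurement in one integration."""
    area_um2 = pitch_um ** 2          # BSI pixel -> ~100% fill factor
    photons = flux_ph_um2_s * area_um2 * t_int_s
    return photons * qe * demod_contrast

# Hypothetical small-pixel vs large-pixel sensor under the same flux:
small = effective_signal_electrons(1e4, 5.0, 0.40, 0.80, 1e-3)
large = effective_signal_electrons(1e4, 20.0, 0.35, 0.70, 1e-3)
print(f"small: {small:.0f} e-, large: {large:.0f} e-, ratio {large/small:.1f}x")
```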

Assorted Videos: ams, Synopsys, ON Semi

Ams presents the use cases for its miniature NanEyeC camera module:

Synopsys presents its "Holistic Design Approach to LiDAR:"

ON Semi publishes a webinar about its low-power Event Triggered Imaging Using the RSL10 Smart Shot Camera:

Thursday, May 06, 2021

Gpixel and Tower Announce VGA iToF Sensor

GlobeNewswire: Gpixel and Tower announce Gpixel’s iToF sensor, GTOF0503, utilizing Tower’s pixel on its 65nm pixel-level stacked BSI CIS technology, fabricated in its Uozu, Japan facility. The GTOF0503 sensor features a 5um 3-tap iToF pixel in a 640 x 480 array, aimed at vision-guided robotics, bin picking, automated guided vehicles, automotive, and factory automation applications.

“We are very proud to announce the release of our new iToF sensor, entering the 3D imaging market, made possible by our collaboration with Tower’s team. Tower’s vast expertise in development of iToF image sensor technology provided an outstanding platform for the design of this cutting-edge performing product,” said Wim Wuyts, Chief Commercial Officer, Gpixel. “This collaboration produced a unique sensor product that is perfectly suited to serve a wide variety of fast-growing applications and sets a roadmap for future successful developments.”

A demodulation contrast of > 80% is achieved with modulation frequencies of up to 165 MHz at either 60 fps in Single Modulation Frequency (SMF) or 30 fps in Dual Modulation Frequency (DMF) depth mode.

“Tower is excited to take an important role in this extraordinary project, collaborating with Gpixel’s talented team of experts in the field of sensor development and bringing to market this new, cutting-edge iToF sensor,” said Avi Strum, SVP and GM of Sensors & Displays Business Unit, Tower Semiconductor. “Gpixel is a valuable and long-term partner, and we are confident that this partnership will continue to bring to market additional intriguing solutions.”

GTOF0503 is available as a bare die and in an 11 x 11 mm ceramic package. Samples (bare die) and evaluation kits are available as well.
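The SMF/DMF modes mentioned above come down to phase-unwrapping arithmetic: one frequency limits the unambiguous range to c/(2f), while two frequencies let the sensor resolve wrap counts. A minimal sketch (the 120 MHz second frequency and the brute-force search are illustrative assumptions, not Gpixel's method):

```python
import numpy as np

C = 299_792_458.0                       # speed of light [m/s]
f1, f2 = 165e6, 120e6                   # f2 is an assumed second frequency

def unambiguous_range(f):
    return C / (2 * f)

def depth_from_phase(phase, f):
    return C * phase / (4 * np.pi * f)

true_depth = 2.5                        # m, beyond c/(2*165MHz) ~ 0.91 m
ph1 = (4 * np.pi * f1 * true_depth / C) % (2 * np.pi)
ph2 = (4 * np.pi * f2 * true_depth / C) % (2 * np.pi)

# Try wrap counts for both frequencies and keep the pair that agrees best.
candidates = [
    (abs((depth_from_phase(ph1, f1) + k1 * unambiguous_range(f1)) -
         (depth_from_phase(ph2, f2) + k2 * unambiguous_range(f2))),
     depth_from_phase(ph1, f1) + k1 * unambiguous_range(f1))
    for k1 in range(10) for k2 in range(10)
]
print(f"SMF unambiguous range: {unambiguous_range(f1):.2f} m, "
      f"DMF depth estimate: {min(candidates)[1]:.2f} m")
```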

AIStorm's AI-in-Imager Uses Tower's Hi-K VIA Capacitor Memory

GlobeNewswire, BusinessWire: AIStorm and Tower announce that AIStorm’s new AI-in-Imager products will use AIStorm’s electron multiplication architecture and Tower’s Hi-K VIA capacitor memory, instead of digital calculations, to perform AI computation at the pixel level. This saves the silicon real estate, multi-die packaging costs, and power required by competing digital systems, and eliminates the need for input digitization. The Hi-K VIA capacitors reside in the metal layers and thus allow the AI to be built directly into the pixel matrix without any compromise on pixel density or size.

“This new imager technology opens up a whole new avenue of ‘always on’ functionality. Instead of periodically taking a picture and interfacing with an external AI processor through complex digitization, transport and memory schemes, AIStorm’s pixel matrix is itself the processor & memory. No other technology can do that,” said Avi Strum, SVP of Sensors and Displays BU at Tower Semiconductor.

AIStorm has built mobile models, under the MantisNet & Cheetah families, that use the direct pixel coupling of the AI matrix to offer sub-100uW “always on” operation with best-in-class latencies, and post-wakeup processing at up to 200 TOPS/W.
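Conceptually, charge-domain in-memory computing replaces the digital multiply-accumulate with charge sharing: weights live in capacitors, activations in voltages, and the "sum" is just charge on a shared node. A toy model of the idea (purely illustrative, not AIStorm's proprietary circuit):

```python
import numpy as np

rng = np.random.default_rng(1)
c_weights = rng.uniform(1.0, 5.0, 16)   # capacitances acting as weights [fF]
v_pixels = rng.uniform(0.0, 1.0, 16)    # pixel output voltages [V]

q_total = np.sum(c_weights * v_pixels)  # charge accumulated on shared node [fC]
v_out = q_total / np.sum(c_weights)     # charge-sharing readout voltage [V]

# One analog step computes a 16-input weighted average with no ADC and
# no digital MAC, which is where the power and area savings come from.
print(f"analog MAC output: {v_out:.3f} V")
```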


Himax Reports 70% YoY CMOS Sensor Sales Growth

GlobeNewswire: Himax reports its image sensor sales grew by 70% YoY in Q1 2021. However, it appears that this spectacular growth does not continue into Q2:

"The CIS revenue is expected to be flattish sequentially in the second quarter. The Company’s shipment has been badly capped by the foundry capacity despite surging customer demands for the CMOS image sensors for web camera and notebook. Nevertheless, a decent growth is expected in second half of 2021 thanks to a major engagement from a major existing customer.

Himax's industry-first 2-in-1 CMOS image sensor, supporting video conferencing and AI facial recognition at ultralow power, has been designed into some of the most stylish, slim-bezel notebook models of certain major notebook names. Small volume production started in the fourth quarter of last year. Meaningful ramp-up volume is expected in the coming quarters.

Regarding the ultralow power always-on CMOS image sensor that targets always-on AI applications, the Company is getting growing feedback and design adoptions from customers globally for various markets, such as car recorders, surveillance, smart electric meters, drones, smart home appliances, and consumer electronics. More progress will be reported in due course."

Samsung, UCSD, and University of Southern Mississippi Develop SWIR to Visible Image Converter

Phys.org, Newswise, UCSD: Advanced Functional Materials paper "Organic Upconversion Imager with Dual Electronic and Optical Readouts for Shortwave Infrared Light Detection" by Ning Li, Naresh Eedugurala, Dong-Seok Leem, Jason D. Azoulay, and Tse Nga Ng from Samsung Advanced Institute of Technology, UCSD, and University of Southern Mississippi presents a flat SWIR to visible converting device:

"...an organic upconversion imager that is efficient in both optical and electronic readouts, extending the capability of human and machine vision to 1400 nm, is designed and demonstrated. The imager structure incorporates interfacial layers to suppress non‐radiative recombination and provide enhanced optical upconversion efficiency and electronic detectivity. The photoresponse is comparable to state‐of‐the‐art organic infrared photodiodes exhibiting a high external quantum efficiency of ≤35% at a low bias of ≤3 V and 3 dB bandwidth of 10 kHz. The large active area of 2 cm2 enables demonstrations such as object inspection, imaging through smog, and concurrent recording of blood vessel location and blood flow pulses. These examples showcase the potential of the authors’ dual‐readout imager to directly upconvert infrared light for human visual perception and simultaneously yield electronic signals for automated monitoring applications."


Graphene to Revolutionize Automotive Imaging?

AZOsensors: The AUTOVISION Spearhead Project from the Europe-based Graphene Flagship consortium is currently creating a new graphene-based, high-resolution image sensor. The new sensor can detect a broad light spectrum from UV to SWIR.

In 2020, member organizations under the AUTOVISION umbrella announced a technique for the growth and transfer of wafer-scale graphene that uses standard semiconductor equipment. Project members collaborated to outline a suite of camera tests designed to make the AUTOVISION sensor compete with cutting-edge visible cameras, SWIR cameras, and LiDAR systems.

The AUTOVISION project is led by Qurv in Barcelona and counts on the collaboration of industrial partners such as Aixtron in the UK and Veoneer in Sweden. The project will help make the safe deployment of autonomous vehicles possible. Over the course of three years, it will produce CMOS graphene quantum dot image sensors in prototype sensor systems, ready for uptake in the automotive sector. Across the duration of the project, the developing image sensor is set to take huge leaps in sensitivity, operation speed and pixel size.

Omnivision Announces Automotive HDR ISP

Omnivision's new OAX4000 is a companion ISP for the company's HDR sensors providing a complete multicamera viewing application solution with fully processed YUV output. It is capable of processing up to four camera modules with 140 dB HDR, along with the leading LED flicker mitigation (LFM) performance in the industry and high 8MP resolution. It supports multiple CFA patterns, including Bayer, RCCB, RGB-IR and RYYCy. Additionally, the OAX4000 offers more than 30% power savings over the previous generation.


Wednesday, May 05, 2021

More about the First Event-Driven Sensor in Space

The University of Zurich publishes more info about the DAVIS240, the first event-driven sensor in space. The pair of DAVIS240 sensors launched into space was included in the custom payload of UNSW Canberra Space’s M2 CubeSat satellite, launched with Rocket Lab’s ‘They Go Up So Fast’ mission from New Zealand on March 23, 2021. The article includes a very nice high-resolution picture of the sensor's layout:

CVPR 2021 Workshop on Event-based Vision Papers On-Line

The CVPR 2021 Workshop on Event-based Vision, to be held on June 19, 2021, has already published its papers with open online access:

  • v2e: From Video Frames to Realistic DVS Events, and Suppl mat
  • Differentiable Event Stream Simulator for Non-Rigid 3D Tracking, and Suppl mat
  • Comparing Representations in Tracking for Event Camera-based SLAM
  • Image Reconstruction from Neuromorphic Event Cameras using Laplacian-Prediction and Poisson Integration with Spiking and Artificial Neural Networks
  • Detecting Stable Keypoints from Events through Image Gradient Prediction
  • EFI-Net: Video Frame Interpolation from Fusion of Events and Frames, and Suppl. mat
  • DVS-OUTLAB: A Neuromorphic Event-Based Long Time Monitoring Dataset for Real-World Outdoor Scenarios
  • N-ROD: a Neuromorphic Dataset for Synthetic-to-Real Domain Adaptation
  • Lifting Monocular Events to 3D Human Poses
  • A Cortically-inspired Architecture for Event-based Visual Motion Processing: From Design Principles to Real-world Applications
  • Spike timing-based unsupervised learning of orientation, disparity, and motion representations in a spiking neural network, and Suppl mat
  • Feedback control of event cameras
  • How to Calibrate Your Event Camera
  • Live Demonstration: Incremental Motion Estimation for Event-based Cameras by Dispersion Minimisation
Thanks to TD for the link!

Vivo's Imaging R&D Team is 700 People Strong

Baidu digital creator Lao Hu publishes an article about the camera development team and investment of one of the largest smartphone brands, Vivo:

"...in terms of imaging, Li Zhuo, director of vivo imaging products, revealed last year that the research and development investment in vivo imaging exceeded 20 billion yuan two years ago. In addition, vivo has established global imaging research and development centers in San Diego, Japan, Tokyo, Hangzhou, Xi'an and other places, with a team of more than 700 research and development personnel.

According to media reports, vivo's imaging research and development center has a very clear division of work. The imaging team in San Diego, USA mainly focuses on the platform ISP level; the imaging R&D team in Japan mainly focuses on the customization and optimization of optics, image sensors, lens modules, etc.; the imaging team in Hangzhou focuses on imaging algorithms; the imaging team in Xi’an is mainly responsible for debugging and development in the mobile imaging field and pre-research of some algorithms.

Vivo not only insists on independent innovation, but also cooperates with powerful third parties. For example, last year Vivo reached a global strategic partnership with Zeiss, the century-old German optical master. The two parties jointly established the Vivo Zeiss Joint Imaging Laboratory, bringing together their respective technology experts to apply their advantages in optics and algorithms in joint research and development, committed to solving a series of technical bottlenecks in mobile imaging, leading the continuous innovation of imaging technology, and bringing the world's top mobile phone shooting experience to consumers around the world."

Memristor Image Sensor with In-Memory Computing

The 2021 IEEE International Symposium on Circuits and Systems (ISCAS), to be held online on May 22-28, 2021, publishes all its proceedings in open access. One of the papers presents an image sensor with integrated non-volatile memory: "A new 1P1R Image Sensor with In-Memory Computing Properties based on Silicon Nitride Devices" by Nikolaos Vasileiadis, Vasileios Ntinas, Iosif-Angelos Fyrigos, Rafailia-Eleni Karamani, Vassilios Ioannou-Sougleridis, Pascal Normand, Ioannis Karafyllidis, Georgios Ch. Sirakoulis, and Panagiotis Dimitrakis from the National Center for Scientific Research “Demokritos” and Democritus University of Thrace, Greece.

"Research progress in edge computing hardware, capable of demanding in-the-field processing tasks with simultaneous memory and low power properties, is leading the way towards a revolution in IoT hardware technology. Resistive random access memories (RRAM) are promising candidates for replacing current non-volatile memories and realize storage class memories, but also due to their memristive nature they are the perfect candidates for in-memory computing architectures. In this context, a CMOS compatible silicon nitride (SiN) device with memristive properties is presented accompanied by a data-fitted model extracted through analysis of measured resistance switching dynamics. Additionally, a new phototransistor-based image sensor architecture with integrated SiN memristor (1P1R) was presented. The in-memory computing capabilities of the 1P1R device were evaluated through SPICE-level circuit simulation with the previous presented device model. Finally, the fabrication aspects of the sensor are discussed."

SPAC Reduces Aeye Valuation ahead of Closing

Reuters: AEye and the SPAC CF Finance Acquisition Corp. III amended their merger agreement, valuing the LiDAR maker AEye at $1.52B, citing valuation changes of publicly traded LiDAR companies.

In the initial announcement in February, AEye was valued at $2B. The companies attributed the terms of the amended deal to "changing conditions" in the automotive lidar industry.

Meanwhile, AEye publishes an interview with Continental and an ex-GM CTO saying that LiDAR is an absolute necessity for autonomous driving:

Tuesday, May 04, 2021

NIT Presents Nanocrystal SWIR Imager

New Imaging Technologies publishes a whitepaper "Infrared sensing using nanocrystal toward on demand light matter coupling" by Eva Izquierdo, Audrey Chu, Charlie Gréboval, Gregory Vincent, David Darson, Victor Parahyba, Pierre Potet, and Emmanuel Lhuillier from Sorbonne Université, ONERA – The French Aerospace Lab, and NIT.

"Nanocrystals are semiconductor nanoparticles whose optical properties can be tuned from UV up to THz. They are used as sources of green and red light for displays, and also show exhibit promises to design low-cost infrared detectors. Coupling subwavelength optical resonators to nanocrystals film enables to design photodiodes that absorb 80% of incident light from thin (<150 nm) nanocrystal film. It thus becomes possible to overcome the constraints linked to the short diffusion lengths which result from transport by hopping within arrays of nanocrystals enabling a high photoresponse detector operating in the SWIR range."

VLSI Symposia: Why Sony Integrates MRAM on Sensor?

VLSI Symposia 2021 will be held in a fully virtual format due to COVID-19. While there are many imaging-related papers in the program, the most intriguing one comes from Sony on non-volatile MRAM integration onto an image sensor:
  • 3D Stacked CIS Compatible 40nm Embedded STT-MRAM for Buffer Memory,
    M. Oka*, Y. Namba*, Y. Sato*, H. Uchida*, T. Doi*, T. Tatsuno*, M. Nakazawa*, A. Tamura*, R. Haga*, M. Kuroda*, M. Hosomi*, K. Suemitsu**, E. Kariyada**, T. Suzuki**, H. Tanigawa**, M. Ueki**, M. Moritoki**, Y. Takegawa**, K. Bessho* and T. Umebayashi*,
    *Sony Semiconductor Solutions Corp. and
    **Sony Semiconductor Manufacturing Corp., Japan
    This paper presents the world's first demonstration of a 40nm embedded STT-MRAM for buffer memory, which is compatible with the 3D stacked CMOS image sensor (CIS) process. We optimized a CoFeB-based perpendicular magnetic tunnel junction (p-MTJ) to suppress the degradation of magnetic properties caused by the 3D stacked wafer process. With improved processes, we achieved high speed write operation below 40 ns under typical operation voltage conditions, endurance up to 1E+10 cycles and 1 s data retention required for a buffer memory. In addition, to broaden the application of embedded MRAM (eMRAM), we proposed a novel fusion technology that integrated embedded non-volatile memory (eNVM) and buffer memory type embedded MRAM in the same chip. We achieved a data retention of 1 s ~ >10 years with a sufficient write margin using the fusion technology.
Why does Sony need MRAM on an image sensor? My seven best guesses are:
  1. Column FPN calibration. Possibly, FPN can be reduced by another order of magnitude below its current level
  2. Dark current calibration. A few dark frames at different temperatures and exposure times can be stored on-chip and subtracted when needed (see the sketch after this list)
  3. Per-pixel FPN calibration in global shutter pixels. For example, a storage node leakage can be measured at different temperatures and readout speeds and subtracted later. Or charge injection in voltage-domain GS pixels can be calibrated-out.
  4. Per-pixel PRNU and color crosstalk calibration to get a silky-smooth sky in photos
  5. PDAF pixel variations calibration, so that AF would be more accurate
  6. Temperature gradients and respective black level variations across the pixel array can be measured in different operating modes and temperatures and stored on-sensor
  7. Some kind of per-pixel calibration for ToF or event-driven sensors. Maybe store individual voltages for each APD in an array of APD pixels
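Here is the sketch promised in guess #2 (purely hypothetical, nothing from Sony's paper): dark maps stored at a few calibration points in on-chip MRAM are interpolated to the current operating condition and subtracted on readout:

```python
import numpy as np

# Stored calibration data: per-pixel dark current maps at two
# temperatures (values invented for illustration) [e-/s].
dark_25C = np.full((4, 4), 0.5)
dark_60C = np.full((4, 4), 8.0)   # dark current roughly doubles every ~7 C

def dark_estimate(temp_c: float, exposure_s: float) -> np.ndarray:
    """Log-linear interpolation between the stored dark maps."""
    w = (temp_c - 25.0) / (60.0 - 25.0)
    rate = np.exp((1 - w) * np.log(dark_25C) + w * np.log(dark_60C))
    return rate * exposure_s

rng = np.random.default_rng(2)
frame = rng.poisson(100, (4, 4)) + dark_estimate(45.0, 0.5)  # raw readout
corrected = frame - dark_estimate(45.0, 0.5)                 # on-sensor subtraction
```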
Other image sensor-related papers are:
  • (Invited) A CMOS Image Sensor and an AI Accelerator for Realizing Edge-Computing-Based Surveillance Camera Systems,
    F. Morishita, N. Kato, S. Okubo, T. Toi, M. Hiraki, S. Otani, H. Abe, Y. Shinohara and H. Kondo, Renesas Electronics Corp., Japan
    This paper presents a CMOS image sensor and an AI accelerator to realize surveillance camera systems based on edge computing. For CMOS image sensors to be used for surveillance, it is desirable that they are highly sensitive even in low illuminance. We propose a new timing shift ADC used in CMOS image sensors for improving high sensitivity performance. Our proposed ADC improves non-linearity characteristics under low illuminance by 63%. Achieving power-efficient edge computing is a challenge for the systems to be used widely in the surveillance camera market. We demonstrate that our proposed AI accelerator performs inference processing for object recognition with 1 TOPS/W.
  • All-Directional Dual Pixel Auto Focus Technology in CMOS Image Sensors,
    E. S. Shim, Samsung Electronics Co., Ltd., Korea
    We developed a dual pixel with accurate and all-directional auto focus (AF) performance in CMOS image sensor (CIS). The optimized in-pixel deep trench isolation (DTI) provided accurate AF data and good image quality in the entire image area and over whole visible wavelength range. Furthermore, the horizontal-vertical (HV) dual pixel with the slanted in-pixel DTI enabled the acquisition of all-directional AF information by the conventional dual pixel readout method. These technologies were demonstrated in 1.4um dual pixel and will be applied to the further shrunken pixels.
  • Development of Advanced Inter-Color-Filter Grid on Sub-Micron-Pixel CMOS Image Sensor for Mobile Cameras with High Sensitivity and High Resolution,
    J. In-Sung, Y. Lee, H. Y. Park, J. U. Kim, D. Kang, T. Kim, M. Kim, K. Lee, M. Heo, I. Ro, J. Kim, I. Park, S. Kwon, K. Yoon, D. Park, C. Lee, E. Jo, M. Jeon, C. Park, K. R. Byun, C. K. Chang, J. S. Hur, K. Yoon, T. Jeon, J. Lee, J. Park, B. Kim, J. Ahn, H. Kim, C.-R. Moon and H.-S. Kim, Samsung Electronics Co., Ltd., Korea
    Sub-micron pixels have been widely adopted in recent CMOS image sensors to implement high resolution cameras in small form factors, i.e. slim mobile-phones. Even with shrinking pixels, customers demand higher image quality, and the pixel performance must remain comparable to that of the previous generations. Conventionally, to suppress the optical crosstalk between pixels, a metal grid has been used as an isolation structure between adjacent color filters. However, as the pixel size continues to shrink to the sub-micron regime, an optical loss increases because the focal spot size of the pixel's microlens does not downscale accordingly with the decreasing pixel size due to the diffraction limit: the light absorption inevitably occurs in the metal grid. For the first time, we have demonstrated a new lossless, dielectric-only grid scheme. The result shows 29 % increase in sensitivity and +1.2-dB enhancement in Y-SNR when compared to the previous hybrid metal-and-dielectric grid.
  • A 2.6 e-Rms Low-Random-Noise, 116.2 mW Low-Power 2-Mp Global Shutter CMOS Image Sensor with Pixel-Level ADC and In-Pixel Memory,
    M.-W. Seo, M. Chu, H.-Y. Jung, S. Kim, J. Song, J. Lee, S.-Y. Kim, J. Lee, S.-J. Byun, D. Bae, M. Kim, G.-D. Lee, H. Shim, C. Um, C. Kim, I.-G. Baek, D. Kwon, H. Kim, H. Choi, J. Go, J. Ahn, J.-k. Lee, C. Moon, K. Lee and H.-S. Kim, Samsung Electronics Co., Ltd., Korea
    This paper presents a low-random noise of 2.6 e-rms, a low-power of 116.2 mW at video rate, and a high-speed up to 960 fps 2-mega pixels global-shutter type CMOS image sensor (CIS) using an advanced DRAM technology. To achieve a high performance global-shutter CIS, we proposed a novel architecture for the digital pixel sensor which is a remarkable global shutter operation CIS with a pixel-wise ADC and an in-pixel digital memory. Each pixel has two small-pitch Cu-to-Cu interconnectors for the wafer-level stacking, and the pitch of each unit pixel is less than 5 um which is the world's smallest pixel embedding both pixel-level ADC and 22-bit memories.
  • A Photon-Counting 4Mpixel Stacked BSI Quanta Image Sensor with 0.3e- Read Noise and 100dB Single-Exposure Dynamic Range,
    J. Ma, D. Zhang, O. Elgendy and S. Masoodian, Gigajot Technology Inc., USA
    This paper reports a 4Mpixel, 3D-stacked backside illuminated Quanta Image Sensor (QIS) with 2.2um pixels that can operate simultaneously in photon-counting mode with deep sub-electron read noise (0.3e- rms) and linear integration mode with large full-well capacity (30k e-). A single-exposure dynamic range of 100dB is realized with this dual-mode readout under room temperature. This QIS device uses a cluster-parallel readout architecture to achieve up to 120fps frame rate at 550mW power consumption. (A quick arithmetic check of the 100dB figure follows after this list.)
  • A 5.1ms Low-Latency Face Detection Imager with In-Memory Charge-Domain Computing of Machine-Learning Classifiers,
    H. Song*, S. Oh*, J. Salinas*, S.-Y. Park** and E. Yoon*, *Univ. of Michigan, USA and **Pusan National Univ., Korea
    We present a CMOS imager for low-latency face detection empowered by parallel imaging and computing of machine-learning (ML) classifiers. The energy-efficient parallel operation and multi-scale detection eliminate image capture delay and significantly alleviate backend computational loads. The proposed pixel architecture, composed of dynamic samplers in a global shutter (GS) pixel array, allows for energy-efficient in-memory charge-domain computing of feature extraction and classification. The illumination-invariant detection was realized by using log-Haar features. A prototype 240x240 imager achieved an on-chip face detection latency of 5.1ms with a 97.9% true positive rate and 2% false positive rate at 120fps. Moreover, a dynamic nature of in-memory computing allows an energy efficiency of 419 pJ/pixel for feature extraction and classification, leading to the smallest latency-energy product of 3.66 ms·nJ/pixel with digital backend processing.
  • A CMOS LiDAR Sensor with Pre-Post Weighted-Histogramming for Sunlight Immunity Over 105 klx and SPAD-Based Infinite Interference Canceling,
    S. Hyeongseok, Sungkyunkwan Univ., Korea
    This paper presents a CMOS LiDAR sensor with high background noise (BGN) immunity. The sensor has on-chip pre-post weighted histogramming to detect only time-correlated time-of-flight (TOF) out of BGN from both sunlight and exponentially increased dark noise while enhancing sensitivity through higher excess voltage (Vex) of SPADs. The sensor also employs a SPAD-based random number generator (SRNG) for canceling interference (IF) from an infinite number of LiDARs. The sensor shows 8.08 cm accuracy for the range of 32 m under high BGN (105 klx sunlight and 48.72 kcps dark-count rate with increased Vex).
  • Advanced Multi-NIR Spectral Image Sensor with Optimized Vision Sensing System and Its Impact on Innovative Applications,
    H. Sumi*, **, H. Takehara**, J. Ohta** and M. Ishikawa*, *The Univ. of Tokyo and **Nara Institute of Science and Technology, Japan
    Innovative applications with multiple near-infrared (multi-NIR) spectral CMOS image sensors (CIS) and camera systems have recently been developed. The multi-NIR filter is an indispensable key technology for practical use of the multi-NIR camera system in consumer cameras. Advanced processing technology for multi-NIR signals has been developed using a Fabry-Perot structure. Three types of NIR wavelength filters are formed as a Bayer pattern with 2 x 2 um2 pixel size on a 5-Mpixel BSI CIS. The thickness differences of the three types of bandpass filters are suppressed to less than 75 nm. To enable applications in surveillance, automobiles, and fundus cameras for health management, signal processing technology has also been developed that processes and mixes each signal of a multi-NIR signal with low-intensity visible light images. This provides good image SNR (signal-to-noise ratio) under low lighting conditions of 0.1 lux or less, allowing changes of state to be easily identified.
  • Multiplex PCR CMOS Biochip for Detection of Upper Respiratory Pathogens including SARS-CoV-2,
    A. Manickam, K. A. Johnson, R. Singh, N. Wood, E. Ku, A. Cuppoletti, M. McDermott and A. Hassibi, InSilixa, Inc., USA
    A 1024-pixel CMOS biochip for multiplex polymerase chain reaction application is presented. Biosensing pixels include 137dB DDR photosensors and an integrated emission filter with OD ~ 6 to perform real-time fluorescence-based measurements while thermocycling the reaction chamber with heating and cooling rates of > ±10°C/s. The surface of the CMOS IC is biofunctionalized with DNA capturing probes. The biochip is integrated into a fluidic consumable enabling loading of extracted nucleic acid samples and the detection of upper respiratory pathogens, including SARS-CoV-2.
The Symposia also offers a short course "Image Sensor Technologies for Computer Vision Systems to Realize Smart Sensing" by A. Nose, Sony Semiconductor Solutions Corp.

Monday, May 03, 2021

Ex-ON Semi Belgium Group Joins Omnivision

Recently, ON Semi laid off 18 engineers in its Belgium design center. The group, led by Tomas Geurts and Tom Gyselinck, has now joined Omnivision:

ON Semi Reports Q1 Results

ON Semi reports higher YoY image sensor sales:

SPAD vs APD in ToF Applications

MDPI publishes a paper "Analytical Evaluation of Signal-to-Noise Ratios for Avalanche- and Single-Photon Avalanche Diodes" by Andre Buchner, Stefan Hadrath, Roman Burkard, Florian M. Kolb, Jennifer Ruskowski, Manuel Ligges, and Anton Grabmaier from Fraunhofer Institute for Microelectronic Circuits and Systems, OSRAM, and University of Duisburg-Essen, Germany.

"Designers of optical systems for ranging applications can choose from a variety of highly sensitive photodetectors, of which the two most prominent ones are linear mode avalanche photodiodes (LM-APDs or APDs) and Geiger-mode APDs or single-photon avalanche diodes (SPADs). Both achieve high responsivity and fast optical response, while maintaining low noise characteristics, which is crucial in low-light applications such as fluorescence lifetime measurements or high intensity measurements, for example, Light Detection and Ranging (LiDAR), in outdoor scenarios. The signal-to-noise ratio (SNR) of detectors is used as an analytical, scenario-dependent tool to simplify detector choice for optical system designers depending on technologically achievable photodiode parameters. In this article, analytical methods are used to obtain a universal SNR comparison of APDs and SPADs for the first time. Different signal and ambient light power levels are evaluated. The low noise characteristic of a typical SPAD leads to high SNR in scenarios with overall low signal power, but high background illumination can saturate the detector. LM-APDs achieve higher SNR in systems with higher signal and noise power but compromise signals with low power because of the noise characteristic of the diode and its readout electronics. Besides pure differentiation of signal levels without time information, ranging performance in LiDAR with time-dependent signals is discussed for a reference distance of 100 m. This evaluation should support LiDAR system designers in choosing a matching photodiode and allows for further discussion regarding future technological development and multi pixel detector designs in a common framework."

Sunday, May 02, 2021

The Rise of Driver Monitoring Camera Market

EETimes writes "Anyone who believes vision-based driver monitoring systems (DMS) are unnecessary or obsolete has not been paying attention to the recent market developments covered in EE Times."



BusinessWire: Prophesee and Xperi announce a neuromorphic DMS, powered by Prophesee Metavision Event-Based Vision sensor.