Thursday, May 06, 2021

Samsung, UCSD, and University of Southern Mississippi Develop SWIR to Visible Image Converter

Phys.org, Newswise, UCSD: Advanced Functional Materials paper "Organic Upconversion Imager with Dual Electronic and Optical Readouts for Shortwave Infrared Light Detection" by Ning Li, Naresh Eedugurala, Dong-Seok Leem, Jason D. Azoulay, and Tse Nga Ng from Samsung Advanced Institute of Technology, UCSD, and University of Southern Mississippi presents a flat SWIR to visible converting device:

"...an organic upconversion imager that is efficient in both optical and electronic readouts, extending the capability of human and machine vision to 1400 nm, is designed and demonstrated. The imager structure incorporates interfacial layers to suppress non‐radiative recombination and provide enhanced optical upconversion efficiency and electronic detectivity. The photoresponse is comparable to state‐of‐the‐art organic infrared photodiodes exhibiting a high external quantum efficiency of ≤35% at a low bias of ≤3 V and 3 dB bandwidth of 10 kHz. The large active area of 2 cm2 enables demonstrations such as object inspection, imaging through smog, and concurrent recording of blood vessel location and blood flow pulses. These examples showcase the potential of the authors’ dual‐readout imager to directly upconvert infrared light for human visual perception and simultaneously yield electronic signals for automated monitoring applications."


Graphene to Revolutionize Automotive Imaging?

AZOsensors: The AUTOVISION Spearhead Project from the Europe-based Graphene Flagship consortium is currently creating a new graphene-based, high-resolution image sensor. The new sensor can detect a broad light spectrum from UV to SWIR.

In 2020, member organizations under the AUTOVISION umbrella announced a technique for the growth and transfer of wafer-scale graphene that uses standard semiconductor equipment. Project members collaborated to outline a suite of camera tests designed to make the AUTOVISION sensor competitive with cutting-edge visible cameras, SWIR cameras, and LiDAR systems.

The AUTOVISION project, led by Qurv in Barcelona with industrial partners such as Aixtron in the UK and Veoneer in Sweden, aims to help make the safe deployment of autonomous vehicles possible. Over the course of three years, the project will produce CMOS graphene quantum dot image sensors in prototype sensor systems, ready for uptake in the automotive sector. Across the duration of the project, the image sensor is set to take huge leaps in sensitivity, operation speed, and pixel size.

Omnivision Announces Automotive HDR ISP

Omnivision's new OAX4000 is a companion ISP for the company's HDR sensors, providing a complete multicamera viewing application solution with fully processed YUV output. It can process up to four camera modules with 140 dB HDR, along with industry-leading LED flicker mitigation (LFM) performance and a high 8MP resolution. It supports multiple CFA patterns, including Bayer, RCCB, RGB-IR, and RYYCy. Additionally, the OAX4000 offers more than 30% power savings over the previous generation.


Wednesday, May 05, 2021

More about the First Event-Driven Sensor in Space

The University of Zurich publishes more info about the DAVIS240, the first event-driven sensor in space. The pair of DAVIS240 sensors launched into space was included in the custom payload of UNSW Canberra Space's M2 CubeSat satellite, which flew on Rocket Lab's 'They Go Up So Fast' mission from New Zealand on March 23, 2021. The article includes a very nice high-resolution picture of the sensor's layout:

CVPR 2021 Workshop on Event-based Vision Papers On-Line

The Computer Vision and Pattern Recognition (CVPR) Workshop on Event-based Vision, to be held on June 19, 2021, has already published its papers with open online access:

  • v2e: From Video Frames to Realistic DVS Events, and Suppl mat
  • Differentiable Event Stream Simulator for Non-Rigid 3D Tracking, and Suppl mat
  • Comparing Representations in Tracking for Event Camera-based SLAM
  • Image Reconstruction from Neuromorphic Event Cameras using Laplacian-Prediction and Poisson Integration with Spiking and Artificial Neural Networks
  • Detecting Stable Keypoints from Events through Image Gradient Prediction
  • EFI-Net: Video Frame Interpolation from Fusion of Events and Frames, and Suppl. mat
  • DVS-OUTLAB: A Neuromorphic Event-Based Long Time Monitoring Dataset for Real-World Outdoor Scenarios
  • N-ROD: a Neuromorphic Dataset for Synthetic-to-Real Domain Adaptation
  • Lifting Monocular Events to 3D Human Poses
  • A Cortically-inspired Architecture for Event-based Visual Motion Processing: From Design Principles to Real-world Applications
  • Spike timing-based unsupervised learning of orientation, disparity, and motion representations in a spiking neural network, and Suppl mat
  • Feedback control of event cameras
  • How to Calibrate Your Event Camera
  • Live Demonstration: Incremental Motion Estimation for Event-based Cameras by Dispersion Minimisation
Thanks to TD for the link!

Vivo's Imaging R&D Team is 700 People Strong

Baidu digital creator Lao Hu publishes an article about the camera development team and investments of one of the largest smartphone brands, Vivo:

"...in terms of imaging, Li Zhuo, director of vivo imaging products, revealed last year that the research and development investment in vivo imaging exceeded 20 billion yuan two years ago. In addition, vivo has established global imaging research and development centers in San Diego, Japan, Tokyo, Hangzhou, Xi'an and other places, with a team of more than 700 research and development personnel.

According to media reports, vivo's imaging R&D centers have a clear division of labor. The imaging team in San Diego, USA mainly focuses on the platform ISP level; the imaging R&D team in Japan mainly focuses on the customization and optimization of optics, image sensors, lens modules, etc.; the imaging team in Hangzhou focuses on imaging algorithms; and the imaging team in Xi'an is mainly responsible for debugging and development in the mobile imaging field and for pre-research of some algorithms.

Vivo not only insists on independent innovation, but also cooperates with powerful third parties. For example, last year Vivo reached a global strategic partnership with Zeiss, the century-old German optics house, and the two parties jointly established the Vivo Zeiss Joint Imaging Laboratory, organizing their respective technology experts to apply their complementary strengths in optics and algorithms to joint research and development. The partnership is committed to solving a series of technical bottlenecks in mobile imaging, leading the continuous innovation of imaging technology, and bringing the world's top mobile phone shooting experience to consumers around the world."

Memristor Image Sensor with In-Memory Computing

The 2021 IEEE International Symposium on Circuits and Systems (ISCAS), to be held online on May 22-28, 2021, publishes all its proceedings in open access. One of the papers presents an image sensor with integrated non-volatile memory: "A new 1P1R Image Sensor with In-Memory Computing Properties based on Silicon Nitride Devices" by Nikolaos Vasileiadis, Vasileios Ntinas, Iosif-Angelos Fyrigos, Rafailia-Eleni Karamani, Vassilios Ioannou-Sougleridis, Pascal Normand, Ioannis Karafyllidis, Georgios Ch. Sirakoulis, and Panagiotis Dimitrakis from the National Center for Scientific Research “Demokritos” and Democritus University of Thrace, Greece.

"Research progress in edge computing hardware, capable of demanding in-the-field processing tasks with simultaneous memory and low power properties, is leading the way towards a revolution in IoT hardware technology. Resistive random access memories (RRAM) are promising candidates for replacing current non-volatile memories and realize storage class memories, but also due to their memristive nature they are the perfect candidates for in-memory computing architectures. In this context, a CMOS compatible silicon nitride (SiN) device with memristive properties is presented accompanied by a data-fitted model extracted through analysis of measured resistance switching dynamics. Additionally, a new phototransistor-based image sensor architecture with integrated SiN memristor (1P1R) was presented. The in-memory computing capabilities of the 1P1R device were evaluated through SPICE-level circuit simulation with the previous presented device model. Finally, the fabrication aspects of the sensor are discussed."

SPAC Reduces AEye Valuation Ahead of Closing

Reuters: AEye and the SPAC CF Finance Acquisition Corp. III have amended their merger agreement, valuing the LiDAR maker AEye at $1.52B and citing valuation changes of publicly traded LiDAR companies.

In the initial announcement in February, AEye was valued at $2B. The companies attributed the terms of the amended deal to "changing conditions" in the automotive lidar industry.

Meanwhile, AEye publishes an interview with Continental and ex-GM CTOs saying that LiDAR is an absolute necessity for autonomous driving:

Tuesday, May 04, 2021

NIT Presents Nanocrystal SWIR Imager

New Imaging Technologies publishes a whitepaper "Infrared sensing using nanocrystal toward on demand light matter coupling" by Eva Izquierdo, Audrey Chu, Charlie Gréboval, Gregory Vincent, David Darson, Victor Parahyba, Pierre Potet, and Emmanuel Lhuillier from Sorbonne Université, ONERA – The French Aerospace Lab, and NIT.

"Nanocrystals are semiconductor nanoparticles whose optical properties can be tuned from UV up to THz. They are used as sources of green and red light for displays, and also show exhibit promises to design low-cost infrared detectors. Coupling subwavelength optical resonators to nanocrystals film enables to design photodiodes that absorb 80% of incident light from thin (<150 nm) nanocrystal film. It thus becomes possible to overcome the constraints linked to the short diffusion lengths which result from transport by hopping within arrays of nanocrystals enabling a high photoresponse detector operating in the SWIR range."

VLSI Symposia: Why Does Sony Integrate MRAM on Sensor?

VLSI Symposia 2021 will be held in a fully virtual format due to COVID-19. While there are many imaging-related papers in the program, the most intriguing one comes from Sony on non-volatile MRAM integration onto an image sensor:
  • 3D Stacked CIS Compatible 40nm Embedded STT-MRAM for Buffer Memory,
    M. Oka*, Y. Namba*, Y. Sato*, H. Uchida*, T. Doi*, T. Tatsuno*, M. Nakazawa*, A. Tamura*, R. Haga*, M. Kuroda*, M. Hosomi*, K. Suemitsu**, E. Kariyada**, T. Suzuki**, H. Tanigawa**, M. Ueki**, M. Moritoki**, Y. Takegawa**, K. Bessho* and T. Umebayashi*,
    *Sony Semiconductor Solutions Corp. and
    **Sony Semiconductor Manufacturing Corp., Japan
    This paper presents the world's first demonstration of a 40nm embedded STT-MRAM for buffer memory, which is compatible with the 3D stacked CMOS image sensor (CIS) process. We optimized a CoFeB-based perpendicular magnetic tunnel junction (p-MTJ) to suppress the degradation of magnetic properties caused by the 3D stacked wafer process. With improved processes, we achieved high speed write operation below 40 ns under typical operation voltage conditions, endurance up to 1E+10 cycles and 1 s data retention required for a buffer memory. In addition, to broaden the application of embedded MRAM (eMRAM), we proposed a novel fusion technology that integrated embedded non-volatile memory (eNVM) and buffer memory type embedded MRAM in the same chip. We achieved a data retention of 1 s ~ >10 years with a sufficient write margin using the fusion technology.
Why does Sony need MRAM on an image sensor? My seven best guesses are listed below (a toy calibration sketch follows the list):
  1. Column FPN calibration. Possibly, FPN can be reduced by another order of magnitude below its current level.
  2. Dark current calibration. A few dark frames at different temperatures and exposure times can be stored on-chip and subtracted when needed.
  3. Per-pixel FPN calibration in global shutter pixels. For example, storage node leakage can be measured at different temperatures and readout speeds and subtracted later, or charge injection in voltage-domain GS pixels can be calibrated out.
  4. Per-pixel PRNU and color crosstalk calibration, to get a silky-smooth sky in photos.
  5. PDAF pixel variation calibration, so that AF would be more accurate.
  6. Temperature gradients and the respective black level variations across the pixel array can be measured in different operating modes and temperatures and stored on-sensor.
  7. Some kind of per-pixel calibration for ToF or event-driven sensors; maybe storing individual voltages for each APD in an array of APD pixels.
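Most of these guesses share one mechanism: calibration data is measured once per die (and per temperature, exposure, or operating mode), parked in on-chip non-volatile memory, and applied at readout time. Here is a toy sketch of guess #2, dark-frame storage and subtraction; the class, the nearest-neighbor lookup, and all numbers are my illustrative assumptions, not anything Sony has disclosed:

    import numpy as np

    class DarkFrameStore:
        """Toy model of on-sensor NVM holding dark frames captured at a few
        (temperature, exposure) calibration points."""
        def __init__(self):
            self._frames = {}   # (temp_C, exposure_ms) -> stored dark frame

        def store(self, temp_c, exposure_ms, frame):
            self._frames[(temp_c, exposure_ms)] = frame.astype(np.float32)

        def dark_estimate(self, temp_c, exposure_ms):
            # Nearest-neighbor lookup for brevity; real silicon would likely
            # interpolate, since dark current grows steeply with temperature.
            key = min(self._frames, key=lambda k: abs(k[0] - temp_c)
                                                + abs(k[1] - exposure_ms))
            return self._frames[key]

        def correct(self, raw, temp_c, exposure_ms):
            # Subtract the stored dark frame and clamp at zero.
            return np.clip(raw - self.dark_estimate(temp_c, exposure_ms), 0, None)

    # Usage: store two calibration points, then correct a capture at 42 C.
    store = DarkFrameStore()
    store.store(25, 33, np.full((480, 640), 2.0))
    store.store(60, 33, np.full((480, 640), 12.0))
    raw = np.random.poisson(14.0, (480, 640)).astype(np.float32)
    corrected = store.correct(raw, temp_c=42, exposure_ms=33)

The same pattern would cover the FPN, PRNU, PDAF, and black-level guesses; only the stored payload changes.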
Other image sensor-related papers are:
  • (Invited) A CMOS Image Sensor and an AI Accelerator for Realizing Edge-Computing-Based Surveillance Camera Systems,
    F. Morishita, N. Kato, S. Okubo, T. Toi, M. Hiraki, S. Otani, H. Abe, Y. Shinohara and H. Kondo, Renesas Electronics Corp., Japan
    This paper presents a CMOS image sensor and an AI accelerator to realize surveillance camera systems based on edge computing. For CMOS image sensors to be used for surveillance, it is desirable that they are highly sensitive even in low illuminance. We propose a new timing shift ADC used in CMOS image sensors for improving high sensitivity performance. Our proposed ADC improves non-linearity characteristics under low illuminance by 63%. Achieving power-efficient edge computing is a challenge for the systems to be used widely in the surveillance camera market. We demonstrate that our proposed AI accelerator performs inference processing for object recognition with 1 TOPS/W.
  • All-Directional Dual Pixel Auto Focus Technology in CMOS Image Sensors,
    E. S. Shim, Samsung Electronics Co., Ltd., Korea
    We developed a dual pixel with accurate and all-directional auto focus (AF) performance in CMOS image sensor (CIS). The optimized in-pixel deep trench isolation (DTI) provided accurate AF data and good image quality in the entire image area and over whole visible wavelength range. Furthermore, the horizontal-vertical (HV) dual pixel with the slanted in-pixel DTI enabled the acquisition of all-directional AF information by the conventional dual pixel readout method. These technologies were demonstrated in 1.4um dual pixel and will be applied to the further shrunken pixels.
  • Development of Advanced Inter-Color-Filter Grid on Sub-Micron-Pixel CMOS Image Sensor for Mobile Cameras with High Sensitivity and High Resolution,
    J. In-Sung, Y. Lee, H. Y. Park, J. U. Kim, D. Kang, T. Kim, M. Kim, K. Lee, M. Heo, I. Ro, J. Kim, I. Park, S. Kwon, K. Yoon, D. Park, C. Lee, E. Jo, M. Jeon, C. Park, K. R. Byun, C. K. Chang, J. S. Hur, K. Yoon, T. Jeon, J. Lee, J. Park, B. Kim, J. Ahn, H. Kim, C.-R. Moon and H.-S. Kim, Samsung Electronics Co., Ltd., Korea
    Sub-micron pixels have been widely adopted in recent CMOS image sensors to implement high resolution cameras in small form factors, i.e. slim mobile-phones. Even with shrinking pixels, customers demand higher image quality, and the pixel performance must remain comparable to that of the previous generations. Conventionally, to suppress the optical crosstalk between pixels, a metal grid has been used as an isolation structure between adjacent color filters. However, as the pixel size continues to shrink to the sub-micron regime, an optical loss increases because the focal spot size of the pixel's microlens does not downscale accordingly with the decreasing pixel size due to the diffraction limit: the light absorption inevitably occurs in the metal grid. For the first time, we have demonstrated a new lossless, dielectric-only grid scheme. The result shows 29 % increase in sensitivity and +1.2-dB enhancement in Y-SNR when compared to the previous hybrid metal-and-dielectric grid.
  • A 2.6 e-Rms Low-Random-Noise, 116.2 mW Low-Power 2-Mp Global Shutter CMOS Image Sensor with Pixel-Level ADC and In-Pixel Memory,
    M.-W. Seo, M. Chu, H.-Y. Jung, S. Kim, J. Song, J. Lee, S.-Y. Kim, J. Lee, S.-J. Byun, D. Bae, M. Kim, G.-D. Lee, H. Shim, C. Um, C. Kim, I.-G. Baek, D. Kwon, H. Kim, H. Choi, J. Go, J. Ahn, J.-k. Lee, C. Moon, K. Lee and H.-S. Kim, Samsung Electronics Co., Ltd., Korea
    This paper presents a low-random noise of 2.6 e-rms, a low-power of 116.2 mW at video rate, and a high-speed up to 960 fps 2-mega pixels global-shutter type CMOS image sensor (CIS) using an advanced DRAM technology. To achieve a high performance global-shutter CIS, we proposed a novel architecture for the digital pixel sensor which is a remarkable global shutter operation CIS with a pixel-wise ADC and an in-pixel digital memory. Each pixel has two small-pitch Cu-to-Cu interconnectors for the wafer-level stacking, and the pitch of each unit pixel is less than 5 um which is the world's smallest pixel embedding both pixel-level ADC and 22-bit memories.
  • A Photon-Counting 4Mpixel Stacked BSI Quanta Image Sensor with 0.3e- Read Noise and 100dB Single-Exposure Dynamic Range,
    J. Ma, D. Zhang, O. Elgendy and S. Masoodian, Gigajot Technology Inc., USA
    This paper reports a 4Mpixel, 3D-stacked backside illuminated Quanta Image Sensor (QIS) with 2.2um pixels that can operate simultaneously in photon-counting mode with deep sub-electron read noise (0.3e- rms) and linear integration mode with large full-well capacity (30k e-). A single-exposure dynamic range of 100dB is realized with this dual-mode readout under room temperature. This QIS device uses a cluster-parallel readout architecture to achieve up to 120fps frame rate at 550mW power consumption.
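    (A quick dynamic-range sanity check on these numbers appears after this paper list.)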
  • A 5.1ms Low-Latency Face Detection Imager with In-Memory Charge-Domain Computing of Machine-Learning Classifiers,
    H. Song*, S. Oh*, J. Salinas*, S.-Y. Park** and E. Yoon*, *Univ. of Michigan, USA and **Pusan National Univ., Korea
    We present a CMOS imager for low-latency face detection empowered by parallel imaging and computing of machine-learning (ML) classifiers. The energy-efficient parallel operation and multi-scale detection eliminate image capture delay and significantly alleviate backend computational loads. The proposed pixel architecture, composed of dynamic samplers in a global shutter (GS) pixel array, allows for energy-efficient in-memory charge-domain computing of feature extraction and classification. The illumination-invariant detection was realized by using log-Haar features. A prototype 240x240 imager achieved an on-chip face detection latency of 5.1ms with a 97.9% true positive rate and 2% false positive rate at 120fps. Moreover, a dynamic nature of in-memory computing allows an energy efficiency of 419 pJ/pixel for feature extraction and classification, leading to the smallest latency-energy product of 3.66 ms·nJ/pixel with digital backend processing.
  • A CMOS LiDAR Sensor with Pre-Post Weighted-Histogramming for Sunlight Immunity Over 105 klx and SPAD-Based Infinite Interference Canceling,
    S. Hyeongseok, Sungkyunkwan Univ., Korea
    This paper presents a CMOS LiDAR sensor with high background noise (BGN) immunity. The sensor has on-chip pre-post weighted histogramming to detect only time-correlated time-of-flight (TOF) out of BGN from both sunlight and exponentially increased dark noise while enhancing sensitivity through higher excess voltage (Vex) of SPADs. The sensor also employs a SPAD-based random number generator (SRNG) for canceling interference (IF) from an infinite number of LiDARs. The sensor shows 8.08 cm accuracy for the range of 32 m under high BGN (105 klx sunlight and 48.72 kcps dark-count rate with increased Vex).
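    (A toy sketch of the underlying ToF histogramming idea appears after this paper list.)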
  • Advanced Multi-NIR Spectral Image Sensor with Optimized Vision Sensing System and Its Impact on Innovative Applications,
    H. Sumi*, **, H. Takehara**, J. Ohta** and M. Ishikawa*, *The Univ. of Tokyo and **Nara Institute of Science and Technology, Japan
    Innovative applications with multiple near-infrared (multi-NIR) spectral CMOS image sensors (CIS) and camera systems have recently been developed. The multi-NIR filter is an indispensable key technology for practical use of the multi-NIR camera system in consumer cameras. Advanced processing technology for multi-NIR signals has been developed using a Fabry-Perot structure. Three types of NIR wavelength filters are formed as a Bayer pattern with 2 x 2 um2 pixel size on a 5-Mpixel BSI CIS. The thickness differences of the three types of bandpass filters are suppressed to less than 75 nm. To enable applications in surveillance, automobiles, and fundus cameras for health management, signal processing technology has also been developed that processes and mixes each channel of a multi-NIR signal with low-intensity visible light images. This provides good image SNR (signal-to-noise ratio) under low lighting conditions of 0.1 lux or less, allowing changes of state to be easily identified.
  • Multiplex PCR CMOS Biochip for Detection of Upper Respiratory Pathogens including SARS-CoV-2,
    A. Manickam, K. A. Johnson, R. Singh, N. Wood, E. Ku, A. Cuppoletti, M. McDermott and A. Hassibi, InSilixa, Inc., USA
    A 1024-pixel CMOS biochip for multiplex polymerase chain reaction application is presented. Biosensing pixels include 137dB DDR photosensors and an integrated emission filter with OD ~ 6 to perform real-time fluorescence-based measurements while thermocycling the reaction chamber with heating and cooling rates of > ±10°C/s. The surface of the CMOS IC is biofunctionalized with DNA capturing probes. The biochip is integrated into a fluidic consumable enabling loading of extracted nucleic acid samples and the detection of upper respiratory pathogens, including SARS-CoV-2.
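A side note on the Gigajot QIS paper above: its 100dB single-exposure dynamic range follows directly from the ratio of the quoted full-well capacity to the quoted read noise. A one-line check (plain arithmetic on the abstract's numbers, not code from the paper):

    import math

    full_well = 30_000   # e-, linear-mode full-well capacity from the abstract
    read_noise = 0.3     # e- rms, photon-counting-mode read noise
    dr_db = 20 * math.log10(full_well / read_noise)
    print(f"{dr_db:.1f} dB")   # -> 100.0 dB, matching the claimed figure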
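And for background on the Sungkyunkwan LiDAR paper: a direct ToF sensor fires many laser shots, timestamps SPAD detections, and histograms them; uncorrelated background (sunlight, dark counts) spreads evenly across bins, while the true return piles up in one bin. The sketch below is the basic unweighted version of that idea, not the paper's pre-post weighted scheme; the bin width, bin count, and toy data are my assumptions:

    import numpy as np

    C = 299_792_458.0   # speed of light, m/s
    BIN_PS = 250        # histogram bin width in picoseconds (assumed)
    N_BINS = 1024       # gives ~38 m of unambiguous range at this bin width

    def range_from_timestamps(timestamps_ps):
        # Histogram SPAD timestamps accumulated over many laser shots and
        # convert the peak bin's center to a round-trip distance.
        hist, edges = np.histogram(timestamps_ps, bins=N_BINS,
                                   range=(0, N_BINS * BIN_PS))
        peak = int(np.argmax(hist))
        tof_s = (edges[peak] + BIN_PS / 2) * 1e-12
        return C * tof_s / 2

    # Toy data: a 32 m target plus uniform sunlight/dark-count background.
    rng = np.random.default_rng(0)
    true_tof_ps = 2 * 32.0 / C * 1e12
    signal = rng.normal(true_tof_ps, 100, size=200)      # jittered returns
    background = rng.uniform(0, N_BINS * BIN_PS, size=5000)
    print(f"{range_from_timestamps(np.concatenate([signal, background])):.2f} m")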
The Symposia also offer a short course "Image Sensor Technologies for Computer Vision Systems to Realize Smart Sensing" by A. Nose, Sony Semiconductor Solutions Corp.

Monday, May 03, 2021

Ex-ON Semi Belgium Group Joins Omnivision

ON Semi recently laid off 18 engineers in its Belgium design center. The group, led by Tomas Geurts and Tom Gyselinck, has now joined Omnivision: