Wednesday, December 18, 2019

Image Sensors at EI 2020

The Electronic Imaging Conference, to be held on Jan. 27-30 in Burlingame, CA, has unveiled its agenda, which includes quite a few image sensor papers:

3D-IC smart image sensors
Laurent Millet, Stephane Chevobbe
CEA/LETI, CEA/LIST, France
This presentation will introduce 3D-IC technologies applied to imaging and give some examples of 3D-IC or stacked sensors and their 3D partitioning topologies. A focus will be placed on our stacked vision chip, which embeds flexible high-speed, low-latency pre-processing such as fast event detection, edge detection, and convolution computation. The perspectives will show how this technology can pave the way for new sensor architectures and applications.

Indirect time-of-flight CMOS image sensor using 4-tap charge-modulation pixels and range-shifting multi-zone technique
Kamel Mars, Keita Kondo, Michihiro Inoue, Shohei Daikoku, Masashi Hakamata, Keita Yasutomi, Keiichiro Kagawa, Sung-Wook Jun, Yoshiyuki Mineyama, Satoshi Aoyama, Shoji Kawahito
Shizuoka University, Tokyo Institute of Technology, Brookman Technology, Japan

This paper presents an indirect TOF image sensor using short-pulse-modulation-based 4-tap one-drain pixels and fast sub-frame readout for range-shifted capturing of multiple pulse time windows. The measurement uses a short pulse modulation technique combined with multiple short sub-frames, where the number of accumulations for each sub-frame is carefully selected for the near and far zones in order to avoid sensor saturation due to strong laser power or strong ambient light. The current setup uses two sub-frames, where the gate opening sequence is set to G1G2G3G4 and the gate pulse width is set to 10ns. The proposed timing sequence allows three time windows in each sub-frame. By combining the last gate of the first sub-frame and the first gate of the second sub-frame, an extra time window is obtained, making seven measurable time windows in total. The two sub-frames are combined offline by an automated calculation algorithm, allowing automated and smooth measurement of the two zones simultaneously. A TOF image and a range of 10.5m have been successfully measured using the 2 sub-frames and 7 time windows, where the light pulse width is also set to 10ns, giving a 1.5m measurement range for each window. A depth resolution of 1 percent was achieved at a 10m range.
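To make the window arithmetic concrete, here is a minimal Python sketch using only the numbers quoted in the abstract (10ns gate and light pulse width, seven windows); the speed-of-light constant and the function name are the only additions:

    C = 3.0e8  # speed of light, m/s

    def window_range(pulse_width_s):
        # A single time window of width T spans c * T / 2 in range.
        return C * pulse_width_s / 2.0

    T = 10e-9                      # 10 ns gate / light pulse width
    per_window = window_range(T)   # 1.5 m per window
    n_windows = 7                  # 3 per sub-frame x 2 sub-frames + 1 combined
    print(per_window)              # 1.5
    print(n_windows * per_window)  # 10.5 m total, matching the reported range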

A short-pulse based time-of-flight image sensor using 4-tap charge-modulation pixels with accelerated carrier response
Michihiro Inoue, Shohei Daikoku, Keita Kondo, Akihito Komazawa, Keita Yasutomi, Keiichiro Kagawa, Shoji Kawahito
Shizuoka University, Japan

Most reported CMOS indirect TOF range imagers are designed for CW (continuous wave) modulation, and their range resolutions have been greatly improved by using modulation frequencies of over 100MHz. On the other hand, for extending the applications of indirect TOF image sensors to outdoor and high-ambient-light environments, a short-pulse-based TOF image sensor with multi-tap charge-modulation pixels is a good candidate. The TOF sensor to be presented shows that a pixel with three n-type doping layers and substrate biasing has a sufficient gating response to a light pulse width of 4ns, with a linearity of 3%.
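As background, a common way to turn multi-tap charges into depth in short-pulse TOF is to split the echo between two consecutive gates; the sketch below shows that generic estimate in Python (the names and the ambient-subtraction scheme are assumptions, not necessarily this paper's exact pipeline):

    C = 3.0e8  # speed of light, m/s

    def pulsed_tof_depth(q1, q2, q_ambient, t_pulse, t_offset=0.0):
        # q1, q2: charges in two consecutive gates straddling the echo.
        # q_ambient: charge from a gate that sees only background light.
        a = q1 - q_ambient
        b = q2 - q_ambient
        delay = t_offset + t_pulse * b / (a + b)  # echo arrival time
        return C * delay / 2.0

    # Echo split 25/75 between 4 ns gates, no ambient light:
    print(pulsed_tof_depth(25.0, 75.0, 0.0, 4e-9))  # ~0.45 m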

A high-linearity time-of-flight image sensor using a time-domain feedback technique
Juyeong Kim, Keita Yasutomi, Keiichiro Kagawa, Shoji Kawahito
Shizuoka University, Japan

In this paper, we propose a time-domain feedback technique for a Time-of-Flight (ToF) image sensor. Time-domain feedback has the advantages of easy time-to-digital conversion and effective suppression of linearity error. The technique has been implemented with 2-tap lock-in pixels and 5b digitally-controlled delay lines (DCDLs). The prototype ToF sensor is fabricated in a 0.11μm (1P4M) CIS process. The lock-in pixels, with a size of 16.8×16.8μm2, are driven by a 7ns pulse signal from the 5b DCDLs. The light pulse delay is swept to measure the performance. The full range is set to 0 to 105cm, digitized with 11b over the full scale in 22ms. Our sensor attains a linearity error of less than 0.3%, and a range resolution of 2.67mm (peak) and 0.27mm (mean) has been achieved without any calibration techniques.
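The idea of time-domain feedback can be illustrated with a toy loop: step a 5b delay code until the two lock-in taps collect equal charge, at which point the code itself is a digital range reading. The model below is a deliberately simplified sketch; the echo model and the up/down control law are assumptions, not the authors' circuit:

    T_PULSE = 7e-9               # 7 ns gate pulse, as in the abstract
    N_BITS = 5
    LSB = T_PULSE / 2 ** N_BITS  # one DCDL step

    def tap_charges(echo_delay, boundary, w=T_PULSE):
        # Toy model: a w-wide echo starting at echo_delay is split between
        # tap1 (light before the boundary) and tap2 (light after it).
        q1 = min(max(boundary - echo_delay, 0.0), w)
        return q1, w - q1

    def converge(echo_delay, code=0):
        for _ in range(2 ** N_BITS):
            q1, q2 = tap_charges(echo_delay, code * LSB)
            if abs(q1 - q2) < LSB:        # taps balanced: loop is locked
                return code
            code += 1 if q2 > q1 else -1  # feedback via the delay line
            code = max(0, min(2 ** N_BITS - 1, code))
        return code

    print(converge(echo_delay=3.0e-9))  # converged code tracking a 3 ns echo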

A 4-tap global shutter pixel with enhanced IR sensitivity for VGA time-of-flight CMOS image sensors
Taesub Jung, Yonghun Kwon, Sungyoung Seo, Min-Sun Keel, Changkeun Lee, Sung-Ho Choi, Sae-Young Kim, Sunghyuck Cho, Youngchan Kim, Young-Gu Jin, Moosup Lim, Hyunsurk Ryu, Yitae Kim, Joonseok Kim, Chang-Rok Moon
Samsung Electronics, Korea

An indirect time-of-flight (ToF) CMOS image sensor has been designed with a 4-tap 7µm global shutter pixel in a back-side illumination process. A high full-well capacity (FWC) of 15000 e- per tap of 3.5µm pitch and a read noise of 3.6 e- have been realized by employing a true correlated double sampling (CDS) structure with storage gates (SGs). Novel characteristics such as a demodulation contrast (DC) of 86% at 100MHz operation, a 37% higher quantum efficiency (QE), and lower parasitic light sensitivity (PLS) at 940nm have been achieved. As a result, the proposed ToF sensor shows depth noise of less than 0.3% with a 940nm illuminator, even at long distance.
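For context, the textbook 4-phase demodulation used with 4-tap CW pixels looks like the following Python sketch; this is the generic algorithm and phase convention, not necessarily Samsung's pipeline:

    import math

    C = 3.0e8
    F_MOD = 100e6  # 100 MHz modulation, as in the abstract

    def cw_depth(q0, q90, q180, q270):
        # Phase of the returned wave, from the four tap charges.
        phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
        return C * phase / (4 * math.pi * F_MOD)

    print(C / (2 * F_MOD))              # 1.5 m unambiguous range at 100 MHz
    print(cw_depth(100, 150, 100, 50))  # 90-degree phase shift -> 0.375 m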

An over 120dB dynamic range linear response single exposure CMOS image sensor with two-stage lateral overflow integration trench capacitors
Yasuyuki Fujihara, Maasa Murata, Shota Nakayama, Rihito Kuroda, Shigetoshi Sugawa
Tohoku University, Japan

This paper presents a prototype linear-response single-exposure CMOS image sensor with two-stage lateral overflow integration trench capacitors (LOFITreCs) exhibiting over 120dB dynamic range with 11.4Me- full well capacity and a maximum signal-to-noise ratio (SNR) of 70dB. The measured SNR at all switching points was over 35dB thanks to the proposed two-stage LOFITreCs.
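The reported figures are self-consistent under the usual definitions; a quick check in Python, assuming DR = 20*log10(FWC / noise floor) and a shot-noise-limited peak SNR of sqrt(FWC):

    import math

    fwc = 11.4e6                   # full-well capacity, electrons
    print(10 * math.log10(fwc))    # ~70.6 dB, matching the 70 dB max SNR
    print(fwc / 10 ** (120 / 20))  # ~11.4 e- noise floor implied by 120 dB DR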

Deep image demosaicing for submicron image sensors (JIST-first)
Irina Kim, Seongwook Song, SoonKeun Chang, SukHwan Lim, Kai Guo
Samsung Electronics, Korea

The latest trend in image sensor technology, allowing submicron pixel sizes for high-end mobile devices, comes with very high image resolutions and an irregularly sampled Quad Bayer Color Filter Array (CFA). Sustaining image quality becomes a challenge for the Image Signal Processor (ISP), namely for demosaicing. Inspired by the success of deep learning approaches to standard Bayer demosaicing, we investigate how the artifact-prone Quad Bayer array can benefit from them. We found that deeper networks are capable of improving image quality and reducing artifacts; however, deeper networks can hardly be deployed on mobile devices given very high image resolutions: 24MP, 36MP, 48MP. In this paper, we propose an efficient end-to-end solution to bridge this gap: a Duplex Pyramid Network (DPN). Its deep hierarchical structure, residual learning, and linear growth of feature-map depth allow a very large receptive field, yielding better detail restoration and artifact reduction while staying computationally efficient. Experiments show that the proposed network outperforms the state of the art for both Bayer and Quad Bayer demosaicing. For the challenging Quad Bayer CFA, it reduces visual artifacts better than other deep networks, including artifacts present in a conventional commercial solution. While superior in image quality, it is 2x-25x faster than state-of-the-art deep neural networks and therefore feasible for deployment on mobile devices, paving the way for a new era of on-device deep ISPs.
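For readers unfamiliar with the CFA in question, the Quad Bayer pattern duplicates each Bayer color site into a 2x2 block of same-color pixels; the short sketch below builds such a mask (the RGGB layout is one common convention, and vendors differ):

    import numpy as np

    def quad_bayer_mask(h, w):
        # Channel indices: 0 = R, 1 = G, 2 = B.
        bayer = np.array([[0, 1],
                          [1, 2]])  # classic RGGB Bayer unit
        quad = np.kron(bayer, np.ones((2, 2), dtype=int))  # 4x4 Quad Bayer unit
        reps = (-(-h // 4), -(-w // 4))  # ceiling division
        return np.tile(quad, reps)[:h, :w]

    print(quad_bayer_mask(4, 4))  # one 4x4 Quad Bayer tile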

Imaging in the autonomous vehicle revolution
Gary Hicok
NVIDIA, USA

Imaging capabilities for AVs have been improving rapidly, to the point that cameras have become the cornerstone AV sensors. Much like the human brain processes visual data taken in by the eyes, AVs must be able to make sense of a constant flow of information, which requires high-performance computing to respond to the flow of sensor data. This presentation will delve into how these developments in imaging are being used to train, test, and operate safe autonomous vehicles. Attendees will walk away with a better understanding of how deep learning, sensor fusion, surround vision, and accelerated computing are enabling this deployment.

Single-shot multi-frequency pulse-TOF depth imaging with sub-clock shifting for multi-path interference separation
Tomoya Kokado, Yu Feng, Masaya Horio, Keita Yasutomi, Shoji Kawahito, Takashi Komuro, Hajime Nagahara, Keiichiro Kagawa
Shizuoka University, Saitama University, Osaka University, Japan

Short-pulse-based time-of-flight (TOF) depth imaging using a multi-tap macro-pixel computational ultra-fast CMOS image sensor with temporally coded shutters is demonstrated. To separate multi-path components, shorten the minimal separation between adjacent pulses in a single shot, and overcome the range-resolution tradeoff, the application of multi-frequency coded shutters and sub-clock shifting is proposed. The computational CMOS image sensor incorporates an array of macro-pixels, each of which is composed of four sub-pixels. The sub-pixels are implemented with four-tap lateral electric field charge modulators (LEFMs) with dedicated charge-draining gates. For each macro-pixel, 16 different temporal binary shutters are applied to acquire a mosaic image of cross-correlations between the incident temporal optical signal and the temporal shutters. The effectiveness of the proposed method was verified experimentally with the computational CMOS image sensor. The clock frequency for the shutter generator was 73MHz, and a 520nm sub-ns pulse laser was used. A two-component multi-path optical signal, created by a transparent acrylic plate and a mirror placed 8.2m apart, and a change in time of flight as short as half the minimal time window were successfully distinguished.
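The underlying measurement model is linear: each of the 16 coded shutters integrates one cross-correlation sample of the incident waveform, so recovery amounts to inverting y = S x. Below is a hedged sketch with random placeholder codes and sizes (the paper's actual codes and solver are not given in the abstract):

    import numpy as np

    rng = np.random.default_rng(0)
    n_shutters, n_bins = 16, 12  # placeholder sizes, not the paper's
    S = rng.integers(0, 2, size=(n_shutters, n_bins)).astype(float)

    x = np.zeros(n_bins)
    x[3], x[9] = 1.0, 0.6  # two multi-path returns in different time bins

    y = S @ x  # what the 16 sub-pixel taps integrate
    x_hat, *_ = np.linalg.lstsq(S, y, rcond=None)
    print(np.round(x_hat, 3))  # should recover the returns at bins 3 and 9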

Improving the disparity for depth extraction by decreasing the pixel height in monochrome CMOS image sensor with offset pixel apertures
Jimin Lee, Sang-Hwan Kim, Hyeunwoo Kwen, Seunghyuk Chang, JongHo Park, Sang-Jin Lee, Jang-Kyoo Shin
Kyungpook National University, Korea Advanced Institute of Science and Technology, Korea

This paper introduces the disparity improvement obtained by decreasing the pixel height in a monochrome CMOS image sensor (CIS) with offset pixel apertures (OPAs) for depth extraction. A 3D image is a stereoscopic image created by adding depth information to a planar two-dimensional image. In the monochrome CIS with OPAs described in this paper, the disparity is an important factor for obtaining depth information. As the pixel height decreases, the incident angle of light transferred from the microlens to the metal pattern opening increases. Therefore, the light response angles of the left-OPA (LOPA) and right-OPA (ROPA) pixels increase, and thus the disparity improves. In this work, a silicon-region-etching (SRE) process is applied to the proposed monochrome CIS with OPAs, lowering the overall height of the pixel. A monochrome CIS with OPAs is used for the experiment, and a chief-ray-angle (CRA) experiment is performed to measure the change in disparity as a function of pixel height. The proposed monochrome CIS with OPAs was designed and manufactured using a 0.11-μm CIS process. The improved disparity due to the decreased pixel height has been experimentally verified.
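The geometric intuition fits in a few lines: the angle subtended by the aperture offset at the metal opening grows as the stack height shrinks. The dimensions below are illustrative, not the paper's:

    import math

    def response_angle_deg(aperture_offset_um, stack_height_um):
        # Angle from the microlens down to the offset opening edge.
        return math.degrees(math.atan2(aperture_offset_um, stack_height_um))

    for h_um in (4.0, 3.0, 2.0):  # shrinking pixel stack height, um
        print(h_um, response_angle_deg(0.5, h_um))  # angle grows as height drops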

Planar microlenses for near infrared CMOS image sensors
Lucie Dilhan, Jérôme Vaillant, Alain Ostrovsky, Lilian Masarotto, Céline Pichard, Romain Paquet
University Grenoble Alpes, CEA, STMicroelectronics, France

In this paper we present planar microlenses designed to improve the sensitivity of SPAD pixels. We designed diffractive and metasurface planar microlens structures based on rigorous optical simulations, then implemented the diffractive microlens on a SPAD design available on STMicroelectronics 40nm CMOS test chips (a 32 x 32 SPAD array) and compared it with the reference melted-microlens process. We characterized the circuits and demonstrated optical gain from our designed microlenses.
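As a rough idea of what a binary diffractive microlens involves, the classic Fresnel zone plate has zone radii r_n = sqrt(n*lambda*f + (n*lambda/2)^2); the sketch below evaluates it for NIR light. The paper's designs come from rigorous simulation, and these numbers are purely illustrative:

    import math

    def zone_radius_um(n, wavelength_um, focal_um):
        # Radius of the n-th Fresnel zone boundary.
        return math.sqrt(n * wavelength_um * focal_um
                         + (n * wavelength_um / 2) ** 2)

    for n in range(1, 5):  # first four zone boundaries
        print(n, zone_radius_um(n, 0.94, 10.0))  # 940 nm, 10 um focal length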

Event threshold modulation in dynamic vision spiking imagers for data throughput reduction
Luis Cubero, Arnaud Peizerat, Dominique Morche, Gilles Sicard
LETI, CEA, University Grenoble Alpes, France

Dynamic vision sensors are growing in popularity for computer vision and moving scenes: their output is a stream of events reflecting temporal lighting changes, instead of absolute values. One of their advantages is fast detection of events, as they are read asynchronously as spikes. However, a high event data throughput implies an increasing workload for the readout, which can lead to data loss or to prohibitively large power consumption for constrained devices. This work presents a technique to reduce that event data throughput at the cost of very compact additional circuitry at the pixel level: fewer events are generated while preserving most of the information. Our simulated example shows data throughput reduced to 14% in the case of the most aggressive version of our approach.
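A standard DVS pixel model makes the threshold/throughput tradeoff easy to see: an event fires whenever log intensity moves one threshold step from the last event level, so raising the threshold thins the stream. The toy simulation below illustrates this; it is not the authors' circuit, nor does it reproduce their 14% figure:

    import math

    def events(log_trace, threshold):
        # Emit (index, polarity) per threshold crossing of log intensity.
        out, ref = [], log_trace[0]
        for i, v in enumerate(log_trace):
            while v - ref >= threshold:
                ref += threshold
                out.append((i, +1))
            while ref - v >= threshold:
                ref -= threshold
                out.append((i, -1))
        return out

    trace = [math.log(100 + 80 * math.sin(0.1 * t)) for t in range(200)]
    for th in (0.05, 0.15):  # a higher threshold cuts the event count
        print(th, len(events(trace, th)))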
