The SPIE/IS&T Electronic Imaging Conference, to be held on Feb. 12-15, 2015 in San Francisco, has published its Advance Program. The Image Sensors and Imaging Systems and Digital Photography and Mobile Imaging tracks feature many interesting image sensor papers. Just to name a few:
2.2um BSI CMOS image sensor with two layer photo-detector
Authors: Hiroki Sasaki, Toshiba Corp. (Japan)
Abstract:
A Back Side Illumination (BSI) CMOS image sensor with a two-layer (2L) photo detector has been fabricated and evaluated. The BSI test pixel array has a Green pixel (2.2um x 2.2um) and a Magenta pixel (2.2um x 4.4um). The Green pixel has a single-layer (1L) photo detector; the Magenta pixel has a 2L photo detector and a vertical charge transfer (VCT) path for the back-side photo detector. The 2L photo detector and VCT were implemented by high-energy ion implantation from the circuit side. The structure of the 2L photo detector was measured by Scanning Spreading Resistance Microscopy (SSRM). Measured spectral response curves from the 2L photo detector fit well with those estimated from light-absorption theory for silicon detectors. Our measurement results show that the keys to realizing a 2L photo detector in a BSI pixel are: (1) reduction of crosstalk to the vertical charge transfer path from adjacent pixels and (2) control of back-side photo detector thickness variance to reduce color signal variation.
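The light-absorption argument is easy to sketch: in silicon, the fraction of photons absorbed between two depths follows the Beer-Lambert law, so a shallow layer favors blue and a deeper layer favors red. Below is a minimal illustration; the layer depths and absorption coefficients are our own rough assumptions, not the paper's data:

```python
# Rough sketch (not the paper's data): spectral response of a two-layer
# silicon photodetector from light-absorption theory. Depths and absorption
# coefficients below are illustrative assumptions.
import numpy as np

def absorbed_fraction(alpha, d_top, d_bottom):
    """Fraction of incident photons absorbed between depths d_top and
    d_bottom (Beer-Lambert law); alpha is in 1/um, depths in um."""
    return np.exp(-alpha * d_top) - np.exp(-alpha * d_bottom)

# Approximate absorption coefficients of silicon (1/um) at a few wavelengths.
wavelengths_nm = np.array([450, 550, 650])        # blue, green, red
alpha_si = np.array([2.5, 0.7, 0.25])

# Hypothetical layer boundaries; in a BSI stack light enters at depth 0.
shallow = absorbed_fraction(alpha_si, 0.0, 1.0)   # back-side detector, 0-1 um
deep    = absorbed_fraction(alpha_si, 1.0, 3.0)   # circuit-side detector, 1-3 um

for wl, s, d in zip(wavelengths_nm, shallow, deep):
    print(f"{wl} nm: back-side layer {s:.2f}, circuit-side layer {d:.2f}")
```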
Signal conditioning circuits for 3D-integrated burst image sensor with on-chip A/D conversion
Authors: Rémi Bonnard, Fabrice Guellec, Josep Segura Puchades, CEA-LETI (France); Wilfried Uhring, Institut de Physique et Chimie des Matériaux de Strasbourg (France)
Abstract:
Ultra High Speed (UHS) imaging has been at the forefront of imaging technology for some years now. These image sensors are used to shoot high-speed phenomena that require on the order of a hundred images at mega-frames-per-second rates, such as detonics, plasma forming, laser ablation… At such speeds the data read-out is a bottleneck, and CMOS and CCD image sensors store a limited number of frames (a burst) on-chip before a slow read-out. Moreover, in recent years 3D integration has made significant progress in terms of interconnection density. It appears as a key technology for the future of UHS imaging, as it allows highly parallel integration, shorter interconnects and an increased fill factor. In the past we proposed a 3D-integrated burst image sensor with on-chip A/D conversion that goes beyond the state of the art in terms of frames per burst. This sensor is made of 3 stacked layers, respectively performing the signal conditioning, the A/D conversion and the burst storage. We present here different solutions to implement the analogue front-end of the first layer. We describe three circuits for three purposes (high frame rate, power efficiency and sensitivity). All these front-ends perform global shutter acquisition.
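To see why on-chip burst storage is attractive, a quick back-of-the-envelope calculation (with an assumed array size and bit depth, since the abstract gives neither) shows the read-out rate a continuous Mfps stream would require:

```python
# Back-of-the-envelope sketch of the UHS read-out bottleneck. The array size
# and bit depth are illustrative assumptions, not figures from the paper.
pixels       = 400 * 256     # hypothetical array size
frame_rate   = 1e6           # 1 Mfps
bits_per_pix = 12            # hypothetical ADC resolution

data_rate_bps = pixels * frame_rate * bits_per_pix
print(f"Sustained read-out rate: {data_rate_bps / 1e12:.2f} Tbit/s")  # ~1.23 Tbit/s

# Storing a 100-frame burst on-chip instead, then reading out slowly:
burst_bits = pixels * 100 * bits_per_pix
print(f"On-chip burst storage: {burst_bits / 8 / 1e6:.1f} MB")        # ~15.4 MB
```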
A high-sensitivity 2x2 multi-aperture color camera based on selective averaging
Authors: Bo Zhang, Keiichiro Kagawa, Taishi Takasawa, Min-Woong Seo, Keita Yasutomi, Shoji Kawahito, Shizuoka Univ. (Japan)
Abstract:
To demonstrate the low-noise performance of a multi-aperture imaging system using a selective averaging method, an ultra-high-sensitivity multi-aperture color camera with 2×2 apertures is being developed. In low-light conditions, random telegraph signal (RTS) noise and dark current white defects become visible, which greatly degrades image quality. To reduce these kinds of noise as well as to increase the number of incident photons, a multi-aperture imaging system composed of an array of lenses and CMOS image sensors (CIS) is used, together with selective averaging, which minimizes the synthetic sensor noise at every pixel. It is verified by simulation that the effective noise at the peak of the noise histogram is reduced from 1.38 e- to 0.48 e- in a 3×3-aperture system, where RTS noise and dark current white defects are successfully removed. In this work, a prototype based on low-noise color sensors with 1280×1024 pixels fabricated in 0.18um CIS technology is designed. The pixel pitch is 7.1μm×7.1μm. The sensor noise is around 1 e- thanks to folding-integration and cyclic column ADCs, and low-voltage differential signaling (LVDS) is used to improve noise immunity. The synthetic F-number of the prototype is 0.6.
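Our reading of the selective averaging idea, as a minimal sketch: for each pixel, average only the k lowest-noise apertures, with k chosen to minimize the synthetic noise sqrt(sum sigma_i^2)/k, so apertures hit by RTS noise or white defects at that pixel are left out. The array shapes and noise map below are illustrative assumptions:

```python
# Minimal sketch of selective averaging as we read it from the abstract; the
# noise map and sizes are illustrative, not the authors' implementation.
import numpy as np

def selective_average(frames, noise_maps):
    """frames, noise_maps: (N_apertures, H, W). Returns an (H, W) image whose
    per-pixel synthetic noise sqrt(sum sigma_i^2)/k is minimized over the
    number k of lowest-noise apertures averaged."""
    order = np.argsort(noise_maps, axis=0)                # apertures by noise
    sigma = np.take_along_axis(noise_maps, order, axis=0)
    vals  = np.take_along_axis(frames, order, axis=0)
    k = np.arange(1, frames.shape[0] + 1)[:, None, None]
    sigma_eff = np.sqrt(np.cumsum(sigma**2, axis=0)) / k  # noise after averaging k
    best_k = np.argmin(sigma_eff, axis=0) + 1             # per-pixel optimal k
    csum = np.cumsum(vals, axis=0)
    return np.take_along_axis(csum, best_k[None] - 1, axis=0)[0] / best_k

# Toy usage: 4 apertures of 8x8 pixels, one aperture with an RTS-like outlier.
rng = np.random.default_rng(0)
noise = np.full((4, 8, 8), 1.0)
noise[2, 3, 3] = 10.0                       # defective pixel in one aperture
frames = rng.normal(100.0, noise)
print(selective_average(frames, noise).shape)   # (8, 8)
```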
Simulation analysis of a backside illuminated multi-collection gate image sensor
Authors: Vu Truong Son Dao, Takeharu Goji Etoh, Ritsumeikan Univ. (Japan); Edoardo Charbon, Zhang Chao, Technische Univ. Delft (Netherlands); Yoshinari Kamakura, Osaka Univ. (Japan)
Abstract:
We have proposed a new structure, a backside-illuminated multi-collection-gate (BSI MCG) image sensor. The target frame rate is 1 Gfps. Each pixel has a group of collection gates (CG) located around its center. Image signals can be selectively collected by each of the CGs. After collection of a signal charge packet by a CG, the charge packet can be transferred to in-situ storage placed around the CGs while the other CGs collect signals. However, it is difficult to operate multiple CGs at a frame interval of 1ns with driving voltages delivered from a conventional off-chip driver. We proposed a special integrated structure in which a driver chip comprises multiple ring oscillator (RO) drivers, each of which is equipped with an XNOR circuit, and each RO driver is vertically connected by TSV technology to a group of pixels of the MCG BSI image sensor mounted on a different wafer. This approach provides advantages such as: (1) reduction of interconnect resistance and capacitance; (2) perfect electrical isolation between the imaging and driver chips; (3) driving voltages delivered almost evenly to all the pixels. We designed a test chip including an imaging device and an RO driver to evaluate various fundamental characteristics, though at this moment these devices are not stacked. The imaging device consists of: (a) 32x48 pixels driven by a conventional driver; (b) 1x2 pixels driven by a test RO driver on a separate die. Each pixel is a hexagonal MCG BSI pixel that stores 5 consecutive images. The test RO driver with XNOR circuits can drive a 10fF capacitive load. A time-to-digital converter with a temporal resolution of 30ps is included to measure the actual pulse width. A modified 130nm 1P5M CMOS image sensor process is used to fabricate the test chip. We obtained the following simulation results: (1) In each pixel, the mean and standard deviation of the electron travel time were 0.62ns and 0.17ns, respectively. The maximum travel time ranged from 0.6ns to 1.4ns as the generation site moved from near the center to near the edge of a pixel. (2) The RO driver can achieve a pulse width of 1.4ns with a voltage swing of 4.2V and 20% overlapping pulses. The minimum pulse width is reduced to 0.77ns with a decreased voltage swing of 2.6V and 25% overlapping pulses. Therefore, we can confirm that the proposed test chip can achieve the target.
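One plausible reading of the pulse numbers, as a hedged sketch: if consecutive gate pulses overlap by the stated fraction of the pulse width, the effective frame interval is the pulse width times (1 - overlap), which lands near the 1ns target. This interpretation is ours, not the paper's:

```python
# Hedged timing sketch for the overlapping gate pulses. We assume (our
# reading, not stated in the abstract) that the effective frame interval is
# the pulse width minus the overlapped portion.
def effective_frame_interval_ns(pulse_width_ns, overlap_fraction):
    return pulse_width_ns * (1.0 - overlap_fraction)

print(effective_frame_interval_ns(1.4, 0.20))   # 1.12 ns, near the 1 ns target
print(effective_frame_interval_ns(0.77, 0.25))  # ~0.58 ns at the reduced swing
```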
Analysis of pixel gain and linearity of CMOS image sensor using floating capacitor load readout operation
Authors: Shunichi Wakashima, Fumiaki Kusuhara, Rihito Kuroda, Shigetoshi Sugawa, Tohoku Univ. (Japan)
Abstract:
In this paper, we demonstrate that the floating capacitor load readout operation has higher readout gain and a wider linearity range than the conventional pixel readout operation, and we explain why. The pixel signal readout gain is determined by the transconductance, the back-gate transconductance and the output resistance of the in-pixel driver transistor, together with the load resistance. In floating capacitor load readout operation, since there is no current source and the load is the sample/hold capacitor only, the load resistance approaches infinity; the readout gain is therefore larger than in the conventional readout operation. Also, because there is no current source, the voltage drop is smaller than in the conventional readout operation, so the linearity range is enlarged at both the high and low voltage limits. This linearity enlargement becomes more advantageous as the power supply voltage is decreased for lower power consumption. To confirm these effects, we fabricated a prototype chip using a 0.18um 1-Poly 3-Metal CMOS process technology with a pinned PD. As a result, we confirmed that floating capacitor load readout operation increases both readout gain and linearity range.
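The gain argument can be made concrete with the usual source-follower small-signal expression, in which the load resistance appears in parallel with the driver transistor's output resistance. The device values below are hypothetical; only the limiting behavior matters:

```python
# Sketch of the source-follower gain expression implied by the abstract.
# Device values are illustrative assumptions, not from the paper.
def sf_gain(gm, gmb, ro, RL):
    """Small-signal gain of an in-pixel source follower with body effect:
    G = gm*R / (1 + (gm + gmb)*R), where R = ro || RL."""
    R = ro if RL == float("inf") else ro * RL / (ro + RL)
    return gm * R / (1.0 + (gm + gmb) * R)

gm, gmb, ro = 100e-6, 20e-6, 2e6           # hypothetical: 100 uS, 20 uS, 2 Mohm
print(sf_gain(gm, gmb, ro, 1e5))           # ~0.77, conventional current-source load
print(sf_gain(gm, gmb, ro, float("inf")))  # ~0.83, floating-capacitor load
```

With the current source removed, RL drops out of the expression and the gain approaches gm/(gm+gmb) in the limit of large output resistance, which is why the floating capacitor load gives the higher figure.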
A SPAD-based 3D imager with in-pixel TDC for 145ps-accuracy ToF measurement
Authors: Ion Vornicu, Ricardo A. Carmona-Galán, Ángel B. Rodríguez-Vázquez, Instituto de Microelectrónica de Sevilla (Spain)
Abstract:
Single-Photon Avalanche Diodes (SPADs) can be employed to detect the arrival of a reflected pulse of light, thus emerging as a feasible alternative for generating a depth map of the scene. SPADs arranged in a bi-dimensional array can effectively associate an estimate of the time-of-flight (ToF) with each point in the image. In this paper, we deal with the exact time stamping of the detection event. To achieve this, we have incorporated an 11b resolution time-to-digital converter (TDC) into each pixel of a 64 × 64-SPAD array. The sensor chip has been designed in a 0.18μm standard CMOS technology, achieving a pixel pitch of 64μm. The complete sensor array fits in 4.1 × 4.1mm2. Each pixel contains a SPAD, an active quenching and recharge circuit, a ripple counter, a voltage-controlled ring oscillator (VCRO), an encoder and an 11b memory. A minimum time bin of 145ps can be detected, for a power consumption of 9μW per TDC. An on-chip PLL tunes the reference voltage for the array VCROs in order to overcome global drift of process parameters and temperature variations. The measured standard deviation of the TDCs across the array is 1% without applying any pixel-to-pixel calibration.
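For scale, the reported TDC figures translate directly into depth numbers; this is simple arithmetic on the abstract's 145ps bin and 11b range:

```python
# Quick arithmetic linking the reported TDC figures to depth, using only the
# 145 ps bin and 11-bit range quoted in the abstract.
C = 299_792_458.0                 # speed of light, m/s

bin_s   = 145e-12                 # minimum time bin
range_s = (2**11) * bin_s         # full scale of the 11-bit TDC, ~297 ns

depth_resolution_m = C * bin_s / 2.0    # round trip -> divide by 2
max_depth_m        = C * range_s / 2.0
print(f"depth bin ~{depth_resolution_m * 100:.1f} cm, range ~{max_depth_m:.1f} m")
# depth bin ~2.2 cm, range ~44.5 m
```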
An ISO standard for measuring low light performance
Author: Dietmar Wüller, Image Engineering GmbH & Co. KG (Germany)
Abstract:
Measuring the low light performance of today's cameras has become a challenge. The increasing quality of noise reduction algorithms and other steps of the image pipeline makes it necessary to investigate the balance of image quality aspects. The first step in defining a measurement procedure is to capture images under low light conditions using a wide variety of cameras and to review the images as well as their metadata. Image quality parameters need to be identified, and for each parameter a threshold below which the image becomes unacceptable needs to be defined. Although this may later require a real psychophysical study to increase the precision of the thresholds, the current project tries to find out whether each parameter can be viewed as independent or whether multiple parameters need to be grouped to differentiate acceptable images from unacceptable ones. Another important aspect is which camera settings are allowed: for example, what is the longest acceptable exposure time, and how is this affected by image stabilization? Cameras on a tripod may produce excellent images with multi-second exposures. After this analysis, the question is how the light level should be reported: as the illuminance of the scene, the luminance of a certain area in the scene, or the exposure?
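As an aside on that last question: for a Lambertian patch, illuminance and luminance are related in a simple closed form, so a procedure could report either. A small sketch, where the 18% reflectance is just a conventional example value:

```python
# Converting scene illuminance to the luminance of a reference patch,
# assuming a Lambertian reflector; 18% gray is our example, not the standard's.
import math

def luminance_cd_m2(illuminance_lux, reflectance=0.18):
    """Luminance of a Lambertian patch: L = E * rho / pi."""
    return illuminance_lux * reflectance / math.pi

print(luminance_cd_m2(10))   # ~0.57 cd/m2 for an 18% gray patch at 10 lux
```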