Wednesday, November 21, 2018

Image Sensing Content at ISSCC 2019

ISSCC 2019, to be held on February 17-21 in San Francisco, has published its program, which includes a number of image sensor papers. The Image Sensor session starts with a SmartSens presentation, probably making SmartSens the first image sensor company from China to present its work at ISSCC:

A Stacked Global-Shutter CMOS Imager with SC-Type Hybrid-GS Pixel and Self-Knee Point Calibration Single-Frame HDR and On-Chip Binarization Algorithm for Smart Vision Applications
C. Xu, Y. Mo, G. Ren, W. Ma, X. Wang, W. Shi, J. Hou, K. Shao, H. Wang, P. Xiao, Z. Shao, X. Xie, X. Wang, C. Yiu
SmartSens Technology

Energy-Efficient Low-Noise CMOS Image Sensor with Capacitor Array-Assisted Charge-Injection SAR ADC for Motion-Triggered Low-Power IoT Applications
K. D. Choo, L. Xu, Y. Kim, J-H. Seol, X. Wu, D. Sylvester, D. Blaauw
University of Michigan, Ann Arbor, MI

A Data-Compressive 1.5b/2.75b Log-Gradient QVGA Image Sensor with Multi-Scale Readout for Always-On Object Detection
C. Young, A. Omid-Zohoor, P. Lajevardi, B. Murmann
Stanford University, Stanford, CA; Robert Bosch, Sunnyvale, CA

A 76mW 500fps VGA CMOS Image Sensor with Time-Stretched Single-Slope ADCs Achieving 1.95e- Random Noise
I. Park, C. Park, J. Cheon, Y. Chae
Yonsei University, Seoul, Korea
Kumoh National Institute of Technology, Gyeongbuk, Korea

Dual-Tap Pipelined-Code-Memory Coded-Exposure-Pixel CMOS Image Sensor for Multi-Exposure Single-Frame Computational Imaging
N. Sarhangnejad, N. Katic, Z. Xia, M. Wei, N. Gusev, G. Dutta, R. Gulve, H. Haim, M. Moreno Garcia, D. Stoppa, K. N. Kutulakos, R. Genov
University of Toronto, Toronto, Canada; Synopsys, Toronto, Canada; Fondazione Bruno Kessler, Trento, Italy; ams AG, Ruschlikon, Switzerland

A 400×400-Pixel 6μm-Pitch Vertical Avalanche Photodiodes CMOS Image Sensor Based on 150ps-Fast Capacitive Relaxation Quenching in Geiger Mode for Synthesis of Arbitrary Gain Images
Y. Hirose, S. Koyama, T. Okino, A. Inoue, S. Saito, Y. Nose, M. Ishii, S. Yamahira, S. Kasuga, M. Mori, T. Kabe, K. Nakanishi, M. Usuda, A. Odagawa, T. Tanaka
Panasonic, Nagaokakyo, Japan

A 256×256 40nm/90nm CMOS 3D-Stacked 120dB-Dynamic-Range Reconfigurable Time-Resolved SPAD Imager
R. K. Henderson, N. Johnston, S. W. Hutchings, I. Gyongy, T. Al Abbas, N. Dutton, M. Tyler, S. Chan, J. Leach
University of Edinburgh, Edinburgh, United Kingdom; STMicroelectronics, Edinburgh, United Kingdom; Heriot-Watt University, Edinburgh, United Kingdom

A 32×32-Pixel 0.9THz Imager with Pixel-Parallel 12b VCO-Based ADC in 0.18μm CMOS
S. Yokoyama, M. Ikebe, Y. Kanazawa, T. Ikegami, P. Ambalathankandy, S. Hiramatsu, E. Sano, Y. Takida, H. Minamide
Hokkaido University, Sapporo, Japan; RIKEN, Sendai, Japan

A 512-Pixel 3kHz-Frame-Rate Dual-Shank Lensless Filterless Single-Photon-Avalanche-Diode CMOS Neural Imaging Probe
C. Lee, A. J. Taal, J. Choi, K. Kim, K. Tien, L. Moreaux, M. L. Roukes, K. L. Shepard
Columbia University, New York, NY; KIST, Seoul, Korea; California Institute of Technology, Pasadena, CA

The Industry Showcase event includes:
  • ams AG, Premstätten, Austria, Direct Time-of-Flight Module in CMOS 55nm HV for Mobile Applications
  • Ouster, San Francisco, CA, Native camera imaging on LiDAR and deep learning enablement
  • Samsung Electronics, Hwaseong, Korea, Motion Artifact Free Dynamic Vision Sensor for Machine Vision

Automotive Gesture Recognition Market

GlobeNewswire: Global Market Insights forecasts that the automotive gesture recognition market will grow at about a 44% CAGR from 2018 to 2024, driven by the rising trend toward customer comfort and an advanced driving experience. The market is expected to reach $13.6bn by 2024, up from $1bn in 2017:
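As a quick sanity check, the quoted figures are roughly self-consistent. A minimal sketch, assuming straight compound annual growth from the $1bn 2017 base to the $13.6bn 2024 forecast:

```python
# Sanity check of the quoted market figures (assumption: simple compound
# annual growth over the 7 years from 2017 to 2024).
start_bn, end_bn, years = 1.0, 13.6, 7  # $bn in 2017, $bn in 2024

cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~45%, in line with the quoted ~44%
```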

Event-Based Sensor Use Case

Neuro Vision, a spin-off from Zurich University, shows a use case for an event-based camera:

AEye LiDAR Shows 1000m Truck Detection, Raises $40m

TechCrunch, VentureBeat: AEye raises a $40m Series B round led by Taiwania Capital, the investment firm created and backed by Taiwan’s National Development Council, with participation from returning investors Kleiner Perkins, Intel Capital, Airbus Ventures and Tychee Partners.

This brings the LiDAR startup’s total funding to about $61m. In the announcement, founder and CEO Luis Dussan said Taiwania’s investment is a strategic one and will give AEye more access to manufacturing, logistics and tech resources in Asia. AEye also plans to launch a new product at CES in January.

In tests monitored and validated by VSI Labs, a research company that focuses on autonomous-vehicle technology, AEye said that its iDAR sensor, which combines a solid-state lidar and a high-resolution camera in one device, was able to detect and track a moving white truck from one kilometer away. AEye claims this is four to five times the distance at which other current lidar systems can detect.

In a press statement, AEye chief of staff Blair LaCorte said the company believes iDAR can potentially track moving objects, including trucks and drones, from 5km to 10km away.

Tuesday, November 20, 2018

Sony Adds Some Data on its DSLR/ILC Sensors

Sony publishes flyers for 8 new products for DSLR/ILC cameras, spanning from the 150MP medium-format IMX411 to the 20MP 60fps MFT IMX272, including full-frame and APS-C sensors:

IWISS2018 Posters List

The 4th International Workshop on Image Sensors and Imaging Systems (IWISS2018), to be held on Nov. 28-29 in Tokyo, publishes its list of posters:


Monday, November 19, 2018

Dual-Gate Organic Phototransistor for Image Sensing

Nature publishes the paper "Dual-gate organic phototransistor with high-gain and linear photoresponse" by Philip C. Y. Chow, Naoji Matsuhisa, Peter Zalar, Mari Koizumi, Tomoyuki Yokota, and Takao Someya from the Hong Kong University of Science and Technology, Holst Centre (The Netherlands), and the University of Tokyo.

"The conversion of light into electrical signal in a photodetector is a crucial process for a wide range of technological applications. Here we report a new device concept of dual-gate phototransistor that combines the operation of photodiodes and phototransistors to simultaneously enable high-gain and linear photoresponse without requiring external circuitry. In an oppositely biased, dual-gate transistor based on a solution-processed organic heterojunction layer, we find that the presence of both n- and p-type channels enables both photogenerated electrons and holes to efficiently separate and transport in the same semiconducting layer. This operation enables effective control of trap carrier density that leads to linear photoresponse with high photoconductive gain and a significant reduction of electrical noise. As we demonstrate using a large-area, 8 × 8 imaging array of dual-gate phototransistors, this device concept is promising for high-performance and scalable photodetectors with tunable dynamic range."

Human Eye Resolution in Megapixels

Quora publishes an answer to the human eye resolution question written by Michael Bross, former Psychology Professor at Concordia University, Montreal, among 93 other answers. A few interesting quotes:

"...if you look at what is going on in the eye it looks messy; the ‘seeing’ is done by the visual cortex.

Note that the light has to pass through several structures before it gets to the retina: cornea, aqueous humor, lens, vitreous humor (humors are a translucent gel/watery-like medium), blood vessels, and then it has to traverse 4 layers of nerve cells before it gets to the light receptors (rods and cones) at the back of the retina.

So plenty of photons get absorbed before reaching the receptors, add to this that quite a few of them will be bouncing around in the eye ball, and it has been estimated that only around 20–25% of light entering the eye reaches the receptors.

So to put that into pixel estimates (I’m relying here on data from Hendrik Lensch at the Max Planck Institut Informatik), given a 19″ LED viewed at 60 cm, without hyperacuity the visual cortex would process 3,000x3,000 pixels, with hyperacuity 18,000x18,000."
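For reference, the quoted grid estimates translate into familiar megapixel counts:

```python
# Converting the quoted pixel-grid estimates into megapixels.
no_hyperacuity = 3_000 * 3_000       # 9,000,000 pixels
with_hyperacuity = 18_000 * 18_000   # 324,000,000 pixels

print(no_hyperacuity / 1e6, "MP")    # 9.0 MP
print(with_hyperacuity / 1e6, "MP")  # 324.0 MP
```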

Sunday, November 18, 2018

High Photon Throughput SPAD Imager

MDPI Special Issue The International SPAD Sensor Workshop publishes a paper "A CMOS SPAD Imager with Collision Detection and 128 Dynamically Reallocating TDCs for Single-Photon Counting and 3D Time-of-Flight Imaging" by Chao Zhang, Scott Lindner, Ivan Michel Antolovic, Martin Wolf, and Edoardo Charbon from Delft University of Technology, University of Zurich, EPFL, and Kavli Institute of Nanoscience.

"Per-pixel time-to-digital converter (TDC) architectures have been exploited by single-photon avalanche diode (SPAD) sensors to achieve high photon throughput, but at the expense of fill factor, pixel pitch and readout efficiency. In contrast, a TDC sharing architecture usually features high fill factor at small pixel pitch and energy-efficient event-driven readout, while the photon throughput is not necessarily lower than that of per-pixel TDC architectures, since the throughput is decided not only by the TDC number but also by the readout bandwidth. In this paper, a SPAD sensor with 32 × 32 pixels fabricated with a 180 nm CMOS image sensor technology is presented, where dynamically reallocating TDCs were implemented to achieve the same photon throughput as that of per-pixel TDCs. Each 4 TDCs are shared by 32 pixels via a collision detection bus, which enables a fill factor of 28% with a pixel pitch of 28.5 μm. The TDCs were characterized, obtaining the peak-to-peak differential and integral non-linearity of −0.07/+0.08 LSB and −0.38/+0.75 LSB, respectively. The sensor was demonstrated in a scanning light-detection-and-ranging (LiDAR) system equipped with an ultra-low power laser, achieving depth imaging up to 10 m at 6 frames/s with a resolution of 64 × 64 with 50 lux background light."
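The LiDAR demonstration relies on the standard direct time-of-flight conversion, depth = c·t/2 for the photon round trip. A minimal sketch (the numbers below are illustrative, not taken from the paper):

```python
# Direct time-of-flight: depth = c * t / 2, where t is the photon's
# round-trip time as measured by the TDC. Illustrative numbers only.
C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(t_ns: float) -> float:
    """Depth in meters for a round-trip time t given in nanoseconds."""
    return C * t_ns * 1e-9 / 2

# A target at the paper's 10 m maximum range returns in roughly 66.7 ns:
print(f"{tof_depth_m(66.7):.2f} m")
```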

Saturday, November 17, 2018

Funding News: All Money Invested in Automotive Startups

AdaSky, an Israeli developer of a far-infrared (FIR) camera for autonomous vehicles, has secured $20M from a lead investor, Sungwoo Hitech, a Korean automotive supplier. The investment is part of a larger round of funding and will enable the company to expand globally. AdaSky’s solution, Viper, is an all-in-one, complete solution for autonomous vehicles, combining FIR camera technology with fusion-ready, deep-learning computer vision algorithms.

"Viper is the smallest, highest-resolution thermal camera for autonomous vehicles on the market. We strongly believe that AdaSky’s technology will enable 24/7 sight and perception for vehicles and put us all on the path to fully autonomous driving," said Myung-Keun Lee, Chairman & Co-CEO of Sungwoo Hitech.

EETimes: Solid-state LiDAR company Sense Photonics, founded in 2016 and based in North Carolina, raises $14.4m. The company previously raised $2.8m in 2016. Sense Photonics plans to use the money to address the autonomous vehicle, UAV and industrial automation markets.

The company's patent applications reveal a design based on a VCSEL array and an unspecified ToF sensor.

BusinessWire: Korean SOS Lab (Smart Optical Sensors Lab) has raised $6m for its automotive LiDAR. The lead investor in this series A round is Mando, a top-tier automotive supplier.

BusinessWire: In spite of rumors about technological troubles, Quanergy announces its Series C funding at a valuation exceeding $2 billion, with an unnamed global top-tier fund as the lead investor. The Series C financing is said to take the company well beyond its planning horizons to cash-flow and operating breakeven, and keeps the company’s IPO process on track.

"Demand for Quanergy’s solutions continues to be strong, with revenue increasing rapidly and bookings exceeding forecast. Product and software development continues at a brisk pace. Substantial orders for the company’s S3 solid-state sensor were fulfilled this year. Rapid innovation continues to increase the field of view (FoV) and range for the S3 in outdoor environments.

Since the end of 2017, Quanergy has had an annual production capacity of one million solid-state sensors at its fully automated production line in Silicon Valley. The completion of this round of financing will further enhance the company's capital reserve, to accelerate innovation and commercialization of its hardware, software and smart sensing solutions, and construction of ultra-large-scale production facilities."

"With our advanced technology, we have reduced the price of solid-state LiDAR to a few hundred dollars in volume,” said Louay Eldada, CEO of Quanergy. “Our third-generation solid-state LiDAR is being developed to fully integrate the sensor on a single chip. For Quanergy, the most important focus at the moment is to speed up the production ramping and prove our strength with mass-produced products."

PRNewswire: Israeli Guardian Optical Technologies announces an additional investment of $2.5m. The new investment is part of a pre-B round totaling $5.6M that will be used to expand the R&D team to serve the company's expanding customer base, as well as to support customers' projects.

Guardian Optical's sensor empowers car manufacturers to build safer cars, at a lower cost, by eliminating the need to install multiple sensors throughout the car. The patent-pending sensor technology provides real-time information on occupancy status based on three interconnected layers of information: video image recognition (2D), depth mapping (3D), and micro- to macro-motion detection. The sensor detects the location and physical dimensions of each occupant and can identify the difference between a person and an inanimate object.

Friday, November 16, 2018

Sensation Cooperation Project in Europe

SENSATION, a project within the EUREKA PENTA Cluster managed by AENEAS Industry Association, is developing innovative image capture, transmission and processing technologies for high-end Machine Vision and Broadcast applications. The project focuses on key requirements common to all professional vision-based applications namely: higher spatial resolution, higher frame rate, wider colour gamut, higher DR and improved image quality.

Machine vision calls for small pixel, high resolution sensors that can perform high quality inspection at high speeds. In the broadcast market, demand is being driven by the migration from HDTV to UHDTV. The UHDTV standard supports 4K and 8K resolutions, 12 bits per pixel (compared to 10 bits in HDTV), a wider colour gamut and an increased DR.

The SENSATION project brings together key European players in the imaging industry including R&D institutes specialized in image sensor technologies, image sensor designs and video processing; fabless design houses; a semiconductor manufacturer; image compression experts and system integrators. Through this collaboration the partners can strengthen Europe’s ability to compete in global markets for image capture, processing and transmission.

The partners will cooperate on the development of the following:
  • Development of (building blocks for) CMOS image sensors: smaller global shutter pixels, increased dynamic range, increased data rates, auto-focus pixels, improved ADCs, ultra-high-speed architectures and high-speed serial interfaces
  • New solutions for camera transmission
  • Demonstration of results in cameras for Machine Vision and Broadcast, and demonstration of separate image sensor evaluation set-ups
  • Standards for a high-speed serial interface for image sensors, image compression and camera interfaces.

Thanks to AT for the info!

Image Sensor Papers at IEDM 2018

Image sensor papers have a strong appearance in IEDM 2018 Program:

1.5µm dual conversion gain, backside illuminated image sensor using stacked pixel level connections with 13ke- full-well capacitance and 0.8e- noise
V. C. Venezia, A. C-W Hsiung, K. Ai, X. Zhao, Zhiqiang Lin, Duli Mao, Armin Yazdani, Eric A. G. Webster, L. A. Grant, OmniVision Technologies
A 1.5µm pixel size, 8-megapixel, dual conversion gain (DCG), back side illuminated CMOS image sensor (CIS) is described, having a linear full-well capacity (FWC) of 13ke- and total noise of 0.8e- RMS at 8x gain. The sensor adopts the world's smallest 1.5µm pitch, stacked pixel-level connection (SPLC) technology with greater than 8M connections, maximizing the fill-factor of the photodiode and the dimensions of the associated transistors to achieve a large FWC and low noise performance at the same time. In addition, by allocating transistors into two different layers, the DCG function can be realized within the 1.5µm pixel size.

A 0.68e-rms Random-Noise 121dB Dynamic-Range Sub-pixel architecture CMOS Image Sensor with LED Flicker Mitigation
S. Iida, Y. Sakano, T. Asatsuma, M. Takami, I. Yoshiba, N. Ohba, H. Mizuno, T. Oka, K. Yamaguchi, A. Suzuki, K. Suzuki, M. Yamada, M. Takizawa, Y. Tateshita, and K. Ohno, Sony Semiconductor
This is a report of a CMOS image sensor with a sub-pixel architecture having a pixel pitch of 3 um. The aforementioned sensor achieves both ultra-low random noise of 0.68e-rms and high dynamic range of 121 dB in a single exposure, further realizing LED flicker mitigation.

A 24.3Me- Full Well Capacity CMOS Image Sensor with Lateral Overflow Integration Trench Capacitor for High Precision Near Infrared Absorption Imaging
M. Murata, R. Kuroda, Y. Fujihara, Y. Aoyagi, H. Shibata*, T. Shibaguchi*, Y. Kamata*, N. Miura*, N. Kuriyama* and S. Sugawa, Tohoku University, *LAPIS Semiconductor Miyagi Co., Ltd.
This paper presents a 16um pixel pitch CMOS image sensor exhibiting 24.3Me- full well capacity with a record spatial efficiency of 95ke-/um2 and high quantum efficiency in near infrared waveband by the introduction of lateral overflow integration trench capacitor on a very low dopant concentration p-type Si substrate. A diffusion of 5mg/dl concentration glucose was clearly visualized by an over 71dB SNR absorption imaging at 1050nm.

HDR 98dB 3.2µm Charge Domain Global Shutter CMOS Image Sensor (Invited)
A. Tournier, F. Roy, Y. Cazaux*, F. Lalanne, P. Malinge, M. Mcdonald, G. Monnot**, N. Roux**, STMicroelectronics, *CEA-Leti, **STMicroelectronics
We developed a High Dynamic Range (HDR) Global Shutter (GS) pixel for automotive applications working in the charge domain with dual high-density storage node using Capacitive Deep Trench Isolation (CDTI). With a pixel size of 3.2µm, this is the smallest reported GS pixel achieving linear dynamic range of 98dB with a noise floor of 2.8e-. The pinned memory isolated by CDTI can store 2 x 8000e- with dark current lower than 5e-/s at 60°C. A shutter efficiency of 99.97% at 505nm and a Modulation Transfer Function (MTF) at 940nm better than 0.5 at Nyquist frequency is also reported.

High Performance 2.5um Global Shutter Pixel with New Designed Light-Pipe Structure
T. Yokoyama, M. Tsutsui, Y. Nishi, I. Mizuno, V. Dmitry, A. Lahav, TowerJazz
We developed a 2.5um global shutter (GS) CMOS image sensor pixel using an advanced Light-Pipe (LP) structure designed with novel guidelines. To the best of our knowledge, it is the smallest reported GS pixel in the world. The developed pixel shows excellent Quantum Efficiency (QE) and Angular Response (AR), and very low Parasitic Light Sensitivity (PLS). Also, even in an oblique light condition of 10 degrees, the 1/PLS is maintained at about half its value. These key characteristics allow the development of ultra-high resolution sensors, industrial cameras with wide-aperture lenses, and low-form-factor optical modules for GS mobile applications.

Back-Illuminated 2.74 µm-Pixel-Pitch Global Shutter CMOS Image Sensor with Charge-Domain Memory Achieving 10k e- Saturation Signal
Y. Kumagai, R. Yoshita, N. Osawa, H. Ikeda, K. Yamashita, T. Abe, S. Kudo, J. Yamane, T. Idekoba, S. Noudo, Y. Ono, S. Kunitake, M. Sato, N. Sato, T. Enomoto, K. Nakazawa, H. Mori, Y. Tateshita, and K. Ohno, Sony Semiconductor
A 3208×2184 global shutter image sensor with back-illuminated architecture is implemented in a 90 nm/65 nm imaging process. The sensor, having 2.74 µm-pitch pixels, achieves 10000 electrons full-well capacity and -80 dB parasitic light sensitivity. Furthermore, 13.8 e-/s dark current at 60°C and 1.85 e-rms random noise are obtained. In this paper, the structure of a pixel with memory along with saturation enhancement technology is described.

A CMOS Proximity Capacitance Image Sensor with 16µm Pixel Pitch, 0.1aF Detection Accuracy and 60 Frames Per Second
M. Yamamoto, R. Kuroda, M. Suzuki, T. Goto, H. Hamori*, S. Murakami*, T. Yasuda*, and S. Sugawa, Tohoku University, *OHT Inc.
A 16µm pixel pitch 60 frames per second CMOS proximity capacitance image sensor fabricated by a 0.18µm CMOS process technology is presented. By the introduction of noise cancelling operation, both fixed pattern noise and kTC noise are significantly reduced, resulting in the 0.1aF detection accuracy. Proximity capacitance imaging results using the developed sensor are also demonstrated.

Through-silicon-trench in back-side-illuminated CMOS image sensors for the improvement of gate oxide long term performance
A. Vici, F. Russo*, N. Lovisi*, L. Latessa*, A. Marchioni*, A. Casella*, F. Irrera, Sapienza University of Rome, *LFoundry, a SMIC Company
To improve the gate oxide long term performance of MOSFETs in back side illuminated CMOS image sensors the wafer back is patterned with suitable through-silicon-trenches. We demonstrate that the reliability improvement is due to the annealing of the gate oxide border traps thanks to passivating chemical species carried by trenches.

High-Performance Germanium-on-Silicon Lock-in Pixels for Indirect Time-of-Flight Applications
N. Na, S.-L. Cheng, H.-D. Liu, M.-J. Yang, C.-Y. Chen, H.-W. Chen, Y.-T. Chou, C.-T. Lin, W.-H. Liu, C.-F. Liang, C.-L. Chen, S.-W. Chu, B.-J. Chen, Y.-F. Lyu, and S.-L. Chen, Artilux Inc.
We investigate and demonstrate the first Ge-on-Si lock-in pixels for indirect time-of-flight measurements. Compared to conventional Si lock-in pixels, such novel Ge-on-Si lock-in pixels simultaneously maintain a high quantum efficiency and a high demodulation contrast at a higher operation frequency, which enable consistently superior depth accuracies for both indoor and outdoor scenarios. System performances are evaluated, and pixel quantum efficiencies are measured to be more than 85% and more than 46% at 940nm and 1550nm wavelengths, respectively, along with demodulation contrasts measured to be higher than 0.81 at 300MHz. Our work may open up new routes to high-performance indirect time-of-flight sensors and imagers, as well as potential adoptions of eye-safe lasers (e.g. wavelengths longer than 1.4µm) for consumer electronics and photonics.
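The demodulation-contrast figure above matters because indirect ToF recovers depth from the phase shift of a modulated light wave. A minimal sketch of the standard relations, depth = c·φ/(4πf) and unambiguous range = c/(2f), using only the paper's 300MHz modulation frequency (the phase value is illustrative):

```python
import math

# Indirect (continuous-wave) time-of-flight: depth is recovered from the
# phase shift between the emitted and received modulation.
#   depth = c * phase / (4 * pi * f_mod)
#   unambiguous range = c / (2 * f_mod)
# Illustrative sketch; only the 300 MHz figure comes from the paper.
C = 299_792_458.0  # speed of light, m/s

def itof_depth_m(phase_rad: float, f_mod_hz: float) -> float:
    """Depth in meters for a measured phase shift at modulation f_mod."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

f = 300e6
print(f"Unambiguous range at 300 MHz: {C / (2 * f):.3f} m")
print(f"Depth at a pi/2 phase shift:  {itof_depth_m(math.pi / 2, f):.3f} m")
```

Note the trade-off this makes explicit: a higher modulation frequency improves depth resolution but shrinks the unambiguous range (only about 0.5m at 300MHz), which is why iToF systems typically combine several modulation frequencies.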

CMOS-Integrated Single-Photon-Counting X-Ray Detector using an Amorphous-Selenium Photoconductor with 11×11-µm2 Pixels
A. Camlica, A. El-Falou, R. Mohammadi, P. M. Levine, and K. S. Karim, University of Waterloo
We report, for the first time, results from a single-photon-counting X-ray detector monolithically integrated with an amorphous semiconductor. Our prototype detector combines amorphous selenium (a-Se), a well-known X-ray photoconductive material suitable for large-area applications, with a 0.18-µm-CMOS readout integrated circuit containing two 26×196 photon counting pixel arrays. The detector features 11×11-µm2 pixels to overcome a-Se count-rate limitations by unipolar charge sensing of the faster charge carriers (holes) via a unique pixel geometry that leverages the small pixel effect for the first time in an amorphous semiconductor. Measured results from a mono-energetic radioactive source are presented and demonstrate the untapped potential of using amorphous semiconductors for high-spatial-resolution photon-counting X-ray imaging applications.

High Performance 2D Perovskite/Graphene Optical Synapses as Artificial Eyes
H. Tian, X. Wang, F. Wu, Y. Yang, T.-L. Ren, Tsinghua University
Conventional von Neumann architectures feature large power consumption due to the memory wall. A partially distributed architecture using synapses and neurons can reduce the power. However, there is still a data bus between the image sensor and the synapses/neurons, which leaves plenty of room to further lower the power consumption. Here, a novel concept of an all-distributed architecture using optical synapses is proposed. An ultrasensitive artificial optical synapse based on a graphene/2D perovskite heterostructure shows very high photo-responsivity of up to 730 A/W and high stability over 74 days. Moreover, our optical synapses have unique reconfigurable light-evoked excitatory/inhibitory functions, which is the key to enabling image recognition. The demonstration of an optical synapse array for direct pattern recognition shows an accuracy as high as 80%. Our results shed light on new types of neuromorphic vision applications, such as artificial eyes.

Hybrid bonding for 3D stacked image sensors: impact of pitch shrinkage on interconnect robustness
J. Jourdon, S. Lhostis, S. Moreau**, J. Chossat, M. Arnoux***, C. Sart, Y. Henrion, P. Lamontagne, L. Arnaud**, N. Bresson**, V. Balan**, C. Euvrard**, Y. Exbrayat**, D. Scevola, E. Deloffre, S. Mermoz, A. Martin***, H. Bilgen, F. Andre, C. Charles, D. Bouchu**, A. Farcy, S. Guillaumet, A. Jouve**, H. Fremont*, and S. Cheramy**, STMicroelectronics, *University of Bordeaux, **CEA-LETI, ***STMicroelectronics
We present the first 3D-stacked CMOS Image Sensor with a bonding pitch of 1.44 µm. The influence of the hybrid bonding pitch shrinkage (8.8 to 1.44 µm) from the process point of view to a functional device via the robustness aspect is studied. Smaller bonding pads do not lead to any specific failure.

A few other papers are not directly related to imaging, but might become more relevant some day:

100-340GHz Systems: Transistors and Applications (Invited)
M.J.W. Rodwell, Y. Fang, J. Rode, J. Wu, B. Markman, S. T. Suran Brunelli, J. Klamkin, M. Urteaga*, University of California, Santa Barbara, *Teledyne Scientific Company
We examine potential 100-340 GHz wireless applications in communications and imaging, and examine the prospects of developing the mm-wave transistors needed to support these applications.

High Voltage Generation Using Deep Trench Isolated Photodiodes in a Back Side Illuminated Process
F. Kaklin, J. M. Raynor*, R. K. Henderson, The University of Edinburgh, *STMicroelectronics Imaging Division
We demonstrate passive high voltage generation using photodiodes biased in the photovoltaic region of operation. The photodiodes are integrated in a 90nm back side illuminated (BSI) deep trench isolation (DTI) capable imaging process technology. Four equal area, DTI separated arrays of photodiodes are implemented on a single die and connected using on-chip transmission gates (TG). The TGs control interconnects between the four arrays, connecting them in series or in parallel. A series configuration successfully generates an open-circuit voltage of 1.98V at 1klux. The full array generates 423nW/mm2 at 1klux of white LED illumination in series mode and 425nW/mm2 in parallel mode. Peak conversion efficiency is estimated at 16.1%, at 5.7klux white LED illumination.
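The series/parallel reconfiguration above follows the usual rule: voltages add in series, currents add in parallel, so the ideal deliverable power is the same either way, consistent with the paper's near-identical 423 vs. 425 nW/mm2 figures. A minimal sketch (the per-array values below are illustrative assumptions, chosen so four arrays in series give the reported 1.98V):

```python
# Series vs. parallel connection of four photodiode arrays (sketch).
# Per-array values are assumptions for illustration, not from the paper.
n_arrays = 4
v_cell = 0.495   # V per array (assumed; 4 x 0.495 V = the reported 1.98 V)
i_cell = 1.0e-6  # A per array (assumed)

v_series = n_arrays * v_cell               # voltages add in series
p_series = v_series * i_cell
p_parallel = v_cell * (n_arrays * i_cell)  # currents add in parallel

print(f"Series open-circuit voltage: {v_series:.2f} V")
print(f"Power, series vs parallel: {p_series:.2e} W vs {p_parallel:.2e} W")
```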

Error-Resilient Analog Image Storage and Compression with Analog-Valued RRAM Arrays: An Adaptive Joint Source-Channel Coding Approach
X. Zheng, R. Zarcone*, D. Paiton*, J. Sohn, W. Wan, B. Olshausen* and H. -S. Philip Wong, Stanford University, *University of California, Berkeley
We demonstrate by experiment an image storage and compression task by directly storing analog image data onto an analog-valued RRAM array. A joint source-channel coding algorithm is developed with a neural network to encode and retrieve natural images. The encoder and decoder adapt jointly to the statistics of the images and the statistics of the RRAM array in order to minimize distortion. This adaptive joint source-channel coding method is resilient to RRAM array non-idealities such as cycle-to-cycle and device-to-device variations, time-dependent variability, and non-functional storage cells, while achieving a reasonable reconstruction performance of ~ 20 dB using only 0.1 devices/pixel for the analog image.