Tuesday, September 10, 2024

PhD thesis on CMOS SPAD dToF Systems

Thesis Title: Advanced techniques for SPAD-based CMOS d-ToF systems
Author: Alessandro Tontini
Affiliation: University of Trento and FBK

Full text available here: [link]

Abstract:

Giving electronic devices spatial perception has driven important developments across a wide range of fields, from consumer and entertainment applications to industrial, automotive, and aerospace environments. Among the many techniques available to measure the three-dimensional (3D) structure of a scene, direct time-of-flight (d-ToF) with single-photon avalanche diodes (SPADs) integrated in a standard CMOS process offers unique features that attract strong interest from both researchers and market stakeholders. Despite the clear advantages of SPAD-based CMOS d-ToF systems over other techniques, many challenges remain. The first performance-limiting factor is uncorrelated background light, which sets a physical limit on the maximum achievable measurement range. Another concern is mutual system-to-system interference when many similar systems operate together, which is especially critical in industrial and automotive scenarios where safety of operation is a pillar. Each application, with its own set of requirements, leads to a different set of design challenges. However, given the statistical nature of photon arrival, the common denominator of such systems is the need to operate statistically, i.e., to run a number of repeated acquisitions from which the time-of-flight (ToF) information is extracted. The gold standard for managing the potentially huge amount of data is to compress it into a histogram memory that represents the statistical distribution of photon arrival times collected during the acquisition. Given the growing interest in long-range systems capable of both high imaging and ranging resolution, the amount of data to be handled reaches alarming levels. In this thesis, we present an in-depth investigation of these limitations. The problem of background light has been studied extensively over the years, and a wide set of mitigation techniques has already been proposed. However, the trend has been to investigate or propose individual solutions, with little knowledge of how different implementations behave in different scenarios. For this reason, our effort focused on comparing existing techniques against each other, highlighting the pros and cons of each and suggesting how they can be combined to increase performance. Regarding mutual system interference, we propose the first per-pixel implementation of an active interference-rejection technique, with measurement results from a purpose-designed chip. To advance the state of the art in reducing the amount of data generated by such systems, we provide, for the first time, a methodology that completely avoids the construction of a resource-consuming histogram of timestamps. Many of our findings are based on preliminary Monte Carlo simulations, while the most important achievements in interference rejection and data reduction are supported by measurements obtained with real sensors.
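To make the data-volume argument concrete, here is a minimal Python sketch (our illustration, not code from the thesis) of the conventional histogram-based d-ToF flow the abstract describes: photon timestamps from repeated laser shots are compressed into a histogram, and the ToF is read off as the histogram peak. All parameter values (bin width, signal and background rates) are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not values from the thesis)
N_SHOTS = 2000          # repeated laser shots
T_RANGE = 200e-9        # time window per shot (200 ns)
BIN_W = 0.5e-9          # histogram bin width (0.5 ns TDC resolution)
TOF_TRUE = 66.7e-9      # true time of flight (~10 m target)
SIGMA_LASER = 1e-9      # laser pulse timing spread
P_SIGNAL = 0.10         # prob. of detecting a laser photon per shot
BG_RATE = 5e6           # background photon detection rate (counts/s)

timestamps = []
for _ in range(N_SHOTS):
    # Uncorrelated background photons: Poisson count, uniform in time
    n_bg = rng.poisson(BG_RATE * T_RANGE)
    timestamps.extend(rng.uniform(0, T_RANGE, n_bg))
    # At most one signal photon per shot, jittered around the true ToF
    if rng.random() < P_SIGNAL:
        timestamps.append(rng.normal(TOF_TRUE, SIGMA_LASER))

# Compress all timestamps into a histogram (the memory-hungry step)
bins = np.arange(0, T_RANGE + BIN_W, BIN_W)
hist, edges = np.histogram(timestamps, bins=bins)

# ToF estimate = center of the peak bin; distance via c/2
tof_est = edges[np.argmax(hist)] + BIN_W / 2
print(f"estimated ToF {tof_est*1e9:.1f} ns -> {tof_est*3e8/2:.2f} m")

Note how the histogram size scales with the ratio of time window to bin width: longer range and finer timing resolution inflate the per-pixel memory, which is exactly the cost the histogram-less approach of Chapter 6 is designed to avoid.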

Contents

1 Introduction
1.1 Single Photon Avalanche Diode (SPAD)
1.1.1 Passive quenching
1.1.2 Active quenching
1.1.3 Photon Detection Efficiency (PDE)
1.1.4 Dark Count Rate (DCR) and afterpulsing

2 Related work
2.1 Pioneering results
2.2 Main challenges
2.3 Integration challenges

3 Numerical modelling of SPAD-based CMOS d-ToF sensors
3.1 Simulator architecture overview
3.2 System features modeling
3.2.1 Optical model
3.2.2 Illumination source - modeling of the laser emission profile
3.3 Monte Carlo simulation
3.3.1 Generation of SPAD-related events
3.3.2 Synchronous and asynchronous SPAD model
3.4 Experimental results
3.5 Summary

4 Analysis and comparative evaluation of background rejection techniques
4.1 Background rejection techniques
4.1.1 Photon coincidence technique
4.1.2 Auto-Sensitivity (AS) technique
4.1.3 Last-hit detection
4.2 Results
4.2.1 Auto-Sensitivity vs. photon coincidence
4.2.2 Comparison of photon coincidence circuits
4.2.3 Last-hit detection characterization
4.3 Automatic adaptation of pixel parameters
4.4 Summary


5 A SPAD-based linear sensor with in-pixel temporal pattern detection for interference and background rejection with smart readout scheme
5.1 Architecture
5.1.1 Pixel architecture
5.1.2 Readout architecture
5.2 Characterization
5.2.1 In-pixel laser pattern detection characterization
5.2.2 Readout performance assessment
5.3 Operating conditions and limits
5.4 Summary

6 SPAD response linearization: histogram-less LiDAR and high photon flux measurements
6.1 Preliminary validation
6.1.1 Typical d-ToF operation
6.1.2 Histogram-less approach
6.2 Mathematical analysis
6.3 Acquisition schemes
6.3.1 Acquisition scheme #1: Acquire or discard
6.3.2 Acquisition scheme #2: Time-gated
6.3.3 Discussion on implementation, expected performance and mathematical analysis
6.3.4 Comparison with state-of-the-art
6.4 Measurement results
6.4.1 Preliminary considerations
6.4.2 Measurements with background light only
6.4.3 Measurements with background and laser light and extraction of the ToF
6.5 Summary

7 Conclusion
7.1 Results
7.1.1 Modelling of SPAD-based d-ToF systems
7.1.2 Comparative evaluation of background-rejection techniques
7.1.3 Interference rejection
7.1.4 Histogram-less and high-flux LiDAR
7.2 Future work and research
Bibliography

Friday, September 06, 2024

8th Space & Scientific CMOS Image Sensors workshop - abstracts due Sep 13, 2024

CNES, ESA, AIRBUS DEFENCE & SPACE, THALES ALENIA SPACE, SODERN, OHB, ISAE SUP’AERO are pleased to invite you to the 8th “Space & Scientific CMOS Image Sensors” workshop to be held in TOULOUSE on November 26th and 27th 2024 within the framework of the Optics and Optoelectronics COMET (Communities of Experts).

The aim of this workshop is to focus on CMOS image sensors for scientific and space applications. Although it is organized by members of the space community, the workshop is widely open to other professional imaging applications, such as machine vision, medical imaging, Advanced Driver Assistance Systems (ADAS), and broadcast (UHDTV), that drive the development of new pixel and sensor architectures for high-end applications. We also invite laboratories and research centers that develop custom CMOS image sensors with advanced on-chip smart design to join this workshop.

Topics
- Pixel design (high QE, FWC, MTF optimization, low lag,…)
- Electrical design (low noise amplifiers, shutter, CDS, high speed architectures, TDI, HDR)
- On-chip ADC or TDC (in pixel, column, …)
- On-chip processing (smart sensors, multiple gains, summation, corrections)
- Low-light detection (electron multiplication, avalanche photodiodes, quanta image sensors)
- Photon counting, Time resolving detectors (gated, time-correlated single-photon counting)
- Hyperspectral architectures
- Materials (thin film, optical layers, dopant, high-resistivity, amorphous Si)
- Processes (backside thinning, hybridization, 3D stacking, anti-reflection coating)
- Packaging
- Optical design (micro-lenses, trench isolation, filters)
- Large size devices (stitching, butting)
- High speed interfaces
- Focal plane architectures
- CMOS image sensors with recent space heritage (in-flight performance)

Venue
DIAGORA
Centre de Congrès et d'Exposition. 150, rue Pierre Gilles de Gennes
31670 TOULOUSE – LABEGE

Abstract submission
Please send a short abstract (one A4 page maximum, in Word or PDF format) giving the title, the authors' names and affiliations, and the subject of your talk to L-WCIS24@cnes.fr

Workshop format & official language
Presentations at the workshop will be oral. The official language of the workshop is English.

Slide submission
After notification of abstract acceptance, authors will be asked to prepare their presentation in PDF or PowerPoint format, to present it at the workshop, and to provide a copy to the organizing committee with authorization to make it available to all attendees and online for the CCT members.

Registration
Registration fee: 100 €.
https://evenium.events/space-and-scientific-cmos-image-sensors-2024/ 

Calendar
13th September 2024 Deadline for abstract submission
11th October 2024 Author notification & preliminary programme
14th October 2024 Registration opening
8th November 2024 Final programme
26th-27th November 2024 Workshop

TriEye launches TES200 SWIR Image Sensor

TriEye has launched the TES200, a 1.3MP SWIR image sensor for machine vision and robotics. See press release below.

TEL AVIV, Israel, September 3, 2024 – TriEye, pioneer of the world's first cost-effective, mass-market Short-Wave Infrared (SWIR) sensing technology, announced today the release of the TES200 1.3MP SWIR image sensor. Based on the innovative TriEye CMOS image sensor technology that enables SWIR capabilities using a CMOS manufacturing process, the TES200 is the first commercially available product in the Raven product family.

The TES200 operates in the 700nm to 1650nm wavelength range, delivering high sensitivity and 1.3MP resolution. With its large format, high frame rate, and low power consumption, the TES200 offers enhanced sensitivity and dynamic range. This makes the new image sensor ideal for imaging and sensing applications across various industries, including automotive, industrial, robotics, and biometrics.

"We are proud to announce the commercial availability of the TES200 image sensor. Our CMOS-based solution has set new standards in the automotive market, and with the rise of new Artificial Intelligence (AI) systems, the demand for more sensors and more information has increased. The TES200 now brings these advanced SWIR capabilities to machine vision and robotic systems in various industries," said Avi Bakal, CEO of TriEye. "We are excited to offer a solution that delivers a new domain of capabilities in a cost-effective and scalable way, broadening the reach of advanced sensing technology."

The TriEye Raven image sensor family is designed for emerging machine vision and robotics applications, incorporating the latest SWIR pixel and packaging technologies. The TES200 is immediately available in sample quantities, with production orders available for delivery in Q2 2025.


Experience the TES200 in Action at CIOE and VISION 2024

We invite you to explore the advanced capabilities of the TES200 at the CIOE exhibition, held from September 11 to 13, 2024, at the Shenzhen World Exhibition and Convention Center, China, within the Lasers Technology & Intelligent Manufacturing Expo. View the demo at the Vertilas booth no. 4D021, 4D022. Then, meet TriEye's executive team at VISION 2024 in Stuttgart, Germany, from October 8 to 10, at the TriEye booth no. 8A08, where you can experience a live demo of the TES200 and the brand-new Ovi 2.0 devkit, and learn firsthand about our latest developments in SWIR imaging.

About TriEye 

TriEye is the pioneer of the world's first CMOS-based Short-Wave Infrared (SWIR) image sensing solutions. Based on advanced academic research, TriEye's breakthrough technology enables HD SWIR imaging and accurate deterministic 3D sensing in all weather and ambient lighting conditions. The company's semiconductor and photonics technology enabled the development of the SEDAR (Spectrum Enhanced Detection And Ranging) platform, which allows perception systems to operate and deliver reliable image data and actionable information while reducing expenditure by up to 100x compared to existing industry rates. For more information, visit www.trieye.tech

Thursday, August 29, 2024

2024 SEMI MEMS and Imaging Summit program announced

SEMI MEMS & Imaging Sensors Summit 2024 will take place November 14-15 at the International Conference Center Munich (ICM), Messe München, Germany.

Thursday, 14th November 2024 

Session 1: Market Dynamics: Landscape and Growth Strategies

09:00  Welcome Remarks
Laith Altimime, President, SEMI Europe

09:20  Opening Remarks by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd

09:25  Keynote: Smart Sensors for Smart Life – How Advanced Sensor Technologies Enable Life-Changing Use Cases
Stefan Finkbeiner, General Manager, Bosch Sensortec

09:45  Keynote: Sensing the World: Innovating for a More Sustainable Future
Simone Ferri, APMS Group Vice President, MEMS sub-group General Manager, STMicroelectronics

10:05  Reserved for Yole Développement

10:25  Key Takeaways by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd

10:30  Networking Coffee Break

Session 2: Sustainable Supply Chain Capabilities

11:10  Opening Remarks by Session Chair
Pawel Malinowski, Program Manager and Researcher, imec

11:15  A Paradigm Shift From Imaging to Vision: Oculi Enables 600x Reduction in Latency-Energy Factor for Visual Edge Applications
Charbel Rizk, Founder & CEO, Oculi

11:35  Reserved for Comet Yxlon

11:55  Key Takeaways by Session Chair
Pawel Malinowski, Program Manager and Researcher, imec

12:00  Networking Lunch

Session 3: MEMS - Exploring Future Trends for Technologies and Device Manufacturing

13:20  Opening Remarks by Session Chair
Pierre Damien Berger, MEMS Industrial Partnerships Manager, CEA LETI

13:25  Unlocking Novel Opportunities: How 300mm-capable MEMS Foundries Will Change the Game
Jessica Gomez, CEO, Rogue Valley Microdevices

13:45  Trends in Emerging MEMS
Alissa Fitzgerald, CEO, A.M. Fitzgerald & Associates, LLC

14:05  The Most Common Antistiction Films are PFAS, Now What?
David Springer, Product Manager, MVD and Release Etch Products, KLA Corporation

14:25  Reserved for Infineon

14:45  Latest Innovations in MEMS Wafer Bonding
Thomas Uhrmann, Director of Business Development, EV Group

15:05  Key Takeaways by Session Chair
Pierre Damien Berger, MEMS Industrial Partnerships Manager, CEA LETI

Session 4: Imaging - Exploring Future Trends for Technologies and Device Manufacturing

15:10  Opening Remarks by Session Chair
Stefano Guerrieri, Engineering Fellow and Key Expert Imager & Sensor Components, ams OSRAM

15:15  Topic Coming Soon
Avi Bakal, CEO & Co-founder, TriEye

15:35  Active Hyperspectral Imaging Using Extremely Fast Tunable SWIR Light Source
Jussi Soukkamaki, Lead, Hyperspectral & Imaging Technologies, VTT Technical Research Centre of Finland Ltd

15:55  Networking Coffee Break

16:40  Reserved

17:00  Reserved for CEA-Leti

17:20  Reserved for STMicroelectronics

17:40  Key Takeaways by Session Chair
Stefano Guerrieri, Engineering Fellow and Key Expert Imager & Sensor Components, ams OSRAM

Friday, 15th November 2024 

Session 5: MEMS and Imaging Young Talent

09:00  Opening Remarks by Session Chair
Dimitrios Damianos, Project Manager, Yole Group

09:05  Unlocking Infrared Multispectral Imaging with Pixelated Metasurface Technology
Charles Altuzarra, Chief Executive Officer & Co-founder, Metahelios

09:10  Electrically Tunable Dual-Band VIS/SWIR Imaging and Sensing
Andrea Ballabio, CEO, EYE4NIR

09:15  FMCW Chip-Scale LiDARs Scale Up for Large Volume Markets Thanks to Silicon Photonics Technology
Simoens François, CEO, SteerLight

09:20  ShadowChrome: A Novel Approach to an Old Problem
Geoff Rhoads, Chief Technology Officer, Transformative Optics Corporation

09:25  Feasibility Investigation of Spherically Bent Image Sensors
Amit Pandey, PhD Student, Technische Hochschule Ingolstadt

09:30  Intelligence Through Vision
Stijn Goossens, CTO, Qurv

09:35  Next Generation Quantum Dot SWIR Sensors
Artem Shulga, CEO & Founder, QDI Systems

09:40  Closing Remarks by Session Chair
Dimitrios Damianos, Project Manager, Yole Group

09:45  Networking Coffee Break

Session 6: Innovations for Next-Gen Applications: Smart Mobility

10:35  Opening Remarks by Session Chair
Bernd Dielacher, Business Development Manager MEMS, EVG

10:40  Reserved

11:00  New Topology for MEMS Advances Performance and Speeds Manufacturing
Eric Aguilar, CEO, Omnitron Sensors, Inc.

11:20  Key Takeaways by Session Chair
Bernd Dielacher, Business Development Manager MEMS, EVG

Session 7: Innovations for Next-Gen Applications: Health

11:25  Opening Remarks by Session Chair
Ran Ruby YAN, Director of HMI & HealthTech Business Line, GLOBALFOUNDRIES

11:30  Reserved

11:50  Sensors for Monitoring Vital Signs in Wearable Devices
Markus Arzberger, Senior Director, ams-OSRAM International GmbH

12:10  Pioneering Non-Invasive Wearable MIR Spectrometry for Key Health Biomarkers Analysis
Jan F. Kischkat, CEO, Quantune Technologies GmbH

12:30  Key Takeaways by Session Chair
Ran Ruby YAN, Director of HMI & HealthTech Business Line, GLOBALFOUNDRIES

12:35  End of Conference Reflections by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd

12:45  Closing Remarks
Laith Altimime, President, SEMI Europe

12:50  Networking Lunch

Tuesday, August 27, 2024

IEEE SENSORS 2024 --- image sensor topics announced

The topics and authors for the following two image-sensor events at the IEEE SENSORS 2024 Conference have been finalized. The conference will be held in Kobe, Japan, from 20-23 October 2024. It will provide the opportunity to hear world-class speakers in the field of image sensors and to sample the broader sensor ecosystem in which imaging fits.

Workshop: “From Imaging to Sensing: Latest and Future Trends of CMOS Image Sensors” [Sunday, 20 October]

Organizers: Sozo Yokogawa (Sony Semiconductor Solutions Corp.) • Erez Tadmor (onsemi)

“Trends and Developments in State-of-the-Art CMOS Image Sensors”, Daniel McGrath, TechInsights
“CMOS Image Sensor Technology: what we have solved, what are to be solved”, Eiichi Funatsu, OMNIVISION
“Automotive Imaging: Beyond human Vision”, Vladi Korobov, onsemi
“Recent Evolution of CMOS Image Sensor Pixel Technology”, Bumsuk Kim et al., Samsung Electronics
“High precision ToF image sensor and system for 3D scanning application”, Keita Yasutomi, Shizuoka University
“High-definition SPAD image sensors for computer vision applications”, Kazuhiro Morimoto, Canon Inc.
“Single Photon Avalanche Diode Sensor Technologies for Pixel Size Shrinkage, Photon Detection Efficiency Enhancement and 3.36-μm-pitch Photon-counting Architecture”, Jun Ogi, Sony Semiconductor Solutions Corp.
“SWIR Single-Photon Detection with Ge-on-Si Technology”, Neil Na, Artilux Inc.
“From SPADs to smart sensors: ToF system innovation and AI enable endless application”, Laurent Plaza & Olivier Lemarchand, STMicroelectronics
“Depth Sensing Technologies, Cameras and Sensors for VR and AR”, Harish Venkataraman, Meta Inc.
 
Focus session: Stacking in Image Sensors [Monday, 21 October]

Organizer: S-G. Wu, Brillnics

Co-chairs: DN Yaung, TSMC; John McCarten, L3 Harris

Over the past decade, 3-dimensional (3D) wafer-level stacked backside-illuminated (BSI) CMOS image sensors (CIS) have achieved rapid progress in mass production. This focus session on stacking in image sensors features four invited papers exploring the evolution of sensor stacking technology, from process development and circuit architecture to AI/edge computing in system integration.

“The Productization of Stacking in Image Sensors”, Daniel McGrath, TechInsights
“Evolution of Image Sensing and Computing Architectures with Stacking Device Technologies”, BC Hsieh, Qualcomm
“Event-based vision sensor”, Christoph Posch, Prophesee
“Evolution of digital pixel sensor (DPS) and advancement by stacking technologies”, Rimon Ikeno, Brillnics

Wednesday, August 21, 2024

Galaxycore educational videos


Are you curious about how CMOS image sensors capture such clear and vivid images? Start your journey with the first episode of "CIS Explained". In this episode, we dive deep into the workings of these sophisticated sensors, from the basics of pixel arrays to the intricacies of signal conversion.
This episode serves as your gateway to understanding CMOS image sensors.


In this video, we're breaking down Quantum Efficiency (QE) and its crucial role in CIS. QE is a critical measure of how efficiently our sensors convert incoming light into electrical signals, directly affecting image accuracy and quality. This video will guide you through what QE means for CIS, its impact on your images, and how we're improving QE for better, more reliable imaging.
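To pin down the definition used in the video: QE at a given wavelength is the ratio of photo-generated electrons to photons incident on the sensor. Below is a minimal Python sketch of that calculation (a generic textbook illustration, not GalaxyCore's measurement procedure; the function name and all values are our own, chosen for illustration).

# Minimal quantum-efficiency estimate from first principles
H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)

def quantum_efficiency(optical_power_w, wavelength_m, exposure_s, electrons):
    """QE = electrons generated / photons incident."""
    photon_energy = H * C / wavelength_m                    # energy per photon (J)
    photons = optical_power_w * exposure_s / photon_energy  # incident photon count
    return electrons / photons

# Example: 1 pW of 550 nm light for 10 ms yielding ~21,500 electrons -> QE ~0.78
print(f"QE = {quantum_efficiency(1e-12, 550e-9, 10e-3, 21500):.2f}")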


GalaxyCore DAG HDR Technology Film


Exploring GalaxyCore's Sensor-Shift Optical Image Stabilization (OIS) in under Two Minutes


GalaxyCore's COM packaging technology—a breakthrough in CIS packaging. This video explains how placing two suspended gold wires on the image sensor and bonding it to an IR base can enhance the durability and clarity of image sensors, prevent contamination, and ensure optimal optical alignment.

Monday, August 19, 2024

Avoiding information loss in the photon transfer method

In a recent paper titled "PCH-EM: A Solution to Information Loss in the Photon Transfer Method" in IEEE Trans. on Electron Devices, Aaron Hendrickson et al. propose a new statistical technique to estimate CIS parameters such as conversion gain and read noise.

Abstract: Working from a Poisson-Gaussian noise model, a multisample extension of the photon counting histogram expectation-maximization (PCH-EM) algorithm is derived as a general-purpose alternative to the photon transfer (PT) method. This algorithm is derived from the same model, requires the same experimental data, and estimates the same sensor performance parameters as the time-tested PT method, all while obtaining lower uncertainty estimates. It is shown that as read noise becomes large, multiple data samples are necessary to capture enough information about the parameters of a device under test, justifying the need for a multisample extension. An estimation procedure is devised consisting of initial PT characterization followed by repeated iteration of PCH-EM to demonstrate the improvement in estimating uncertainty achievable with PCH-EM, particularly in the regime of deep subelectron read noise (DSERN). A statistical argument based on the information theoretic concept of sufficiency is formulated to explain how PT data reduction procedures discard information contained in raw sensor data, thus explaining why the proposed algorithm is able to obtain lower uncertainty estimates of key sensor performance parameters, such as read noise and conversion gain. Experimental data captured from a CMOS quanta image sensor with DSERN are then used to demonstrate the algorithm’s usage and validate the underlying theory and statistical model. In support of the reproducible research effort, the code associated with this work can be obtained on the MathWorks file exchange (FEX) (Hendrickson et al., 2024).
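For readers who want to see the baseline being improved upon, the sketch below simulates the Poisson-Gaussian pixel model the abstract starts from and applies the classic photon transfer mean-variance reduction to estimate conversion gain and read noise. This is our illustration of the PT baseline only, not the PCH-EM algorithm, and every parameter value is an assumption chosen for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Poisson-Gaussian pixel model (illustrative parameters)
G = 2.0          # conversion gain (e-/DN)
READ_E = 1.5     # read noise (e- rms)
flux_levels = np.linspace(5, 500, 20)   # mean photoelectrons per frame
N = 20000                                # samples per flux level

means, variances = [], []
for mu_e in flux_levels:
    electrons = rng.poisson(mu_e, N)                 # shot noise (Poisson)
    dn = (electrons + rng.normal(0, READ_E, N)) / G  # add read noise, apply gain
    means.append(dn.mean())                          # (quantization ignored here)
    variances.append(dn.var())

# Photon transfer: Var_DN = Mean_DN / g + (read_e / g)^2, so slope = 1/g
slope, intercept = np.polyfit(means, variances, 1)
print(f"estimated gain {1/slope:.2f} e-/DN (true {G})")
print(f"estimated read noise {np.sqrt(intercept)/slope:.2f} e- (true {READ_E})")

PCH-EM instead fits the full photon counting histogram of the raw samples rather than reducing them to a (mean, variance) pair, which is the "information loss" the title refers to.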

[Figure] RRMSE versus read noise for parameter estimates computed using the constant-flux implementation of PT and PCH-EM. RRMSE curves for PT μ̃ and σ̃ grow large near σ_read = 0 and were clipped from the plot window.


Open access paper link: https://ieeexplore.ieee.org/document/10570238

Job Postings - Week of 18 August 2024

Omnivision

Principal Image Sensor Technology Engineer

Santa Clara, California, USA

Link

Teledyne

Product Assurance Engineer

Chelmsford, England, UK

Link

Tokyo Electron Labs

Heterogeneous Integration Process Engineer I

Albany, New York, USA

Link

Fraunhofer IMS

Doctoral Candidate (PhD student), Optical Detectors with Integrated 2D Materials

Duisburg, Germany

Link

AMETEK Forza Silicon

Principal Mixed Signal Design Engineer

Pasadena, CA, USA

Link

University of Birmingham

Professor of Silicon Detector Instrumentation for Particle Physics

Birmingham, England, UK

Link

Ouster

Sensor Package Design Engineer

San Francisco, California, USA

Link

Beijing Institute of High Energy Physics

CEPC Overseas High-Level Young Talents

Beijing, China

Link

Thermo Fisher Scientific

Sr. Staff Product Engineer

Waltham, Massachusetts, USA (Remote)

Link