Wednesday, April 30, 2025

Paper on pixel reverse engineering technique

In an arXiv preprint titled "Multi-Length-Scale Dopants Analysis of an Image Sensor via Focused Ion Beam-Secondary Ion Mass Spectrometry and Atom Probe Tomography", Guerguis et al. write:

The following article presents a multi-length-scale characterization approach for investigating doping chemistry and spatial distributions within semiconductors, as demonstrated using a state-of-the-art CMOS image sensor. With an intricate structural layout and varying doping types/concentration levels, this device is representative of the current challenges faced in measuring dopants within confined volumes using conventional techniques. Focused ion beam-secondary ion mass spectrometry is applied to produce large-area compositional maps with a sub-20 nm resolution, while atom probe tomography is used to extract atomic-scale quantitative dopant profiles. Leveraging the complementary capabilities of the two methods, this workflow is shown to be an effective approach for resolving nano- and micro-scale dopant information, crucial for optimizing the performance and reliability of advanced semiconductor devices.

Preprint: https://arxiv.org/pdf/2501.08980 


Monday, April 28, 2025

Lecture on fundamentals of CMOS image sensors

The Fundamentals of CMOS Image Sensors with Richard Crisp


This video provides a sneak peek of "CMOS Image Sensors: Technology, Applications, and Camera Design Methodology," an SPIE course taught by imaging systems expert Richard Crisp. The course covers everything from the basics of photon capture to sensor architecture and real-world system implementation.
The preview highlights key differences between CCD and CMOS image sensors, delves into common sensor architectures such as rolling shutter and global shutter, and explains the distinction between frontside and backside illumination.
It also introduces the primary noise sources in image sensors and how they can be managed through design and optimization techniques such as photon transfer analysis and MTF assessment.
You'll also see how the course approaches imaging system design using a top-down methodology. This includes considerations regarding pixel architecture, optics, frame rate, and data bandwidth, all demonstrated through practical examples, such as a networked video camera design.
Whether you're an engineer, scientist, or technical manager working with imaging systems, this course is designed to help you better understand the technology behind modern CMOS image sensors and how to make informed design choices. Enjoy!
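Photon transfer analysis, mentioned above, estimates a sensor's conversion gain from the slope of temporal noise variance versus mean signal. Below is a minimal sketch of the idea using simulated flat fields; the gain and read-noise values are assumed for illustration and are not tied to any real sensor or to the course material:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor parameters (illustrative only)
gain_e_per_dn = 2.0      # conversion gain, electrons per DN
read_noise_e = 3.0       # read noise, electrons RMS

def flat_pair(mean_electrons, n=100_000):
    """Simulate two flat-field exposures at the same light level (in DN)."""
    def frame():
        signal_e = rng.poisson(mean_electrons, n) + rng.normal(0, read_noise_e, n)
        return signal_e / gain_e_per_dn
    return frame(), frame()

means, variances = [], []
for mu_e in [200, 500, 1000, 2000, 5000]:
    a, b = flat_pair(mu_e)
    means.append((a.mean() + b.mean()) / 2)
    # Differencing two frames cancels fixed-pattern noise; the variance
    # of the difference is twice the temporal noise variance.
    variances.append(np.var(a - b) / 2)

# In the shot-noise-limited regime, variance_DN = mean_DN / K (plus a
# read-noise offset), so the slope of variance vs. mean gives 1/K.
slope = np.polyfit(means, variances, 1)[0]
print(f"Estimated conversion gain: {1/slope:.2f} e-/DN")
```

The slope recovers the assumed 2.0 e-/DN because Poisson shot noise has variance equal to its mean in electrons; read noise only shifts the intercept.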

Friday, April 25, 2025

3D effects in time-delay integration sensor pixels

Guo et al. from Changchun Institute of Optics, University of Chinese Academy of Sciences, and Gpixel Inc. published a paper titled "Study on 3D Effects on Small Time Delay Integration Image Sensor Pixels" in Sensors.

Abstract: This paper demonstrates the impact of 3D effects on performance parameters in small-sized Time Delay Integration (TDI) image sensor pixels. In this paper, 2D and 3D simulation models of 3.5 μm × 3.5 μm small-sized TDI pixels were constructed, utilizing a three-phase pixel structure integrated with a lateral anti-blooming structure. The simulation experiments reveal the limitations of traditional 2D pixel simulation models by comparing the 2D and 3D structure simulation results. This research validates the influence of the 3D effects on the barrier height of the anti-blooming structure and the full well potential and proposes methods to optimize the full well potential and the operating voltage of the anti-blooming structure. To verify the simulation results, test chips with pixel sizes of 3.5 μm × 3.5 μm and 7.0 μm × 7.0 μm were designed and manufactured based on a 90 nm CCD-in-CMOS process. The measurement results of the test chips matched the simulation data closely and demonstrated excellent performance: the 3.5 μm × 3.5 μm pixel achieved a full well capacity of 9 ke- while maintaining a charge transfer efficiency of over 0.99998.

Paper link [open access]: https://www.mdpi.com/1424-8220/25/7/1953

Hamamatsu SPAD tutorial

SPAD and SPAD Arrays: Theory, Practice, and Applications

The video is a comprehensive webinar on Single Photon Avalanche Diodes (SPADs) and SPAD arrays, addressing their theory, applications, and recent advancements. It is led by experts from the New Jersey Institute of Technology and Hamamatsu, discussing technical fundamentals, challenges, and innovative solutions to improve the performance of SPAD devices. Key applications highlighted include fluorescence lifetime imaging, remote gas sensing, quantum key distribution, and 3D radiation detection, showcasing SPAD's unique ability to timestamp events and enhance photon detection efficiency.
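One piece of SPAD theory typically covered in such tutorials is dead-time correction: after each avalanche the detector is blind for a time tau, so measured count rates saturate at high photon flux. Below is a sketch of the standard non-paralyzable dead-time model; the 50 ns dead time is an assumed round number, not a Hamamatsu specification:

```python
# Non-paralyzable dead-time model: with dead time tau, the measured
# count rate m saturates as the true photon rate n grows.
# Numbers below are illustrative, not tied to any specific device.
def measured_rate(true_rate_hz, dead_time_s):
    return true_rate_hz / (1.0 + true_rate_hz * dead_time_s)

def corrected_rate(measured_hz, dead_time_s):
    # Invert the relation to recover the true rate from measured counts.
    return measured_hz / (1.0 - measured_hz * dead_time_s)

tau = 50e-9  # 50 ns dead time (assumed)
for n in [1e5, 1e6, 1e7]:
    m = measured_rate(n, tau)
    print(f"true {n:.0e} Hz -> measured {m:.3e} Hz "
          f"-> corrected {corrected_rate(m, tau):.3e} Hz")
```

At 10 Mcps true rate with a 50 ns dead time, a third of the counts are lost, which is why rate correction matters in quantitative SPAD measurements.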

Wednesday, April 23, 2025

Speculation about Samsung exiting CIS business?

A recent speculative news article suggests that Samsung is weighing an exit from the CIS business following SK Hynix's recent departure.

News source: https://www.digitimes.com/news/a20250312PD213/cis-samsung-sk-hynix-business-lsi.html

SK Hynix is shutting down its CMOS image sensor (CIS) business, fueling industry speculation over whether Samsung Electronics will follow suit. Samsung's system LSI division, which oversees its CIS operations, is undergoing an operational diagnosis...

Monday, April 21, 2025

ICCP 2024 Keynote on Event Cameras


In this keynote held at the 2024 International Conference on Computational Photography, Prof. Davide Scaramuzza from the University of Zurich presents a vision for event cameras, which are bio-inspired vision sensors that outperform conventional cameras with ultra-low latency, high dynamic range, and minimal power consumption. He dives into the motivation behind event-based cameras, explains how these sensors work, and explores their mathematical modeling and processing frameworks. He highlights cutting-edge applications across computer vision, robotics, autonomous vehicles, virtual reality, and mobile devices, while also addressing the open challenges and future directions shaping this exciting field.
00:00 - Why event cameras matter to robotics and computer vision
07:24 - Bandwidth-latency tradeoff
08:24 - Working principle of the event camera
10:50 - Who sells event cameras
12:27 - Relation between event cameras and the biological eye
13:19 - Mathematical model of the event camera
15:35 - Image reconstruction from events
18:32 - A simple optical-flow algorithm
20:20 - How to process events in general
21:28 - 1st order approximation of the event generation model
23:56 - Application 1: Event-based feature tracking
25:03 - Application 2: Ultimate SLAM
26:30 - Application 3: Autonomous navigation in low light
27:38 - Application 4: Keeping drones flying when a rotor fails
31:06 - Contrast maximization for event cameras
34:14 - Application 1: Video stabilization
35:16 - Application 2: Motion segmentation
36:32 - Application 3: Dodging dynamic objects
38:57 - Application 4: Catching dynamic objects
39:41 - Application 5: High-speed inspection at Boeing and Strata
41:33 - Combining events and RGB cameras and how to apply deep learning
45:18 - Application 1: Slow-motion video
48:34 - Application 2: Video deblurring
49:45 - Application 3: Advanced Driving Assistant Systems
56:34 - History and future of event cameras
58:42 - Reading material and Q&A
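
The idealized event-generation model discussed in the talk (around 13:19) can be sketched in a few lines: a pixel fires an event whenever its log-intensity changes by a contrast threshold C since its last event, with polarity indicating the direction of change. The threshold and test signal below are arbitrary illustrative choices, not values from the lecture:

```python
import numpy as np

def generate_events(intensity, times, C=0.2):
    """Emit (time, polarity) events when log-intensity changes by >= C."""
    logI = np.log(intensity)
    ref = logI[0]  # log-intensity at the last event (or reset)
    events = []
    for t, L in zip(times[1:], logI[1:]):
        while L - ref >= C:      # positive (ON) events
            ref += C
            events.append((t, +1))
        while ref - L >= C:      # negative (OFF) events
            ref -= C
            events.append((t, -1))
    return events

# A single pixel watching a 2 Hz flicker for one second
t = np.linspace(0, 1, 1000)
signal = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)
evts = generate_events(signal, t)
on = sum(1 for _, p in evts if p > 0)
off = len(evts) - on
print(f"{len(evts)} events ({on} ON, {off} OFF)")
```

Note how events cluster where the signal changes fastest and vanish when it is static; this data-driven sparsity is the source of the latency and bandwidth advantages discussed in the talk.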

Friday, April 18, 2025

Sony releases SPAD-based depth sensor

From PetaPixel: https://petapixel.com/2025/04/15/sony-unveils-the-worlds-smallest-and-lightest-lidar-depth-sensor/

Sony announced the AS-DT1, the world’s smallest and lightest miniature precision LiDAR depth sensor.

Measuring a mere 29 by 29 by 31 millimeters (1.14 by 1.14 by 1.22 inches) excluding protrusions, the Sony AS-DT1 LiDAR Depth Sensor relies upon sophisticated miniaturization and optical lens technologies from Sony’s machine vision industrial cameras to accurately measure distance and range. The device utilizes “Direct Time of Flight” (dToF) LiDAR technology and features a Sony Single Photon Avalanche Diode (SPAD) image sensor. 

From the official Sony webpage: https://pro.sony/ue_US/products/lidar/as-dt1

  • Dimensions: 1.14 (W) x 1.14 (H) x 1.22 (D) in (29 x 29 x 31 mm), excluding protrusions
  • Weight: 50 g (1.1 oz) or less
  • Utilizes dToF LiDAR technology
  • Single Photon Avalanche Diode (SPAD) sensor
  • Maximum ranging distance: 40 m (131 ft) indoor, 20 m (65.6 ft) outdoor (at 15 fps, 50 percent reflectivity, center)
  • Measurement accuracy at 10 m: ±0.2 in, indoor/outdoor
  • Distance resolution: 0.98 in
  • Frame rate: 30 fps (15 fps in maximum ranging distance mode)
  • Number of ranging points: 576 (24 x 24)
  • Laser wavelength: 940 nm
  • Lightweight aluminum alloy housing structure
  • 2 USB-C ports
  • Connector for external power, UART interface, and trigger
  • HFoV: 30° or more
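
As a back-of-the-envelope check on the dToF principle behind these figures (a sketch of the physics only, not Sony's implementation), distance follows from the photon round-trip time, d = c·t/2:

```python
# Direct time-of-flight: distance = c * t_round_trip / 2.
# Quick sanity checks against the published AS-DT1 figures.
C = 299_792_458.0  # speed of light, m/s

def round_trip_time(distance_m):
    return 2 * distance_m / C

# Round-trip time for the 40 m indoor maximum range
print(f"40 m round trip: {round_trip_time(40) * 1e9:.1f} ns")

# Timing resolution implied by the ~0.98 in (~25 mm) distance resolution
print(f"25 mm resolution needs ~{round_trip_time(0.025) * 1e12:.0f} ps timing")
```

A 40 m target returns in under 270 ns, and resolving ~25 mm steps requires timing photons to roughly 170 ps, which illustrates why SPAD-based timestamping is central to this class of sensor.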



Thursday, April 17, 2025

Conference List - October 2025

ASNT Annual Conference - 6-9 October 2025 - Orlando, Florida, USA - Website

Scientific Detector Workshop - 6-10 October 2025 - Canberra, Australia - Website

AutoSens Europe - 7-9 October 2025 - Barcelona, Spain - Website

SPIE/COS Photonics Asia - 12-14 October 2025 - Beijing, China - Website

BioPhotonics Conference - 14-16 October 2025 - Online - Website 

IEEE Sensors Conference - 19-22 October 2025 - Vancouver, British Columbia, Canada - Website 

Optica Laser Congress and Exhibition - 19-23 October 2025 - Prague, Czech Republic - Website

OPTO Taiwan - 22-24 October 2025 - Taipei, Taiwan - Website

Image Sensors Asia - 30-31 October 2025 - Seoul, South Korea - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index