Friday, May 09, 2025

Counterpoint Research's CIS report

Global Smartphone CIS Shipments Climb 2% YoY in 2024

Samsung is no longer in the top-3 smartphone CIS suppliers.


  • Global smartphone image sensor shipments rose 2% YoY to 4.4 billion units in 2024.
  • Meanwhile, the average number of cameras per smartphone declined further to 3.7 units in 2024 from 3.8 units in 2023.
  • Sony maintained its leading position, followed by GalaxyCore in second place and OmniVision in third.
  • Global smartphone image sensor shipments are expected to fall slightly YoY in 2025.

 

https://www.counterpointresearch.com/insight/post-insight-research-notes-blogs-global-smartphone-cis-shipments-climbs-2-yoy-in-2024/

Wednesday, May 07, 2025

IS&T EI 2025 plenary talk on imaging and AI


 

This plenary presentation was delivered at the Electronic Imaging Symposium held in Burlingame, CA, 2-6 February 2025. For more information see: http://www.electronicimaging.org

Title: Imaging in the Age of Artificial Intelligence

Abstract: AI is revolutionizing imaging, transforming how we capture, enhance, and experience visual content. Advancements in machine learning are giving mobile phones far better cameras, enabling capabilities like enhanced zoom, state-of-the-art noise reduction, and blur mitigation, as well as post-capture capabilities such as intelligent curation and editing of your photo collections, directly on device.
This talk will delve into some of these breakthroughs, and describe a few of the latest research directions that are pushing the boundaries of image restoration and generation, pointing to a future where AI empowers us to better capture, create, and interact with visual content in unprecedented ways.

Speaker: Peyman Milanfar, Distinguished Scientist, Google (United States)

Biography: Peyman Milanfar is a Distinguished Scientist at Google, where he leads the Computational Imaging team. Prior to this, he was a Professor of Electrical Engineering at UC Santa Cruz for 15 years, two of those as Associate Dean for Research. From 2012-2014 he was on leave at Google-x, where he helped develop the imaging pipeline for Google Glass. Over the last decade, Peyman's team at Google has developed several core imaging technologies that are used in many products. Among these are the zoom pipeline for the Pixel phones, which includes the multi-frame super-resolution ("Super Res Zoom") pipeline, and several generations of state-of-the-art digital upscaling algorithms. Most recently, his team led the development of the "Photo Unblur" feature launched in Google Photos for Pixel devices.
Peyman received his undergraduate education in electrical engineering and mathematics from UC Berkeley and his MS and PhD in electrical engineering from MIT. He holds more than two dozen patents and founded MotionDSP, which was acquired by Cubic Inc. Along with his students and colleagues, he has won multiple best paper awards for introducing kernel regression in imaging, the RAISR upscaling algorithm, NIMA: neural image quality assessment, and Regularization by Denoising (RED). He's been a Distinguished Lecturer of the IEEE Signal Processing Society and is a Fellow of IEEE "for contributions to inverse problems and super-resolution in imaging".

Monday, May 05, 2025

Brillnics mono-IR global shutter sensor

Miyauchi et al. from Brillnics Inc., Japan published a paper titled "A 3.96-μm, 124-dB Dynamic-Range, Digital-Pixel Sensor With Triple- and Single-Quantization Operations for Monochrome and Near-Infrared Dual-Channel Global Shutter Operation" in IEEE JSSC (May 2025).

Abstract: This article presents a 3.96-μm, 640×640-pixel stacked digital pixel sensor capable of capturing co-located monochrome (MONO) and near-infrared (NIR) frames simultaneously in a dual-channel global shutter (GS) operation. A super-pixel structure is proposed with diagonally arranged 2×2 MONO and NIR sub-pixels. To enhance visible light sensitivity, large and small non-uniform micro-lenses are formed on the MONO and NIR sub-pixels, respectively. Each floating diffusion (FD) shared super-pixel is connected to an in-pixel analog-to-digital converter and two banks of 10-bit static random access memories (SRAMs) to enable the dual-channel GS operation. To achieve high dynamic range (DR) in the MONO channel, a triple-quantization (3Q) operation is performed. Furthermore, a single-channel digital-correlated double sampling (D-CDS) 3Q operation is implemented. The fabricated sensor achieved 6.2-mW low power consumption at 30 frames/s with dual-channel capture. The MONO channel achieved 124-dB DR in the 3Q operation and 60 dB for the NIR channel. The sensor fits the stringent form-factor requirement of an augmented reality headset by consolidating MONO and NIR imaging capabilities.
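As a quick back-of-the-envelope check on what a 124-dB figure implies, sensor dynamic range in dB is conventionally 20·log10 of the ratio between the largest and smallest detectable signal. A minimal sketch (illustrative numbers only, not from the paper):

```python
import math

def dynamic_range_db(max_signal_e: float, noise_floor_e: float) -> float:
    """Dynamic range in dB: 20*log10 of max signal over noise floor (in electrons)."""
    return 20.0 * math.log10(max_signal_e / noise_floor_e)

# Illustrative: a 124-dB DR corresponds to a signal-to-floor ratio of
# 10**(124/20), i.e. roughly 1.6 million to one.
ratio = 10 ** (124 / 20)
print(f"ratio ≈ {ratio:.2e}")
```

This is why single-exposure pixels rarely reach such figures and schemes like the paper's triple-quantization operation are used to extend the range.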

Open access link: https://ieeexplore.ieee.org/document/10706075 

Concept of HDR dual-channel GS operation.
 
 

Pixel level co-located MONO and NIR sub-pixels.

 

Sub-pixel and SRAM-bank usage. (a) Dual-channel operation. (b) Single-channel digital-CDS operation.

Fabricated chip. (a) Chip micrograph. (b) Chip top-level block diagram.

 

Photo-response and SNR curves of digital-CDS operation (after linearization).

 

Sample images captured by dual-channel operation. (a) MONO frame (HDR image). (b) NIR frame (2× gain applied for visualization).

Thursday, May 01, 2025

Sony SSSpeculations

Several news sources are repeating a Bloomberg report that Sony is considering partially spinning off its semiconductor business.

https://finance.yahoo.com/news/sony-reportedly-mulling-semiconductor-unit-155046940.html 

Sony Group is contemplating a spinoff of its semiconductor unit, a move that could see Sony Semiconductor Solutions become an independent entity as early as this year, reports Bloomberg. The move, which is still under discussion, is part of the group's strategy to streamline business operations and concentrate on its core entertainment sector. The potential spinoff would involve distributing most of Sony's holdings in the chip business to its shareholders while retaining a minority stake.

https://www.trendforce.com/news/2025/04/29/news-sony-reportedly-mulls-chip-division-spinoff-and-listing-to-strengthen-entertainment-focus/ 

According to Bloomberg, sources indicate that Sony Group is weighing the spin-off of its semiconductor subsidiary, Sony Semiconductor Solutions, with an IPO potentially taking place as early as this year. Another report from Bloomberg adds that the move would mark the PlayStation maker’s latest step in streamlining its operations and strengthening its focus on entertainment. As noted by the report, sources indicate that Sony is exploring a “partial spin-off” structure, under which the parent company would retain a stake in the subsidiary.

Wednesday, April 30, 2025

Paper on pixel reverse engineering technique

In an arXiv preprint titled "Multi-Length-Scale Dopants Analysis of an Image Sensor via Focused Ion Beam-Secondary Ion Mass Spectrometry and Atom Probe Tomography", Guerguis et al. write:

The following article presents a multi-length-scale characterization approach for investigating doping chemistry and spatial distributions within semiconductors, as demonstrated using a state-of-the-art CMOS image sensor. With an intricate structural layout and varying doping types/concentration levels, this device is representative of the current challenges faced in measuring dopants within confined volumes using conventional techniques. Focused ion beam-secondary ion mass spectrometry is applied to produce large-area compositional maps with a sub-20 nm resolution, while atom probe tomography is used to extract atomic-scale quantitative dopant profiles. Leveraging the complementary capabilities of the two methods, this workflow is shown to be an effective approach for resolving nano- and micro-scale dopant information, crucial for optimizing the performance and reliability of advanced semiconductor devices.

Preprint: https://arxiv.org/pdf/2501.08980 


Monday, April 28, 2025

Lecture on fundamentals of CMOS image sensors

 The Fundamentals of CMOS Image Sensors with Richard Crisp 


This video provides a sneak peek of "CMOS Image Sensors: Technology, Applications, and Camera Design Methodology," an SPIE course taught by imaging systems expert Richard Crisp. The course covers everything from the basics of photon capture to sensor architecture and real-world system implementation.
The preview highlights key differences between CCD and CMOS image sensors, delves into common sensor architectures such as rolling shutter and global shutter, and explains the distinction between frontside and backside illumination.
It also introduces the primary noise sources in image sensors and how they can be managed through design and optimization techniques such as photon transfer analysis and MTF assessment.
You'll also see how the course approaches imaging system design using a top-down methodology. This includes considerations regarding pixel architecture, optics, frame rate, and data bandwidth, all demonstrated through practical examples, such as a networked video camera design.
Whether you're an engineer, scientist, or technical manager working with imaging systems, this course is designed to help you better understand the technology behind modern CMOS image sensors and how to make informed design choices. Enjoy!
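One of the techniques the course mentions, photon transfer analysis, can be sketched in a few lines. In the shot-noise-limited regime the temporal variance of a flat-field signal (in DN²) grows linearly with its mean (in DN), and the slope of that line is the gain in DN/e-; its inverse is the conversion gain in e-/DN. A minimal sketch with synthetic data (function name and numbers are illustrative, not from the course):

```python
def conversion_gain_e_per_dn(means_dn, variances_dn):
    """Estimate conversion gain (e-/DN) from flat-field mean/variance pairs.
    Fits a least-squares line through the origin of variance vs. mean;
    valid only in the shot-noise-limited region of the photon transfer curve."""
    slope = sum(m * v for m, v in zip(means_dn, variances_dn)) / sum(
        m * m for m in means_dn
    )
    return 1.0 / slope

# Synthetic measurements lying exactly on a shot-noise line with slope
# 0.5 DN/e-, i.e. a conversion gain of 2 e-/DN:
means = [100.0, 400.0, 1600.0]
variances = [m * 0.5 for m in means]
print(conversion_gain_e_per_dn(means, variances))  # → 2.0
```

Real measurements would also subtract read noise and exclude the saturation roll-off before fitting; this sketch assumes those steps are already done.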

Friday, April 25, 2025

3D effects in time-delay integration sensor pixels

Guo et al. from Changchun Institute of Optics, University of Chinese Academy of Sciences, and Gpixel Inc. published a paper titled "Study on 3D Effects on Small Time Delay Integration Image Sensor Pixels" in Sensors.

Abstract: This paper demonstrates the impact of 3D effects on performance parameters in small-sized Time Delay Integration (TDI) image sensor pixels. In this paper, 2D and 3D simulation models of 3.5 μm × 3.5 μm small-sized TDI pixels were constructed, utilizing a three-phase pixel structure integrated with a lateral anti-blooming structure. The simulation experiments reveal the limitations of traditional 2D pixel simulation models by comparing the 2D and 3D structure simulation results. This research validates the influence of the 3D effects on the barrier height of the anti-blooming structure and the full well potential and proposes methods to optimize the full well potential and the operating voltage of the anti-blooming structure. To verify the simulation results, test chips with pixel sizes of 3.5 μm × 3.5 μm and 7.0 μm × 7.0 μm were designed and manufactured based on a 90 nm CCD-in-CMOS process. The measurement results of the test chips matched the simulation data closely and demonstrated excellent performance: the 3.5 μm × 3.5 μm pixel achieved a full well capacity of 9 ke- while maintaining a charge transfer efficiency of over 0.99998.

Paper link [open access]: https://www.mdpi.com/1424-8220/25/7/1953
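To put the reported charge transfer efficiency in context: in a TDI sensor the signal packet undergoes many sequential transfers, and the fraction retained decays as CTE raised to the number of transfers. A small sketch (the 1000-transfer count is illustrative, not from the paper):

```python
def charge_retained(cte: float, n_transfers: int) -> float:
    """Fraction of signal charge remaining after n sequential transfers,
    each with charge transfer efficiency `cte`."""
    return cte ** n_transfers

# With the reported CTE of 0.99998, even 1000 transfers retain ~98%
# of the original charge packet.
print(charge_retained(0.99998, 1000))
```

This exponential sensitivity is why CTE must be held extremely close to 1 for small TDI pixels, where many integration stages multiply the transfer count.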

Hamamatsu SPAD tutorial

 SPAD and SPAD Arrays: Theory, Practice, and Applications

 

The video is a comprehensive webinar on Single Photon Avalanche Diodes (SPADs) and SPAD arrays, addressing their theory, applications, and recent advancements. It is led by experts from the New Jersey Institute of Technology and Hamamatsu, discussing technical fundamentals, challenges, and innovative solutions to improve the performance of SPAD devices. Key applications highlighted include fluorescence lifetime imaging, remote gas sensing, quantum key distribution, and 3D radiation detection, showcasing SPAD's unique ability to timestamp events and enhance photon detection efficiency.

Wednesday, April 23, 2025

Speculation about Samsung exiting CIS business?

A recent speculative news article suggests that Samsung is weighing an exit from the CIS business, following SK Hynix's recent exit.

News source: https://www.digitimes.com/news/a20250312PD213/cis-samsung-sk-hynix-business-lsi.html

SK Hynix is shutting down its CMOS image sensor (CIS) business, fueling industry speculation over whether Samsung Electronics will follow suit. Samsung's system LSI division, which oversees its CIS operations, is undergoing an operational diagnosis...

Monday, April 21, 2025

ICCP 2024 Keynote on Event Cameras

 

In this keynote, held at the 2024 International Conference on Computational Photography, Prof. Davide Scaramuzza from the University of Zurich presents event cameras: bio-inspired vision sensors that outperform conventional cameras with ultra-low latency, high dynamic range, and minimal power consumption. He dives into the motivation behind event-based cameras, explains how these sensors work, and explores their mathematical modeling and processing frameworks. He highlights cutting-edge applications across computer vision, robotics, autonomous vehicles, virtual reality, and mobile devices while also addressing the open challenges and future directions shaping this exciting field.
00:00 - Why event cameras matter to robotics and computer vision

07:24 - Bandwidth-latency tradeoff
08:24 - Working principle of the event camera
10:50 - Who sells event cameras
12:27 - Relation between event cameras and the biological eye
13:19 - Mathematical model of the event camera
15:35 - Image reconstruction from events
18:32 - A simple optical-flow algorithm
20:20 - How to process events in general
21:28 - 1st order approximation of the event generation model
23:56 - Application 1: Event-based feature tracking
25:03 - Application 2: Ultimate SLAM
26:30 - Application 3: Autonomous navigation in low light
27:38 - Application 4: Keeping drones flying when a rotor fails
31:06 - Contrast maximization for event cameras
34:14 - Application 1: Video stabilization
35:16 - Application 2: Motion segmentation
36:32 - Application 3: Dodging dynamic objects
38:57 - Application 4: Catching dynamic objects
39:41 - Application 5: High-speed inspection at Boeing and Strata
41:33 - Combining events and RGB cameras and how to apply deep learning
45:18 - Application 1: Slow-motion video
48:34 - Application 2: Video deblurring
49:45 - Application 3: Advanced Driving Assistant Systems
56:34 - History and future of event cameras
58:42 - Reading material and Q&A
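The event generation model discussed in the talk (13:19) is commonly idealized as follows: a pixel emits an event, with positive or negative polarity, whenever its log-intensity has changed by more than a contrast threshold C since the last event. A minimal sketch of that model (constant threshold, no noise or refractory period; function name and values are illustrative):

```python
import math

def generate_events(intensities, contrast_threshold=0.3):
    """Idealized event-camera model: emit (time, polarity) whenever the
    pixel's log-intensity moves by >= contrast_threshold from the
    reference level set at the previous event."""
    events = []
    log_ref = math.log(intensities[0])
    for t, intensity in enumerate(intensities[1:], start=1):
        log_i = math.log(intensity)
        # A large step can trigger several events at the same timestamp.
        while abs(log_i - log_ref) >= contrast_threshold:
            polarity = 1 if log_i > log_ref else -1
            log_ref += polarity * contrast_threshold
            events.append((t, polarity))
    return events

# A brightness ramp produces a stream of positive-polarity events:
print(generate_events([1.0, 1.5, 2.5, 2.0]))
```

This captures why event cameras are data-driven rather than frame-driven: static scenes generate no output, and fast intensity changes generate dense, precisely timestamped event streams.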