Wednesday, December 03, 2025

A-SSCC Circuit Insights CMOS Image Sensor


A-SSCC 2025 - Circuit Insights #4: Introduction to CMOS Image Sensors - Prof. Chih-Cheng Hsieh

About Circuit Insights: Circuit Insights features internationally renowned researchers in circuit design, who will deliver engaging and accessible lectures on fundamental circuit concepts and diverse application areas, tailored to a level suitable for senior undergraduate students and early graduate students. The event will provide a valuable and inspiring opportunity for those who are considering or pursuing a career in circuit design.

About the Presenter: Chih-Cheng Hsieh received the B.S., M.S., and Ph.D. degrees from the Department of Electronics Engineering, National Chiao Tung University, Hsinchu, Taiwan, in 1990, 1991, and 1997, respectively. From 1999 to 2007, he was with Pixart Imaging Inc., an IC design house in Hsinchu, where he led the Mixed-Mode IC Department as a Senior Manager and was involved in the development of CMOS image sensor ICs for PC, consumer, and mobile phone applications. In 2007, he joined the Department of Electrical Engineering, National Tsing Hua University, Hsinchu, where he is currently a Full Professor. His current research interests include low-voltage low-power smart CMOS image sensor ICs, ADCs, and mixed-mode IC development for artificial intelligence (AI), Internet of Things (IoT), biomedical, space, robot, and customized applications. Dr. Hsieh serves as a TPC member of ISSCC and A-SSCC, and as an Associate Editor of IEEE Solid-State Circuits Letters (SSC-L) and IEEE Circuits and Systems Magazine (CASM). He was the SSCS Taipei Chapter Chair and the Student Branch Counselor of NTHU, Taiwan.

Monday, December 01, 2025

Time-mode CIS paper

In a recent paper titled "An Extended Time-Mode Digital Pixel CMOS Image Sensor for IoT Applications," Kim et al. from Yonsei University write:

Time-mode digital pixel sensors have several advantages in Internet-of-Things applications, which require a compact circuit and low-power operation under poorly illuminated environments. Although the time-mode digitization technique can theoretically achieve a wide dynamic range by overcoming the supply voltage limitation, its practical dynamic range is limited by the maximum clock frequency and device leakage. This study proposes an extended time-mode digitization technique and a low-leakage pixel circuit to accommodate a wide range of light intensities with a small number of digital bits. The prototype sensor was fabricated in a 0.18 μm standard CMOS process, and the measurement results demonstrate its capability to accommodate a 0.03 lx minimum light intensity, providing a dynamic range figure-of-merit of 1.6 and a power figure-of-merit of 37 pJ/frame·pixel. 
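For readers new to the technique, here is a minimal sketch of the ramp-style time-mode digitization idea the abstract refers to (our illustration in Python, not the authors' circuit; all component values are made up): the photocurrent discharges an integration node, and the output code is the number of clock cycles until the node crosses a reference, so dim scenes saturate the counter, which is exactly the dynamic-range limit the proposed E-TMD extends.

    # Minimal sketch of ramp-down time-mode digitization (TMD).
    # Illustration only, not the authors' circuit; values are made up.

    def tmd_code(photocurrent_a, c_int_f=10e-15, v_swing_v=1.0,
                 t_ck_s=1e-6, n_bits=6):
        """Digital code = clock cycles until the integration node,
        discharged by the photocurrent, crosses the reference."""
        max_code = 2 ** n_bits - 1
        if photocurrent_a <= 0:
            return max_code                          # never crosses: saturate
        t_cross_s = c_int_f * v_swing_v / photocurrent_a   # t = C * V / I
        return min(int(t_cross_s / t_ck_s), max_code)      # counter clips

    # Bright light crosses quickly (small code); dim light clips at 2^N - 1,
    # the limit that the paper's extended (E-TMD) scheme works around.
    for i_ph in (1e-9, 1e-11, 1e-13):                # photocurrents in amps
        print(f"{i_ph:.0e} A -> code {tmd_code(i_ph)}")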

 



Figure 1. Operation principle of conventional CISs: (a) voltage mode; (b) fixed reference; and (c) ramp-down TMD.
Figure 2. Theoretical photo-transfer curve of conventional 6-bit TMDs.
Figure 3. The operation principle of the proposed E-TMD technique.
Figure 4. Theoretical photo-transfer curve of the proposed E-TMD: (a) TS = TU = TD = 2000tCK, Δ = 0; (b) TS = TU = TD = 100tCK, Δ = 0; (c) TS = TU = 0, TD = 45tCK, Δ = 0; and (d) TS = 0, TU = 25tCK, TD = 45tCK, Δ = 0.7.
Figure 5. The conventional time-mode digital pixel CIS adapted from [11]: (a) architecture; (b) pixel schematic diagram.
Figure 6. Architecture and schematic diagram of the proposed time-mode digital pixel CIS.
Figure 7. Operation of the proposed time-mode digital pixel CIS with α representing VDD-vREF-VT: (a) six operation phases and (b) timing diagram.
Figure 8. Transistor-level simulated photo-transfer curve comparison.

Figure 9. Chip micrograph.

 

Figure 10. Captured sample images: (a) 190 lx, TS = 17 ms, tCK = 50 µs; (b) 1.9 lx, TS = 400 ms, tCK = 2 µs.
Figure 11. Captured sample images and their histograms: (a) 20.5 lx, TS = 32.6 ms; (b) 200.6 lx, TS = 4.6 ms; (c) 2106 lx, TS = 0.64 ms; (d) 2106 lx, TS = 0.64 ms, TU = 0.74 ms, TD = 1.84 ms, Δ = 0.5.

Thursday, November 27, 2025

ISSCC 2026 Image Sensors session

ISSCC 2026 will be held Feb 15-19, 2026 in San Francisco, CA.

The advance program is now available: https://submissions.mirasmart.com/ISSCC2026/PDF/ISSCC2026AdvanceProgram.pdf 

Session 7 Image Sensors and Ranging (Feb 16)

Session Chair: Augusto Ximenes, CogniSea, Seattle, WA
Session Co-Chair: Andreas Suess, Google, Mountain View, CA

54×42 LiDAR 3D-Stacked System-On-Chip with On-Chip Point Cloud Processing and Hybrid On-Chip/Package-Embedded 25V Boost Generation

VoxCAD: A 0.82-to-81.0mW Intelligent 3D-Perception dToF SoC with Sector-Wise Voxelization and High-Density Tri-Mode eDRAM CIM Macro

A Multi-Range, Multi-Resolution LiDAR Sensor with 2,880-Channel Modular Survival Histogramming TDC and Delay Compensation Using Double Histogram Sampling

A 480×320 CMOS LiDAR Sensor with Tapering 1-Step Histogramming TDCs and Sub-Pixel Echo Resolvers

A 26.0mW 30fps 400x300-pixel SWIR Ge-SPAD dToF Range Sensor with Programmable Macro-Pixels and Integrated Histogram Processing for Low-Power AR/VR Applications

A 128×96 Multimodal Flash LiDAR SPAD Imager with Object Segmentation Latency of 18μs Based on Compute-Near-Sensor Ising Annealing Machine

A Fully Reconfigurable Hybrid SPAD Vision Sensor with 134dB Dynamic Range Using Time-Coded Dual Exposures

A 55nm Intelligent Vision SoC Achieving 346TOPS/W System Efficiency via Fully Analog Sensing-to-Inference Pipeline

A 1.09e- Random-Noise 1.5μm-Pixel-Pitch 12MP Global-Shutter-Equivalent CMOS Image Sensor with 3μm Digital Pixels Using Quad-Phase-Staggered Zigzag Readout and Motion Compensation

A 200MP 0.61μm-Pixel-Pitch CMOS Imager with Sub-1e- Readout Noise Using Interlaced-Shared Transistor Architecture and On-Chip Motion Artifact-Free HDR Synthesis for 8K Video Applications

Tuesday, November 25, 2025

Ubicept releases toolkit for SPAD and CIS

Ubicept Extends Availability of Perception Technology to Make Autonomous Systems Using Conventional Cameras More Reliable

Computer vision processing unlocks higher quality, more trustworthy visual data for machines whether they use advanced sensors from Pi Imaging Technology or conventional vision systems

BOSTON--(BUSINESS WIRE)--Ubicept, the computer vision startup operating at the limits of physics, today announced the release of the Ubicept Toolkit, which will bring its physics-based imaging to any modern vision system. Whether for single-photon avalanche diode (SPAD) sensors in next-generation vision systems or immediate image quality improvements with existing hardware, Ubicept provides a unified, physics-based approach that delivers high quality, trustworthy data.

“Ubicept’s technology revolutionizes how machines see the world by unlocking the full potential of today's and tomorrow's image sensors. Our physics-based approach captures the full complexity of motion, even in low-light or high-dynamic-range conditions, providing more trustworthy data than AI-based video enhancement,” said Sebastian Bauer, CEO of Ubicept. “With the Ubicept Toolkit, we’re now making our advanced single-photon imaging more accessible for a broad range of applications from robotics to automotive to industrial sensing.”

Ubicept’s solution is designed for the most advanced sensors to maximize image data quality and reliability. Now, the Toolkit will support any widely available CMOS camera with raw uncompressed output, giving perception developers immediate quality gains.

“Autonomous systems need a better way to understand the world. Our mission is to turn raw photon data into outputs that are specifically designed for computer vision, not human consumption,” said Tristan Swedish, CTO of Ubicept. “By making our technology available for more conventional vision systems, we are giving engineers the opportunity to experience the boost in reliability now while creating an easier pathway to SPAD sensor adoption.”

SPAD sensors – traditionally used in 3D systems – are poised to reshape the image sensor and computer vision landscape. While the CMOS sensor market is projected to grow to $30B by 2029 at 7.5% CAGR, the SPAD market is growing nearly three times faster, expected to reach $2.55B by 2029 at 20.1% CAGR.
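A quick sanity check on the "nearly three times faster" claim (our arithmetic, not from the press release):

    # Ratio of the quoted growth rates (our check, not Ubicept's figure).
    cmos_cagr, spad_cagr = 0.075, 0.201
    print(f"{spad_cagr / cmos_cagr:.1f}x")   # ~2.7x, i.e. nearly three times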

Pi Imaging Technology is a leader in the field with its SPAD Alpha, a next-generation 1-megapixel single-photon camera that delivers zero read noise, nanosecond-level exposure control, and frame rates up to 73,000 fps. Designed for demanding scientific applications, it offers researchers and developers extreme temporal precision and light sensitivity. The Ubicept Toolkit builds on these strengths by transforming the SPAD Alpha’s raw photon data into clear, ready-to-use imagery for perception and analysis.

“Ubicept shares our deep commitment to advancing perception technology,” said Michel Antolović, CEO of Pi Imaging Technology. “By combining our SPAD Alpha’s state-of-the-art hardware with Ubicept’s real-time processing, perception engineers can get the most from what single-photon imaging has to offer.”

The Toolkit provides engineering teams with everything they need to visualize, capture, and process video data efficiently with the Ubicept Photon Fusion (UPF) algorithm. The SPAD Toolkit also includes Ubicept’s FLARE (Flexible Light Acquisition and Representation Engine) firmware for optimized photon capture. In addition, the Toolkit includes white-glove support for early adopters, providing a highly personalized and premium experience.

The Ubicept Toolkit will be available in December 2025. To learn how it can elevate perception performance and integrate into existing workflows, contact Ubicept here.

Monday, November 24, 2025

Job Postings - Week of November 23 2025


ByteDance

Image Sensor Digital Design Lead - Pico

San Jose, California, USA

Link

STMicroelectronics

Silicon Photonics Product Development Engineer

Grenoble, France

Link

DigitalFish

Senior Systems Engineer, Cameras/Imaging

Sunnyvale, California, USA [Remote]

Link

Imasenic

Digital IC Design Engineer

Barcelona, Spain

Link

Meta

Technical Program Manager, Camera Systems

Sunnyvale, California, USA

Link

Westlake University

Ph.D. Positions in Dark Matter & Neutrino Experiments

Hangzhou, Zhejiang, China

Link

General Motors

Advanced Optical Sensor Test Engineer

Warren, Michigan, USA [Hybrid]

Link

INFN

Post-Doc senior research grant in experimental physics

Frascati, Italy

Link

Northrop Grumman

Staff EO/IR Portfolio Technical Lead

Melbourne, Florida, USA

Link

Friday, November 21, 2025

"Camemaker" image sensors search tool

An avid reader of the blog shared this handy little search tool for image sensors: 

https://www.camemaker.com/shop

Although it isn't comprehensive (it only covers a few companies), you can filter by various sensor specs. Try it out!

Monday, November 17, 2025

Event cameras: applications and challenges

Gregor Lenz (roboticist and cofounder of Open Neuromorphic and Neurobus) has written a two-part blog post that readers of ISW might find enlightening:

https://lenzgregor.com/posts/event-cameras-2025-part1/

https://lenzgregor.com/posts/event-cameras-2025-part2/ 

Gregor goes into various application domains where event cameras have been tried but have faced challenges, technical and otherwise.

Wide adoption will depend less and less on technical merit and more on how well the new sensor modality fits into existing pipelines for X, where X can be supply chain, hardware, software, manufacturing, assembly, testing, ... pick your favorite!

Saturday, November 15, 2025

Conference List - May 2026

Quantum Photonics Conference, Networking and Trade Exhibition - 5-6 May 2026 - Erfurt, Germany - Website

Sensors Converge - 5-7 May 2026 - Santa Clara, California, USA -  Website

LOPS 2026 - 8-9 May 2026 - Chicago, Illinois, USA - Website

Embedded Vision Summit - 11-13 May 2026 - Santa Clara, California, USA - Website

CLEO - Congress on Lasers and Electro-Optics - 17-20 May 2026 - Charlotte, North Carolina, USA 

IEEE International Symposium on Robotic and Sensors Environments - 18-19 May 2026 - Norfolk, Virginia, USA - Website

IEEE International Symposium on Integrated Circuits and Systems - 24-27 May 2026 - Shanghai, China - Website

ALLSENSORS 2026 - 24-28 May 2026 - Venice, Italy - Website

Robotics Summit and Expo - 27-28 May 2026 - Boston, Massachusetts, USA - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Thursday, November 13, 2025

Metalenz announces face ID solution

Metalenz and UMC Bring Breakthrough Face Authentication Solution Polar ID to Mass Production

Boston, MA and Hsinchu, TAIWAN, November 12, 2025 - Metalenz, the leader in metasurface innovation and commercialization, and United Microelectronics Corporation (“UMC”; NYSE: UMC, TWSE: 2303), a leading global semiconductor foundry, today announced that Metalenz’s breakthrough face authentication solution, Polar ID, is now ready for mass production through UMC.

Polar ID is a compact, polarization-based biometric solution that leverages Metalenz’s metasurface technology to bring payment-grade security and advanced sensing capabilities to any device, even the most challenging of form factors. Using a polarization sensitive meta-optic and advanced algorithms, Polar ID extracts additional information sets such as material and contour information to provide secure face authentication in a single image, dramatically reducing cost and complexity over existing secure face unlock solutions.

Metalenz has already demonstrated the product, featuring a polarization-sensitive meta-optic directly integrated onto an image sensor, on a smartphone reference platform powered by Snapdragon® mobile processors. UMC manufactures the meta-optic layer using its 40nm process and achieves sensor integration utilizing its wafer-on-wafer bonding technology. Leveraging UMC’s 300mm wafer manufacturing capabilities, as well as the qualification of this supply chain, Metalenz is ready to ramp into volume, positioning Polar ID for widespread adoption across consumer electronics, mobile, and IoT platforms.

“By combining our metasurface innovation with UMC’s manufacturing scale and process maturity, Polar ID is ready to meet the demands of high-volume consumer electronics, and to bring secure, affordable face authentication to billions of devices,” said Rob Devlin, CEO and Co-Founder of Metalenz. “Metalenz is the critical enabler of the metasurface market. With the first generation of our technology already at work in the market replacing lens stacks in existing sensing solutions, we are now leveraging the unique capabilities of our technology to bring new forms of sensing to mass markets for the first time. With demand for secure and convenient biometrics rapidly expanding across consumer devices and IoT, Polar ID delivers secure face authentication in the smallest, simplest form factor, making advanced sensing accessible beyond premium tiers and in places it wasn’t previously possible.”

“Our state-of-the-art 12-inch facilities and comprehensive portfolio of semiconductor manufacturing process technologies have made us the foundry partner of choice for some of the most advanced fabless semiconductor companies in the world. We have worked with Metalenz on commercializing their metasurface technology since 2021, and we are pleased to be their key manufacturing partner to support the high-volume production of next-generation polarization imaging modules,” said Steven Hsu, Vice President of Technology Development, UMC. “This collaboration will enable UMC to expand our offering into sensor-integrated metasurfaces and play a pioneering role in delivering this disruptive imaging technology to market.”

Tuesday, November 11, 2025

Event-Driven Vision Summer School

Unfortunately the registration deadline has already passed, but I'm posting the program here if it's of interest to the blog readers.

https://edpr.iit.it/events/2026-evs

May 17th to May 23rd 2026
Hotel Punta San Martino, Arenzano (GE), Italy

Adoption of event-driven cameras in real-world applications is growing steadily, thanks to their low power, low latency, high temporal resolution, high dynamic range, and highly compressive encoding. The EVS school is the first event focused on teaching computer vision with event cameras. Its aim is to offer students in-depth knowledge of state-of-the-art methods for processing event-driven camera data and to teach them the practical skills required to develop their own applications.

Keynote Speakers

Davide Scaramuzza
University of Zurich, Zurich (Switzerland) 

Kynan Eng
SynSense, Zurich (Switzerland) 

Monday 18th
9:00 – 13:00 Lectures on fundamentals of Event-Driven Vision
14:00 – 18:00 Assignments on algorithmic approaches in Event-Driven Vision

Tuesday 19th
9:00 – 13:00 Lectures on algorithmic approaches in Event-Driven Vision
14:00 – 18:00 Assignments on algorithmic approaches in Event-Driven Vision
Application: optical flow tested on pan-tilt unit
21:30 – 22:30 Keynote, D. Scaramuzza

Wednesday 20th
9:00 – 13:00 Lectures on AI-based approaches for Event-Driven Vision
14:00 – 18:00 Assignments on AI-based approaches for Event-Driven Vision

Thursday 21st
9:00 – 13:00 Lectures on biologically inspired methods and implementation on neuromorphic hardware
14:00 – 18:00 Assignments on biologically inspired methods and implementation on neuromorphic hardware
21:30 – 22:30 Keynote, K. Eng

Friday 22nd
9:00 – 13:00 Lectures on event-based vision for robot control
14:00 – 18:00 Assignments on event-based vision for robot control
Application: closing the loop with robots

Thursday, October 30, 2025

Foveon X3 sensor update

Source: https://photorumors.com/2025/10/24/the-latest-updates-on-the-sigma-foveon-x3-sensor-with-111-technology/ 

Some updates from Foveon in a new video interview posted on YouTube:

  • Sigma is “still working on the development of the sensor” [17:00].
  • Current status: The project is still in the “technology development” stage [17:11]. They have not yet started the design of the actual, final sensor [17:11].
  • Focus: The team is currently working on the “design of the pixel architecture” [17:20].
  • Delays: The project has been “a little bit delayed” [17:30] because as they test prototype wafers, they encounter “technical issues” [17:53].
  • Development team: The sensor development is now being handled primarily by the Sigma Japan engineering team [18:02].
  • Path forward: Mr. Yamaki mentions that the technical problems “have been narrowing down” [18:12]. Once the team is confident that the technology is ready, they will start the final sensor design and move toward production [18:23]. 

Tuesday, October 28, 2025

Paper on flexible SWIR detector

Zhang et al from the National University of Singapore published a paper titled "Flexible InGaAs/InAlAs avalanche photodiodes for short-wave infrared detection" in Nature Communications. 
Abstract: 
Flexible detectors have gained growing research interest due to their promising applications in optical sensing and imaging systems with a broad field-of-view. However, most research has focused on conventional photodiodes, whose responsivity is limited at short-wave infrared due to the absence of internal multiplication gain. Here we have realized and demonstrated flexible thin-film InGaAs/InAlAs avalanche photodiodes on a mica substrate for short-wave infrared detection. This achievement was made possible by the development and implementation of a low-temperature bonding and well-optimized fabrication process. Our devices exhibit promising characteristics, including low dark current, good responsivity, and high multiplication gain. Even when subjected to bending conditions, the avalanche photodiodes maintain their general performance. The advent of such flexible InGaAs avalanche photodiodes with reliable and promising performance enables a significantly broader range of potential applications.
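For context on the gain figures quoted at 95% of Vbr, avalanche multiplication near breakdown is often described by the empirical Miller model (a textbook aside, not from the paper):

    % Empirical Miller model for avalanche multiplication gain, where
    % V_br is the breakdown voltage and n is a fitted, structure-dependent
    % exponent:
    \[
    M(V) = \frac{1}{1 - \left( V / V_{\mathrm{br}} \right)^{n}}
    \]
    % M rises steeply as V approaches V_br, which is why gain is reported
    % at a fixed fraction of V_br (here 95%).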


(a) Schematic of the proposed flexible InGaAs APD chip. (b) Schematic of the InGaAs/InAlAs APD featuring a separate-absorption-grading-charge-multiplication structure, with a metal layer serving as the bottom contact and a reflector. (c) Optical image of the fabricated flexible APDs under test.

(a) Comparison of the Jdark-V curves under flat and bent conditions, where the bending radius ranges from 5 to 1 cm. (b) Comparison of the Jtotal-V characteristics with an incident power of -39.6 dBm at the same bending radii. (c) Comparison of the dark current at 95% of Vbr under the same bending conditions. (d) Comparison of the breakdown voltage under the same bending conditions. (e) Comparison of the responsivity at unity gain at 1550 nm under the same bending conditions. (f) The multiplication gain at 95% of Vbr. The J-V curves show no significant changes under all flat and bent conditions.

Saturday, October 25, 2025

33rd IS&T Color and Imaging Conference Oct 27-31

https://imaging.org/IST/Conferences/CIC/CIC2025/CIC_Home.aspx

The 33rd Color and Imaging Conference will be held in Hong Kong, Monday 27 to Friday 31 October 2025, marking its first time in Asia.

CIC33 is organized around the following topics:

Color Perception and Cognition
Capture and Reproduction
Material and Color Appearance
Color in Illumination and Lighting
Color Theory
Image Quality
Multispectral Imaging
Specific Color Applications
Color in Computer Graphics
Color in Computer Vision
Motion Picture Imaging Pipeline 

Program Highlights 

KEYNOTE TALKS – Start Each Day Inspired
Oct 29
Mingxue Wang, Huawei Technologies
“Recent Development and Challenges of Smartphone Digital Imaging”
Oct 30
Shoji Tominaga, NTNU
“Colorimetry and Image Reproduction of Fluorescent Objects”
Oct 31
Hyeon-Jeong Suk, KAIST
“Skin Color in Culture and Technology”

EVENING TALK – Oct 29
Michael Freeman, Award-winning Photographer
“The Aesthetics of Imagery from the Real World”

Courses and Workshops

New Courses
SC01 Color Science Research and Application (Ronnier Luo and Minchen (Tommy) Wei)
SC02 Multimodal AI Essentials: Language, Vision, and Technical Use Cases (Orange Gao, Shida Yu, and Nanqi Gong, Amazon)
SC10 Camera Phone Image Quality (Jonathan B. Phillips)
SC12 Human Color Vision and Visual Processing and the Effects of Individual Differences (Andrew Stockman)
SC14 Color Grading for Photographers: From Perception to Practice (Marianna Santoni)
SC15 High Dynamic Range (HDR) Imaging: Capture, Standards, Display, and Color Management (Nicolas Bonnier, Paul Hubel, and Luke Wallis)

Hands-on Courses
SC02 Multimodal AI Essentials: Language, Vision, and Technical Use Cases (Orange Gao)
SC10 Camera Phone Image Quality (Jonathan B. Phillips)
SC11 Underwater Colorimetry (Derya Akkaynak)


Workshops
W1: Display Color Consistency and Individual Differences in Color Sensitivity (Francisco Imai and Shahram Peyvandi, convenors)
W2: Facial Appearance Measurement, Perception, and Applications (Yan Lu, convenor)
W3: New ICC Features in Real World Applications (Max Derhak, convenor)
W4: AR/MR/VR Color Perception and Rendering (Jiangtao Kuang and Kaida Xiao, convenors)

ST's new sensors for industrial automation, security, retail

https://newsroom.st.com/media-center/press-item.html/p4728.html

STMicroelectronics introduces a new family of 5MP CMOS image sensors: VD1943, VB1943, VD5943, and VB5943.

- Four new 5MP image sensors allow customers to optimize image capture for high speed and high detail with a single, flexible product instead of two chips
- New device family is ideal for high-speed automated manufacturing processes and object tracking
- New sensors leverage market-leading technology for both global and rolling shutter modes, with a compact 2.25 µm pixel, advanced 3D stacking, and on-chip RGB-IR separation

Dual global and rolling shutter modes
The sensors provide hybrid global and rolling shutter modes, allowing developers to optimize image capture for specific application requirements. This functionality ensures motion-artifact-free video capture (global shutter) and low-noise, high-detail imaging (rolling shutter), making it ideal for high-speed object tracking and automated manufacturing processes.

Compact design with advanced pixel technology
Using 2.25 µm pixel technology and advanced 3D stacking, the sensors deliver high image quality in a smaller footprint. The die size is 5.76 mm by 4.46 mm, with a package size of 10.3 mm by 8.9 mm, and an industry-leading 73% pixel-array-to-die surface ratio. This compact design enables integration into space-constrained embedded vision systems without compromising performance.

On-chip RGB-IR separation
The RGB-IR variants of the sensors feature on-chip RGB-IR separation, eliminating the need for additional components and simplifying system design. This capability supports multiple output patterns, including 5MP RGB-NIR 4×4, 5MP RGB Bayer, 1.27MP NIR subsampling, and 5MP NIR smart upscale, with independent exposure times and instant output pattern switching. This integration reduces costs while maintaining full 5MP resolution for both color and infrared imaging.
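A quick aside on where the 1.27MP figure plausibly comes from (our arithmetic; assumes the NIR pixels occupy one quarter of the 4×4 RGB-IR pattern):

    # NIR subsampling resolution if NIR occupies 1/4 of the 4x4 pattern
    # (our arithmetic; 5.07MP full resolution is an assumed exact value).
    full_mp = 5.07
    print(full_mp / 4)   # ~1.27MP, matching the quoted NIR subsampling mode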

Tuesday, October 21, 2025

[Video] Owl Thermal Imaging for Safer Streets

Smarter Cameras Imply Safer Streets? | Interview with Wade Appelman, Owl Autonomous Imaging

Pedestrian deaths at night are rising around the world, and traditional car sensors aren't enough to stop them. In this eye-opening interview, we speak with Wade Appelman, Chief Business Officer at Owl Autonomous Imaging, about how thermal imaging and AI are working together to change that.

We explore:

  •     Why pedestrian safety at night is now a global concern
  •     How thermal cameras and convolutional neural networks (CNNs) work in real-time to detect people in low light
  •     How these smart systems integrate with vehicle software using ROS
  •     The technology behind monocular vision and its use in autonomous driving
  •     How advanced camera sensors are built — from microbolometers to full-scale production
  •     Real-world testing results from Detroit and Las Vegas that show how thermal vision outperforms lidar, radar, and regular RGB cameras



Monday, October 20, 2025

Single-Photon Challenge image reconstruction competition is now open!

The Single-Photon Challenge, announced yesterday (Oct 19) at ICCV 2025, is a first-of-its-kind benchmark and open competition for photon-level imaging and reconstruction.

The competition is now open! The submission deadline is April 1, 2026 (AOE) and winners will be announced in summer 2026. 

The challenge provides access to single-photon datasets and a public leaderboard to benchmark algorithms for photon-efficient vision: https://SinglePhotonChallenge.com


For this image reconstruction challenge you will need to come up with novel and creative ways to transform many single-photon camera frames into a single high-quality image. This setting is very similar to traditional burst imaging, but taken to its extreme limit. Instead of a few burst images you have access to a thousand; the catch is that each input frame is extremely noisy.
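To give a flavor of what is involved, here is a minimal sketch of the most naive reconstruction (our illustration, not the challenge's baseline): average the binary frames, then invert the saturating photon-detection response to estimate per-pixel flux.

    import numpy as np

    # Naive single-photon burst reconstruction (illustration only).
    rng = np.random.default_rng(0)

    # Simulate 1000 binary SPAD frames of a toy scene; a pixel with flux
    # photons/exposure fires with probability p = 1 - exp(-flux).
    flux = np.linspace(0.05, 3.0, 64).reshape(8, 8)
    frames = rng.random((1000, 8, 8)) < (1 - np.exp(-flux))

    # Average the frames, then invert the response: flux = -ln(1 - p).
    p_hat = np.clip(frames.mean(axis=0), 1e-6, 1 - 1e-6)
    flux_hat = -np.log1p(-p_hat)

    print(f"mean abs error: {np.abs(flux_hat - flux).mean():.3f}")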

There are thousands of dollars in prizes to win, thanks to the sponsors Ubicept and Singular Photonics.

Thursday, October 16, 2025

Samsung announces 0.5um pixel

https://semiconductor.samsung.com/image-sensor/mobile-image-sensor/isocell-hp5/

Specifications:

Effective Resolution: 16,384 x 12,288 (200MP)
Pixel Size: 0.5 μm
Optical Format: 1/1.56"
Color Filter: Tetra²pixel RGB Bayer Pattern
Normal Frame Rate: 7.5 fps @ full, 30 fps @ 50MP, 90 fps @ 12.5MP
Video Frame Rate: 30 fps @ 8K, 120 fps @ 4K, 480 fps @ FHD (w/o AF)
Shutter Type: Electronic rolling shutter
ADC Accuracy: 10-bit
Supply Voltage: 2.2 V analog, 1.8 V I/O, 1.0 V digital core
Operating Temperature: -20℃ to +85℃
Interface: 4-lane (4.5 Gbps per lane) D-PHY / 3-trio (4 Gsps per trio) C-PHY
Chroma: Tetra²pixel
Autofocus: Super QPD (PDAF)
HDR: Smart-ISO Pro (iDCG), Staggered HDR
Output Formats: RAW8, RAW10, RAW12, RAW14
Analog Gain: 16x @ full, 256x @ 12.5MP
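As an aside, the headline numbers hang together; a quick back-of-the-envelope check (our arithmetic, using the common ~16 mm-per-inch optical-format convention):

    import math

    # Sanity-check the HP5 spec sheet (our arithmetic, not Samsung's).
    w_px, h_px, pitch_um = 16384, 12288, 0.5

    print(w_px * h_px / 1e6)             # ~201.3 megapixels -> "200MP"

    w_mm = w_px * pitch_um / 1000        # 8.192 mm active-array width
    h_mm = h_px * pitch_um / 1000        # 6.144 mm active-array height
    diag_mm = math.hypot(w_mm, h_mm)     # 10.24 mm diagonal

    # Optical format: roughly diagonal(mm) / 16 inches by convention.
    print(f'1/{16 / diag_mm:.2f}"')      # -> 1/1.56"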

Excerpt from Baidu news (translated with Google Translate):

Samsung releases ISOCELL HP5, the world's first 200MP image sensor with 0.5µm ultra-fine pixels

...  Samsung officially released the new 200-megapixel image sensor ISOCELL HP5, which is expected to debut in the telephoto camera of the OPPO Find X9 Pro smartphone.

... The ISOCELL HP5 sensor is 1/1.56 inches in optical format, has an ultra-high resolution of 16384 x 12288, and compresses the unit pixel size to 0.5 microns. It is Samsung's first image sensor in the world to pair 200MP resolution with 0.5µm ultra-fine pixels.

To overcome the challenges posed by small pixels, the ISOCELL HP5 integrates multiple cutting-edge technologies. Among them, dual vertical transfer gate (D-VTG) and front deep trench isolation (FDTI) technologies work together to effectively increase the full-well capacity of each pixel, that is, its ability to accommodate light signal.

Tuesday, October 14, 2025

Tower Semiconductor preprint on 2.2um global shutter pixel

Yokoyama et al. from Tower Semiconductor have posted a preprint titled "Charge Domain Type 2.2um BSI Global Shutter Pixel with Dual Depth DTI Produced by Thick-Film Epitaxial Process":

Abstract: We developed a 2.2um Backside Illuminated (BSI) Global Shutter (GS) pixel with true charge-domain Correlated Double Sampling (CDS). A thick-film epitaxial deep DTI (Deep Trench Isolation) process was implemented to enhance 1/PLS (Parasitic Light Sensitivity) using a dual depth DTI structure.
The thickness of the epitaxial substrate is 8.5 um. This structure was designed using optical simulation. By using a thick epitaxial substrate, it is possible to reduce the amount of light that reaches the memory node. Dual-depth DTI, which shallows the DTI depth on the readout side, makes it possible to read signals from the PD to the memory node smoothly. To achieve this structure, we developed a process for thick epitaxial substrates; the dual-depth DTI can be fabricated with a single mask. This newly developed pixel is the smallest charge-domain GS pixel to date. Despite its compact size, it achieves high QE (83%) and 1/PLS of over 10,000. The pixel maintains 80% of its peak QE at ±15 degrees, and 1/PLS is stable even when the F# is small.
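For readers new to the metric, 1/PLS measures how well the shielded memory node rejects stray light; a sketch of the usual definition (conventions vary slightly between papers):

    % Parasitic light sensitivity (PLS): ratio of the stray signal
    % collected by the shielded memory node to the photodiode signal
    % under the same illumination (definitions vary slightly).
    \[
    \mathrm{PLS} = \frac{S_{\mathrm{mem}}}{S_{\mathrm{PD}}},
    \qquad
    \frac{1}{\mathrm{PLS}} > 10{,}000
    \;\Longrightarrow\;
    \mathrm{PLS} < 10^{-4},
    \]
    \[
    \text{i.e. a shutter efficiency of } 1 - \mathrm{PLS} > 99.99\%.
    \]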

Full paper: https://sciprofiles.com/publication/view/7ae02d55ce8f3721ebfc8c35fb871d97 

Saturday, October 11, 2025

Conference List - April 2026

IEEE 23rd International Symposium on Biomedical Imaging - 8-11 April 2026 - London, UK - Website

SPIE Photonics Europe - 12-16 April 2026 - Strasbourg, France - Website

IEEE Silicon Photonics Conference - 13-15 April 2026 - Ottawa, Ontario, Canada - Website

IEEE Custom Integrated Circuits Conference - 19-22 April 2026 - Seattle, Washington, USA - Website

Compound Semiconductor International Conference - 20-22 April 2026 - Brussels. Belgium - Website

SPIE Defense + Security - 26-30 April 2026 - National Harbor, Maryland, USA - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Monday, October 06, 2025

Billion-pixel-resolution microscopy of curved surfaces

Recent Optical news article covers a publication by Yang et al. which presents a new technique for capturing high resolution microscopy images of curved surfaces. 

X. Yang, H. Chen, L. Kreiss, C.B. Cook, G. Kuczewski, M. Harfouche, M.O. Bohlen, R. Horstmeyer, “Curvature-adaptive gigapixel microscopy at submicron resolution and centimeter scale,” Opt. Lett., 50, 5977-5980 (2025).
DOI: 10.1364/OL.572466

New microscope captures large, high-resolution images of curved samples in single snapshot
Innovation promises faster insights for biology, medicine and industrial applications

Researchers have developed a new type of microscope that can acquire extremely large, high-resolution pictures of non-flat objects in a single snapshot. This innovation could speed up research and medical diagnostics or be useful in quality inspection applications.

“Although traditional microscopes assume the sample is perfectly flat, real-life samples such as tissue sections, plant samples or flexible materials may be curved, tilted or uneven,” said research team leader Roarke Horstmeyer from Duke University. “With our approach, it’s possible to adjust the focus across the sample, so that everything remains in focus even if the sample surface isn’t flat, while avoiding slow scanning or expensive special lenses.”

In the Optica Publishing Group journal Optics Letters, the researchers show that the microscope, which they call PANORAMA, can capture submicron details — 1/60 to 1/120 the diameter of a human hair — across an area roughly the size of a U.S. dime without moving the sample. It produces a detailed gigapixel-scale image, which has 10 to 50 times more pixels than the average smartphone camera image.
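A rough plausibility check of the dime-sized gigapixel claim (our arithmetic; the dime diameter and sampling pitches are assumptions):

    import math

    # Rough check that submicron sampling over a dime-sized field of view
    # implies gigapixel-scale images (values are assumptions).
    dime_diameter_mm = 17.9                    # US dime
    area_um2 = math.pi * (dime_diameter_mm * 1000 / 2) ** 2

    for pitch_um in (0.5, 1.0):                # assumed sampling pitches
        pixels = area_um2 / pitch_um ** 2
        print(f"{pitch_um} um pitch -> {pixels / 1e9:.1f} gigapixels")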

“This tool can be used wherever large-area, detailed imaging is needed. For instance, in medical pathology, it could scan entire tissue slides, such as those from a biopsy, at cellular resolution almost instantly,” said Haitao Chen, a doctoral student in Horstmeyer’s lab. “In materials science or industrial inspection, it could quickly inspect large surfaces such as a chip wafer at high detail.”


Friday, October 03, 2025

Webinar on metasurface optics design

Metasurface Optics for Information Processing and Computing
Presented by Shane Colburn
Thursday, October 9, 2025, 1:00 PM EDT


Optics has long played a central role in information processing, from early analog computing systems to modern optical imaging and communication platforms. Recent advancements in nanofabrication and wavefront control have enabled a new class of ultrathin optical elements known as metasurfaces, which significantly expand the design space for manipulating light. By tailoring local phase, amplitude, and polarization responses at subwavelength scales, metasurfaces offer a compact and highly controllable platform for performing complex transformations on optical wavefronts.

Metaoptics for optical information processing leverages co-design of optical elements and computational algorithms to perform operations typically handled in the digital domain. Metasurfaces can be engineered to modify the point spread function of imaging systems, enabling custom optical transformations that enhance task-specific performance. Convolutional metaoptics, in particular, allow spatial convolutions to be executed directly in the optical domain as part of a hybrid analog-digital pipeline. These approaches present opportunities for reducing latency and energy consumption in computational imaging and embedded vision systems. Key challenges remain in achieving robustness, scalability, and seamless integration with electronic hardware, motivating continued research at the intersection of optics, machine learning, and photonic device design.
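To make "spatial convolutions executed directly in the optical domain" concrete, here is a toy sketch (our illustration, assuming an ideal shift-invariant incoherent system): the captured image is the scene convolved with the system's point spread function, so engineering the PSF with a metasurface amounts to choosing the filter kernel.

    import numpy as np

    # Sketch: a shift-invariant imaging system convolves the scene with
    # its point spread function (PSF); a convolutional metaoptic engineers
    # that PSF so the filtering happens in the optics.
    # Note: a physical incoherent PSF is nonnegative, so kernels with
    # negative weights are typically realized with paired/differential
    # measurements; we ignore that here for brevity.
    rng = np.random.default_rng(1)
    scene = rng.random((64, 64))

    psf = np.zeros((64, 64))
    psf[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]  # toy kernel

    # Convolution theorem: convolution = pointwise product in the
    # frequency domain, mirroring filtering in a system's Fourier plane.
    captured = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
    print(captured.shape)   # the "captured" image is the filtered scene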

Who should attend:
This session is ideal for professionals involved in research and development, optical engineering, photonic device development, computational imaging, machine learning for optics, and advanced nanofabrication. It is particularly relevant to those working with technologies such as metasurfaces, wavefront shaping, hybrid analog-digital imaging systems, convolutional metaoptics, embedded vision hardware, and optical information processing platforms.

About the presenter:
Shane Colburn received his Ph.D. in electrical engineering and completed his postdoctoral studies at the University of Washington. His research primarily focused on dielectric metasurfaces for computational imaging and information processing, emphasizing hybrid optical-digital systems that leverage the compact form factor offered by metasurfaces and the aberration mitigation capabilities of computational imaging. He developed design methods using metaoptics for object detection and performing convolutions in the optical domain. Additionally, he investigated methods for reconfiguring metasurfaces, including novel architectures, electromechanical tuning, and phase-change material metasurfaces.

Colburn was previously the director of optical design at Tunoptix, where he led the development of its proprietary designs and nanofabrication efforts for building robust, high-performance imaging systems using metaoptics. Colburn is now the founder and managing director of Edgedyne, a company that develops information processing technologies based on metaoptics and provides photonic design and consulting services to clients in a range of sectors, including telecommunications, semiconductor, remote sensing, medical imaging, and consumer electronics.

Thursday, October 02, 2025

Article about Japan's TDK and Apple iPhone cameras

Original article here: https://gori.me/iphone/iphone-news/161745

(Translated using Google Translate)

TDK's TMR Sensor is the Secret Behind iPhone Cameras; Tim Cook Praises Japanese Technology

At the first public opening of the Apple Yokohama Technology Center, TDK reveals thirty years of accumulated technology and a manufacturing process that competitors cannot imitate

Apple CEO Tim Cook visited the Apple Yokohama Technology Center (YTC) in Tsunashima, Yokohama, during his trip to Japan. This was the first time the facility has been opened to the public, revealing a state-of-the-art research and development center with about 6,000 square meters of lab space and a clean room.

On the same day, YTC presented four of the leading Japanese companies that support Apple's innovation: TDK, AGC, Kyocera, and Sony Semiconductor Solutions. Tim Cook told reporters, “Apple will never be satisfied with the status quo. We continue to ask for something better, and the same goes for Japanese companies. We will never be satisfied and will continue to develop, always aiming for further advancement,” emphasizing the importance of collaborative relationships with Japanese companies.

The partnership between TDK and Apple began before the first iPod, a relationship now spanning more than three decades. Today, almost all Apple products use TDK technology, which contributes to a wide range of fields including batteries, filters, inductors, microphones, and various sensors.

It is worth noting that TDK uses 100% renewable energy to manufacture all of the products it supplies for Apple products. And behind the beautiful photos that iPhone owners casually take every day is an ultra-compact part developed by TDK, the TMR sensor, a technology that all iPhone users benefit from.

TMR sensor stands for “tunnel magnetoresistance sensor,” an ultra-small sensor that detects changes in magnetic fields with extremely high sensitivity. It is so small that fifty thousand of them would fit in a wine glass, a size almost invisible to the naked eye.

The sensor's principle of operation relies on a quantum mechanical phenomenon. Put simply, an ultra-thin insulator is sandwiched between two magnetic layers, and the electrical resistance changes dramatically with changes in the external magnetic field. The TMR sensor reaches roughly a hundred times the sensitivity of a conventional Hall element and is characterized by extremely clear, almost “zero or one,” responses.
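The sensitivity quoted here is usually expressed through the standard tunnel-magnetoresistance ratio (a textbook aside, not from the article):

    % Standard definition of the tunnel magnetoresistance (TMR) ratio:
    % R_AP and R_P are the junction resistances with the two magnetic
    % layers' magnetizations antiparallel and parallel, respectively.
    \[
    \mathrm{TMR} = \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}}
    \]
    % Modern MgO-barrier junctions reach TMR ratios of order 100% or
    % more, which is what makes the near "zero or one" response possible.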

The way an iPhone focuses automatically the moment you launch the camera app and point the lens at a subject feels completely natural to many users on a daily basis. Behind it, the TMR sensor accurately tracks the position of the lens to within a thousandth of a second.

The specific mechanism is as follows. When the lens moves back and forth, a small magnet moves with it. The TMR sensor detects the changing distance to this magnet as a change in the magnetic field and instantly determines where the lens is. The camera system then makes the appropriate focus adjustment by detecting position rather than measuring distance.
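As a toy sketch of this position-from-field idea (our illustration, not TDK's algorithm; the field curve and travel range are made up): calibrate the field reading against lens position once, then invert by interpolation at run time.

    import numpy as np

    # Toy sketch of position-from-field sensing (illustration only; the
    # field curve and travel range are made up).
    cal_pos_um = np.linspace(0, 300, 16)               # lens travel (assumed)
    cal_field = 1.0 / (1.0 + cal_pos_um / 100.0) ** 2  # made-up monotonic B(x)

    def lens_position(field_reading):
        # np.interp needs ascending x, so flip the decreasing calibration.
        return np.interp(field_reading, cal_field[::-1], cal_pos_um[::-1])

    print(f"{lens_position(0.5):.0f} um")   # field halfway -> ~42 um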

The TMR sensor, first used for autofocus in the iPhone X, has also been applied to sensor-shift optical image stabilization (OIS) since the iPhone 12 series. Minute movements caused by camera shake are instantly detected, and the sensor itself is shifted to correct them.

The latest iPhone 17 series also uses TMR sensors for the front camera's Center Stage function, detecting fine lens movement in real time with roughly 100 times the sensitivity of common Hall elements.

A familiar, easy-to-understand example of a TMR sensor application is the joystick of a game controller. A conventional joystick uses a mechanical part called a potentiometer, which detects the angle through physical contact.

Joysticks using TMR sensors, on the other hand, operate without contact, which greatly improves response speed and accuracy. And since there is no mechanical wear, accuracy does not degrade even after long use.

Competitors can understand the structure of the TMR sensor itself by disassembling it. However, actually producing an equivalent product is very difficult. The reason is TDK's proprietary manufacturing process technology.

Semiconductor-type equipment is used for manufacturing, but it is not the equipment itself that matters. The core is the combination of multiple specialized technologies, such as TMR film deposition, magnetic material plating, and dry etching, to create a unique layered structure that senses magnetic fields from the intended direction while ignoring those from other directions.

Modern smartphones are becoming thinner, and many magnets are used inside them. One might worry whether the delicate TMR sensor can work properly in such an environment.

TDK cooperates closely with customers from the design stage to propose optimal sensor placement and design. The influence of a magnetic field weakens rapidly with distance, so interference problems can be solved with proper design by simply securing about 1 cm of physical separation. TDK engineers visit Cupertino fourteen times a year, and this close work with Apple's camera team is proof of the relationship.

TDK has leveraged ninety years of expertise in magnetic materials to establish this process technology. The TMR sensors manufactured at the Asama Techno Plant in Japan are also produced under a sustainable manufacturing system using 100% renewable energy.

Behind every photo that iPhone users casually take is this long-term accumulation of Japanese precision technology. TMR sensors are by no means a prominent component, but they will continue to evolve as an important technology supporting the modern smartphone experience.

Wednesday, October 01, 2025

Sony announces IMX927 a 105MP global-shutter CIS

Product page: https://www.sony-semicon.com/en/products/is/industry/gs/imx927-937.html

Release page: https://www.sony-semicon.com/en/info/2025/2025092901.html 

PetaPixel article: https://petapixel.com/2025/09/29/sonys-new-global-shutter-sensor-captures-105-megapixels-at-100fps/ 

Sony Semiconductor Solutions to Release the Industry-Leading Global Shutter CMOS Image Sensor for Industrial Use That Achieves Both Approximately 105-Effective-Megapixels and High-Speed 100 FPS Output

Delivering high-resolution and high-frame-rate imaging to contribute to diversified, advanced inspections 

Atsugi, Japan — Sony Semiconductor Solutions Corporation (Sony) today announced the upcoming release of the IMX927, a stacked CMOS image sensor with a back-illuminated pixel structure and global shutter. It is an industry-leading sensor that achieves both a high resolution of approximately 105 effective megapixels and high-speed output at a maximum frame rate of 100 fps.

The new sensor product is equipped with Pregius S™ global shutter technology made possible by Sony's original pixel structure, ensuring high-quality imaging performance with distortion-free imaging and minimal noise. By optimizing the sensor drive in pixel readout and the A/D converter, it supports high-speed image data output. Introducing this high-resolution, high-frame-rate model into the product lineup will help improve productivity in the industrial equipment domain, where recognition targets and inspection methods continue to diversify.

With factory automation progressing, the need for machine vision cameras that can capture a variety of objects at high speed and high resolution is growing in the industrial equipment domain. With their proprietary back-illuminated pixel structure, Sony's global shutter CMOS image sensors deliver high sensitivity and saturation capacity. Because they can capture moving subjects at high resolution without distortion, they are increasingly being used in a wide range of applications such as precision component recognition and foreign matter inspection. The new IMX927 features a high resolution of approximately 105 effective megapixels while delivering a high frame rate of up to 100 fps, helping shorten measurement and inspection times. It also shows promise in advanced measurement and inspection applications, for instance imaging larger objects at high resolution and three-dimensional inspection using multiple sets of image data.

Along with the IMX927, Sony will also release seven products with different image sizes and frame rates. It has also developed a new ceramic package with a connector that is compatible with all of these products, allowing cameras to be designed with sensors removable from camera modules. This can contribute to streamlining camera assembly and sensor replacement. By expanding its global shutter product lineup, Sony is contributing to the advancement of industrial equipment, where recognition and inspection tasks continue to become ever more precise and diversified.

Main Features
■ Global shutter technology with Sony’s proprietary pixel structure for high-resolution and high-sensitivity imaging
The new sensor is equipped with Pregius S global shutter technology. The very small 2.74 μm pixels that use Sony's proprietary back-illuminated structure and stacking enable the approximately 105-effective-megapixel resolution in a compact size with a high level of sensitivity and saturation capacity. In addition to inspections of precision components such as semiconductors and flat-panel displays, which require a high degree of accuracy, this also enables the capture of larger objects in distortion-free, high-resolution, low-noise images. Machine vision cameras can thereby achieve higher-precision measurement and inspection processes in a wide range of applications.
■ Circuit structure enabling a highly efficient sensor drive that saves power and makes high-speed imaging possible
The new sensor employs a circuit structure that optimizes pixel readout and sensor drive in the A/D converter, which saves power and enables faster data processing. This design makes a high-speed frame rate of up to 100 fps possible, reducing the time needed to output image data for more efficient measurement and inspection tasks. It also shows promise for advanced inspections, such as three-dimensional inspections that use multiple image data sets.
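To appreciate the readout challenge, a back-of-the-envelope data-rate figure (our arithmetic; the 12-bit output depth is an assumption):

    # Raw pixel data rate for ~105MP at 100 fps (12-bit depth assumed).
    pixels, fps, bits = 105e6, 100, 12
    print(pixels * fps * bits / 1e9, "Gbit/s")   # ~126 Gbit/s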
■ New ceramic package with connector to streamline camera assembly and contribute to stable operation
Sony has also developed a new ceramic package with a connector, compatible with a series of eight products including the IMX927, which makes it possible to flexibly attach and detach sensors from camera modules when designing cameras. Using this package makes camera assembly easier and streamlines the process of replacing sensors to suit camera specifications. It also has a superior heat-dissipation structure, which suppresses the impact of heat on camera performance, contributing to stable, long-term operation.


Friday, September 26, 2025

ISSW 2026 call for papers


The International SPAD Sensor Workshop

1st-4th June 2026 / Yonsei University, Seoul, South Korea

The 2026 International SPAD Sensor Workshop (ISSW) is a biennial event focusing on Single-Photon Avalanche Diodes (SPAD), SPAD-based sensors, and related applications. The workshop welcomes all researchers (including PhD students, postdocs, and early-career researchers), practitioners, and educators interested in these topics.

This fifth edition of the workshop will take place in Seoul, South Korea, hosted at Yonsei University in a venue suited to encouraging interaction and a shared experience among attendees. The workshop will be preceded by a 1-day introductory school on SPAD sensor technology, held in the same venue on June 1st, 2026.

The workshop will include a mix of invited talks and peer-reviewed contributions. Accepted works will be published on the International Image Sensor Society website (https://imagesensors.org/). Submitted works may cover any of the aspects of SPAD technology, including device modeling, engineering and fabrication, SPAD characterization and measurements, pixel and sensor architectures and designs, and SPAD applications.

Topics
Papers on the following SPAD-related topics are solicited:

●      CMOS/CMOS-compatible technologies
●      SiPMs
●      III-V, Ge-on-Si
●      Modeling
●      Quenching and front-end circuits
●      Architectures
●      Time-to-digital converters
●      Smart data-processing techniques
●      Applications of SPAD single pixel and arrays, such as:
o   Depth sensing / ToF / LiDAR
o   Time-resolved imaging
o   Low-light imaging
o   Quantum imaging
o   High-dynamic-range imaging
o   Biophotonics
o   Computational imaging
o   Quantum RNG
o   High-energy physics
o   Quantum communications
●      Emerging technologies & applications

Draft paper submission
Submission portal TBD.

Paper format - Each submission should comprise a 1000-character abstract and a 3-page paper, equivalent to 1 page of text and 2 pages of images. The submission must include the authors' name(s) and affiliation, mailing address, and email address. The formatting can adhere to either a style that integrates text and figures, akin to the standard IEEE format, or a structure with a page of text followed by figures, mirroring the format of the International Solid-State Circuits Conference (ISSCC) or the IEEE Symposium on VLSI Technology and Circuits. Examples illustrating these formats can be accessed in the online database of the International Image Sensor Society.

The deadline for paper submission is 23:59 CET, January 11th, 2026.

Papers will be considered on the basis of originality and quality. High-quality papers on work in progress are also welcome. Papers will be reviewed confidentially by the Technical Program Committee.

Accepted papers will be made freely available for download from the International Image Sensor Society website.

Poster submission
In addition to talks, we wish to offer all graduate students, post-docs, and early-career researchers an opportunity to present a poster on their research projects or other research relevant to the workshop topics.

If you wish to take up this opportunity, please submit a 1000-character abstract and a 1-page description (including figures) of the proposed research activity, along with the authors’ name(s) and affiliation, mailing address, and e-mail address.

The deadline for poster submission is 23:59 CET, January 11th, 2026.

Key dates
The deadline for paper submission is 23:59 CET, January 11th, 2026.

Authors will be notified of the acceptance of their papers and posters by February 22nd, 2026, at the latest.

The final paper submission date is March 29th, 2026.

The presentation material submission date is May 22nd, 2026.

Location
ISSW 2026 will be held fully in-person in Seoul, S. Korea, at the Baekyang Nuri Grand Ballroom at Yonsei University. 

Tuesday, September 16, 2025

Conference List - March 2026

Electronic Imaging - 1-5 March 2026 - Burlingame, California, USA - Website

22nd Annual Device Packaging Conference - 2-5 March 2026 - Phoenix, Arizona, USA - Website

EDIT (Excellence in Detector and Instrumentation Technologies) - 3-13 March 2026 - Geneva, Switzerland - Website

Image Sensors Europe - 17-18 March 2026 - London, UK - Website

Laser World of Photonics China - 18-20 March 2026 - Shanghai, China - Website

MEMS & Sensors Executive Conference - 31 March - 2 April 2026 - Cambridge, Massachusetts, USA - Website

If you know about additional local conferences, please add them as comments.

Return to Conference List index