Friday, December 12, 2025

Teradyne blog on automated test equipment - part 2

Part 2 of the blog is here: https://www.teradyne.com/2025/12/08/high-throughput-image-sensors-smart-testing-powers-progress/ 

High-Throughput Image Sensors: Smart Testing Powers Progress

As data streams grow, modern ATE ensures flexible, scalable accuracy

In the race to produce higher resolution image sensors—now pushing beyond 500 megapixels—the industry faces significant challenges. These sensors aren’t just capturing more pixels; they’re handling massive streams of data, validating intricate on-chip AI functions, and doing it all at breakneck speeds. For manufacturers, the challenge is as unforgiving as it is critical: test more complex devices, in less time, all while maintaining or even reducing costs.

Today’s high-resolution sensors must deliver more than just pixel perfection. They must demonstrate pixel uniformity, identify and compensate for defective pixels, verify electrical performance under varying conditions, and prove their resilience under strict power efficiency requirements. As AI functionality becomes integrated with image sensors, testing must also account for new processing capabilities and system interactions.   

The move toward higher resolutions introduces not only more data but significant production constraints as well. As sensors grow larger to accommodate more pixels, fewer can be tested simultaneously under the illumination field of automated test equipment (ATE). This site-count constraint can be offset with strategies for faster data processing and smarter testing. This is where Teradyne plays an important industry role, moving beyond supplier and stepping in as a strategic partner to help manufacturers redefine what’s possible.

Why High Resolution Means High Stakes  
As image sensor resolutions soar—from smartphones to cars to industrial systems—so do data demands. Each leap in resolution increases the volume of data that must be captured and processed, including during testing. For example, a single 50-megapixel image sensor produces around 100 megabytes of data per image. Multiple images, as many as 25 or more, must be captured under different lighting conditions to validate pixel response, uniformity, and defect detection. When multiplied across millions of units, data quickly scales into terabytes for every production batch.
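As a rough back-of-the-envelope check on that scaling (a quick Python sketch; the bytes-per-pixel figure and the batch size are illustrative assumptions, not numbers from the article):

# Rough test-data volume; 2 bytes/pixel (a packed raw sample) is an assumption.
pixels            = 50e6   # 50-megapixel sensor
bytes_per_pixel   = 2
images_per_device = 25     # captures under different lighting conditions

per_image  = pixels * bytes_per_pixel          # 1e8 bytes ~ 100 MB, as the article says
per_device = per_image * images_per_device     # ~2.5 GB of raw captures per unit
print(per_image / 1e6, "MB per image")         # 100.0
print(per_device / 1e9, "GB per device")       # 2.5
print(1e12 / per_device, "devices per TB")     # 400.0 -- terabytes accrue quickly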

Without innovation, this increase in data threatens to overwhelm production lines. Test times can double or even quadruple, slashing throughput and driving up costs. Manufacturers are left with a critical challenge: how to test sensors faster, without sacrificing accuracy or profitability.  

Teradyne Delivers High-throughput, Scalable Solutions  
Teradyne addresses these high-stakes dynamics with a combination of powerful, modular hardware and flexible software tools. At the heart of Teradyne’s approach is the belief that high performance and flexibility are essential. The company’s UltraSerial20G capture instrument for Teradyne UltraFLEXplus was built precisely for this moment and is designed to handle the enormous data loads of modern image sensors. It offers a modular architecture that enables manufacturers to respond to new interface protocols without expensive, time-consuming hardware redesigns.  

At the same time, Teradyne’s engineers kept the future in focus, looking beyond meeting current requirements by adding future capacity in the UltraSerial20G. Essentially, this provides customers with room to grow without replacing critical hardware. When new protocols emerge or higher data rates become the standard, manufacturers can count on the capabilities of their capture platform. While competitors scramble to keep pace, Teradyne customers are already testing the next generation of devices. 

Teradyne also recognizes that hardware is just part of the story. The company’s IG-XL software platform is where this flexibility comes to life. It gives engineers the tools to write custom test strategies at the pin level, controlling everything from voltage and timing to the finest adjustments of signal slopes. Importantly, this software environment allows manufacturers to build and refine their test programs without exposing sensitive intellectual property, a crucial advantage in an industry where secrecy is a competitive necessity.  

Overall, Teradyne’s flexible hardware and software architecture offers an integrated approach that enables manufacturers to manage increasing data volumes while maintaining production schedules. Teradyne’s Alexander Metzdorf describes it as giving customers the tools to write their own destiny: “Our role is to provide a toolbox that’s flexible and powerful enough for our customers to test whatever they need—when they need it—without being held back by fixed systems.”

Thursday, December 11, 2025

MRAM forum colocated with IEDM on Dec 11 (today!)

MRAM applications to image sensors may be of interest to our readers.

Masanori Hosomi, Sony - eMRAM for image sensor applications

eMRAM for Image Sensor Applications  

Because the logic chip in Sony's stacked image sensors (formed by bonding pixel wafers and logic wafers) is limited to an area equal to or smaller than that of the pixel chip, we have been using DRAM and SRAM while balancing the enhancement of image sensor functions against area limitations. To incorporate further functionality, there is demand for higher-performance memory that can suppress the memory macro area with small bit cells, as well as non-volatile memory that can retain code or administrative data without external memories. Embedded STT-MRAM has a bit cell less than one-third the size of an SRAM cell and is non-volatile, making eMRAM suitable for small-scale systems such as smart MCUs that have no external memory. Accordingly, eMRAM has already been commercialized in GNSS (GPS), smartwatch, and wireless communication systems. In the same way, it is presumed to be suitable for frame buffer memory and data memory in image sensor systems. This talk will present how the device can enhance functionality on the assumption of application in image sensors.

The full program is available here: https://sites.google.com/view/mramforum/program-booklet 

Wednesday, December 10, 2025

IEDM Image Sensors Session Dec 10 (today!)

Program link: https://iedm25.mapyourshow.com/8_0/sessions/session-details.cfm?ScheduleID=36&

Advanced Image Sensors
Wednesday, December 10
1:30 PM - 5:20 PM PST

This session includes 8 papers on the latest in image sensor technology. The first is an invited paper on progress in flash LiDAR using heterodyne detection. The next two papers present HDR imagers using LOFIC pixels that reach or exceed 120 dB of dynamic range. These are followed by papers on specialty imagers: one describing an all-organic flexible imager, and another an extremely high-frame-rate burst CIS. The final three papers of the session cover the latest technologies for shrinking pixels to sub-micron dimensions. Of special note is the last paper, which shrinks the dual-photodiode pixel to 0.7 µm.

3D FMCW Wide-angle Flash Lidar: Towards System Integration and In-pixel Frequency Measurement (Invited)

Frequency-modulated continuous-wave light detection and ranging (FMCW LiDAR) usually scans the scene point by point and measures distance via a Fourier transform (FT) of the heterodyne signal. A promising route to higher frame rates with larger image resolution is a flash version of FMCW LiDAR, using floodlight illumination and an image sensor. We first review our recent FMCW flash LiDAR system developments built with commercially available components and post-processing. Because an FT is difficult to implement in small pixels, we then introduce the principle of a heterodyne image sensor with in-pixel frequency measurement combined with a multi-chirp laser modulation strategy, targeting video-rate real-time measurements.
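For readers unfamiliar with FMCW ranging, the distance is encoded in the beat frequency of the heterodyne signal. A minimal sketch of the standard relation (the chirp parameters below are illustrative, not taken from the paper):

C = 299_792_458.0  # speed of light, m/s

def fmcw_range_m(f_beat_hz, bandwidth_hz, chirp_duration_s):
    # For a linear chirp of bandwidth B swept over duration T, a static target
    # at distance d yields a beat frequency f_beat = 2*d*B/(c*T), so
    # d = c * f_beat * T / (2 * B).  Doppler is ignored here.
    return C * f_beat_hz * chirp_duration_s / (2.0 * bandwidth_hz)

# Illustrative numbers: a 1 GHz chirp over 10 µs; a 2 MHz beat tone maps to ~3 m
print(fmcw_range_m(2e6, 1e9, 10e-6))  # ≈ 3.0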

A 120 dB Dynamic Range 3D Stacked 2-Stage LOFIC CMOS Image Sensor with Illuminance-Adaptive Signal Selection Function

This work presents a 3D stacked 2-stage lateral overflow integration capacitor (LOFIC) CMOS image sensor with an illuminance-adaptive signal selection function. To reduce the high data rates of conventional wide dynamic range sensors, this work proposes an illuminance-adaptive signal selection circuit that non-destructively determines the light intensity level using electrons accumulated in the first LOFIC stage. This allows the developed sensor to selectively output the one or two most appropriate signals out of three, reducing the data rate while maintaining a wide dynamic range. Furthermore, a 3D-stacked Si trench capacitor is employed to achieve over 8.6 Me- FWC with a 5.6 µm pixel pitch. The fabricated chip demonstrates a dynamic range of 120 dB with selected signal readout and a maximum SNR of 67.5 dB.
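As a sanity check on how those headline numbers relate (standard definitions applied to the reported figures; these are not calculations from the paper):

import math

fwc   = 8.6e6   # reported full-well capacity, electrons
dr_db = 120.0   # reported dynamic range

# DR = 20*log10(FWC / noise floor) implies a dark noise floor of:
print(fwc / 10**(dr_db / 20))    # ≈ 8.6 e- rms

# Shot-noise-limited peak SNR = 10*log10(FWC):
print(10 * math.log10(fwc))      # ≈ 69.3 dB, close to the reported 67.5 dB maximum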

A 129 dB Dynamic Range Triple Readout CMOS Image Sensor with FWC Enhancement Technology

We present a 2.1 μm pixel CMOS image sensor for automotive applications achieving 129 dB single exposure dynamic range with triple readout. The advanced sub-pixel architecture incorporates FDTI, conformal doping and 3D-MIM technologies, significantly enhancing full-well capacity. The sensor enables a seamless triple-image composition with 29 dB SNR at connection points, suitable for high-temperature automotive environments. 

Flexible 256×256 All-Organic-Transistor Active-Matrix Optical Imager with Integrated Gate Driver

Solution-processed organic thin film transistors (OTFTs) provide a promising platform for truly flexible, large-area integrated sensor systems. Here, an all-organic-transistor active-matrix imager using OTFTs for both the backplane and optical sensing layer is developed. Through reducing the density of states at the channel interface for a steep subthreshold swing and low dark current, the resulting organic phototransistor (OPT) presents a high detectivity of 2.2×10^16 Jones. The OPT is stacked on top of an OTFT switch with a high ON/OFF ratio of 4.7×10^10 to form the active matrix, and the gate driver is also integrated. Finally, a 256 × 256 (213 PPI) flexible active-matrix imager is demonstrated for fingerprint and low-distortion imaging with the constructed real-time imaging system.

A Global Shutter Burst CMOS Image Sensor with 6-Tpixel/s Readout Speed, 256-recording Frames and -170dB Parasitic Light Sensitivity

This paper presents an ultra-high-speed (UHS) global shutter burst CMOS image sensor (CIS) featuring pixelwise analog memory arrays. The developed CIS with 628H x 480V pixels achieves a maximum frame rate of 20 Mfps and a readout speed of 6.03 Tpixel/s. A recording length of 256 frames and a parasitic light sensitivity (PLS) of -170 dB were also achieved simultaneously in a UHS camera. This low PLS is achieved through comprehensive metal shielding of the pixel circuit and memory regions, and by spatial separation between the photodiode and memory regions, implemented using Si trench capacitors. The introduced bias adjustment circuit compensates for voltage variations among pixel positions due to ground resistance and pixel circuit current during the pixel driving period, enabling high-resolution video recording with an effective 628H x 480V pixels and a 48 μm pitch.
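The headline figures are internally consistent, as a quick check shows:

pixels_per_frame = 628 * 480           # 301,440 pixels
fps = 20e6                             # 20 Mfps burst rate
print(pixels_per_frame * fps / 1e12)   # ≈ 6.03 Tpixel/s, matching the paper

# 256 frames at 20 Mfps means the burst window itself is very short:
print(256 / fps * 1e6)                 # 12.8 µs of recorded event time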

Silicon-on-Insulator Pixel FinFET Technology for a High Conversion Gain and Low Dark Noise 2-Layer Transistor Pixel Stacked CIS

This study presents a 2-layer transistor pixel stacked 0.8-µm dual-pixel (DP) CIS with silicon-on-insulator (SOI) fin field-effect transistor (FinFET) technology. The application of SOI FinFETs as pixel transistors, featuring a body-less configuration on buried oxide, reduces parasitic capacitance at the floating diffusion node, thereby enhancing conversion gain and noise characteristics. The SOI FinFET achieves improved transconductance and source follower gain compared to a previous pixel FinFET. Solutions to the challenges associated with the SOI structure are demonstrated through a 0.8 µm DP CIS with SOI FinFETs.

A 0.43µm Quad-Photodiode CMOS Image Sensor by 3-Wafer-Stacking and Dual-Backside Deep Trench Isolation Technologies

Scaling pixel pitch below 0.5 µm has become highly challenging in conventional 2-wafer stacked CMOS image sensors due to the limited silicon area shared among photodiodes, photodiode-to-photodiode isolation, and the associated functional transistors, while maintaining excellent pixel performance. In this work, several advanced pixel technologies, including 3-wafer stacking, dual-backside deep trench isolation, and an enhanced composite metal grid, were proposed and employed to realize the world's smallest 0.43 µm pitch quad-photodiode pixel, achieving exceptional performance metrics of a full well capacity of 6000 e-, dark current of 1.3 e-/s, and read noise of 1.5 e-rms, without degradation in conversion gain.
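With the standard definitions, those metrics imply roughly the following (a rough reading of the reported numbers, not calculations from the paper):

import math

fwc, read_noise, dark = 6000.0, 1.5, 1.3   # e-, e- rms, e-/s

# Single-exposure dynamic range from FWC and read noise:
print(20 * math.log10(fwc / read_noise))   # ≈ 72 dB

# Dark-current charge over a typical 33 ms video exposure:
print(dark * 0.033)                        # ≈ 0.04 e-, negligible next to the read noise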

A 2-layer, 0.7μm-pitch Dual Photodiode Pixel CMOS Image Sensor with Metaphotonic Color Router

In this article, the world’s smallest 0.7μm-pitch dual photodiode pixel is presented. We integrated the 2-layer pixel with a hybrid Cu-Cu bonding process only, without introducing pixel-level deep contacts. By optimizing the layout of the Cu pad layer, we suppressed capacitive coupling between neighboring floating diffusion nodes while still achieving a conversion gain similar to that of a 0.7μm-pitch, 1-layer single photodiode pixel. We overcame the degradation of the auto-focus (AF) separation ratio by incorporating multi-focal, metaphotonic color routers (MPCR).

Tuesday, December 09, 2025

Sony enters the 200MP race

Link: https://www.sony-semicon.com/en/info/2025/2025112701.html

Sony Semiconductor Solutions to Release Approx. 200-Effective-Megapixel Image Sensor for Mobile Applications with Built-in AI Technology

Achieving high definition and high image quality for high-powered zooming on monocular cameras

Atsugi, Japan — Sony Semiconductor Solutions Corporation (Sony) today announced the upcoming release of the 1/1.12-type large-format LYTIA 901 mobile image sensor with a high resolution of approximately 200 effective megapixels.*1 This product uses a pixel array format that delivers both high resolution and high sensitivity, and further incorporates an image processing circuit utilizing AI technology within the sensor. It achieves high-definition image quality even with high-powered zooming of up to 4x on monocular cameras and offers new experiential value when shooting on mobile cameras.

Main Features
■Approximately 200-effective megapixels and Quad-Quad Bayer Coding (QQBC) array deliver both high resolution and high sensitivity
The new sensor uses a pixel pitch of 0.7 μm for an approximately 200-effective-megapixel resolution on a 1/1.12-type large-format sensor. Advances in pixel structure and color filter design increase the saturation signal level, contributing to improved dynamic range.
To leverage the high resolution of approximately 200 effective megapixels, the new product employs a Quad-Quad Bayer Coding (QQBC) array in which 16 (4×4) adjacent pixels are clustered with filters of the same color. During normal shooting, the signals of the 16 clustered pixels are processed as a single pixel unit, allowing the camera to maintain high sensitivity even at night and in dim indoor shooting conditions. On the other hand, during zoom shooting, a form of array conversion processing known as remosaicing reverts the clustered pixels to a normal pixel array to deliver high-resolution imaging.
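The 16:1 clustering is essentially pixel binning within each color plane. A minimal sketch of the idea (illustrative only; the sensor does this on-chip, and whether it sums or averages the cluster is not stated in the release):

import numpy as np

def bin_16_to_1(plane, n=4):
    # Average each 4x4 same-color cluster into one output value.
    # `plane` is a single color plane extracted from the QQBC mosaic.
    h, w = plane.shape
    return plane.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

plane = np.random.poisson(40, size=(8, 8)).astype(float)  # toy color plane
print(bin_16_to_1(plane).shape)  # (2, 2): 16 pixels -> 1, trading resolution for sensitivity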

■Equipped with an AI learning-based remosaicing function for high-quality imaging while zooming
Array conversion processing (remosaicing), which reverts the QQBC array to a normal pixel array, requires extremely advanced calculation processes. For this product, Sony has developed a new AI learning-based remosaicing for the QQBC array and mounted the processing circuit inside the sensor, another Sony industry first.*2 This new technology makes it possible to process high-frequency component signals, which are generally difficult to reproduce, offering superior reproduction of details such as fine patterns and letters. Furthermore, incorporating AI learning-based remosaicing directly in the sensor enables high-speed processing and up to 30 fps high-quality video capture when shooting with up to 4x zoom in 4K resolution.

■High dynamic range and rich tonal expression enabled by various HDR technologies
DCG-HDR and Fine12bit ADC technologies deliver high dynamic range and rich tonal expression across the entire zoom range up to 4x
In addition to Dual Conversion Gain-HDR (DCG-HDR) technology, which composites data read at different gain settings in a single frame (a conceptual sketch follows below), the new sensor is equipped with Fine12bit ADC (AD converter) technology that improves the quantization bit depth from the conventional 10 bits to 12. These features deliver a high dynamic range and rich tonal expression across the entire zoom range up to 4x.
HF-HDR technology delivers over 100 dB*3 high dynamic range performance
Hybrid Frame-HDR (HF-HDR) is an HDR technology that composites frames captured with short exposures with DCG data on a post-processing application processor. HF-HDR significantly improves the dynamic range compared to conventional HDR technology, delivering performance of over 100 dB.*3 This significantly suppresses highlight blowout in bright areas, as well as blackout in dark areas, delivering images that more closely resemble what the human eye actually sees.
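Conceptually, DCG-HDR merges two readouts of the same frame taken at different conversion gains. A minimal per-pixel sketch (the gain ratio and saturation threshold are invented parameters; Sony's actual compositing pipeline is not public):

import numpy as np

def dcg_merge(hcg, lcg, gain_ratio=8.0, hcg_sat=4000.0):
    # Use the high-conversion-gain sample (lower read noise) where it is not
    # saturated; elsewhere substitute the gain-scaled low-gain sample.
    return np.where(hcg < hcg_sat, hcg, lcg * gain_ratio)

hcg = np.array([100.0, 4095.0])   # toy codes: a dark pixel and a clipped pixel
lcg = np.array([12.5, 3000.0])
print(dcg_merge(hcg, lcg))        # [100. 24000.] -> one extended-range linear signal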

Friday, December 05, 2025

Teradyne blog on automated test equipment for image sensors

Link: https://www.teradyne.com/2025/11/11/invisible-interfaces/

Invisible Interfaces: The Hidden Challenge Behind Every Great Image Sensor

Flexible, future-ready test strategies are crucial to the irregular cycle of sensor design and standards development.

Alexander Metzdorf, Teradyne Inc. 

When you snap a photo on your phone or rely on a car’s camera for lane detection, you’re trusting an unseen network of technologies to deliver or interpret image data flawlessly. But behind the scenes, the interface between the image sensor and its processor is doing the heavy lifting, moving megabytes of data without error or delay.

While much of the industry conversation focuses on advances in resolution and sensor technology, another challenging aspect of modern imaging innovation is the interfaces—the invisible pathways that connect these sensors to the systems around them, including the processors tasked with interpreting their data. One of the most pressing and underappreciated imaging challenges lies in the ability of the interfaces to handle growing demands for speed, bandwidth, and reliability. The challenge isn’t one-size-fits-all. Smartphone cameras may need ultra-high resolution over short distances, while automotive sensors prioritize robustness over longer cable runs.

As image sensors and the technologies used to interpret the data evolve to deliver higher resolutions and even integrate artificial intelligence directly onto the chip, these interfaces are under more pressure than ever before. The challenge is both technical and practical: how do you design and test interfaces that must support vastly different applications, from the low-power demands of smartphones to the rugged, long-distance requirements of automotive systems?

And even more critically, how do you keep up when the rules change every few months?

The Growing Challenge in Image Sensor Development

The industry’s insatiable appetite for higher resolutions is well known, but what often goes unnoticed is the corresponding explosion in data traffic. A single image sensor on a smartphone might capture 500 megabytes of data in one shot. In automotive systems, that sensor could be sending critical visual information across several meters of cabling to a centralized processor, where decisions like emergency braking or obstacle detection happen in real-time. Industrial imaging is pushing resolutions even higher (up to 500 megapixels in some cases) to support inspection and automation systems, creating enormous data handling and processing demands.

Each of these scenarios represents wildly different demands on the interfaces connecting sensors to the rest of the system. In smartphones, the processor is typically located just millimeters away from the image sensor. Power efficiency is paramount, and interfaces must support blisteringly fast data rates to process high-resolution images without draining the battery. In an automotive application, a vehicle’s safety system might require those same sensors to transmit data over longer distances, and deliver real-time information and decision-making in harsh environments, while meeting stringent reliability and safety standards.

It’s a challenge compounded by the fact that image sensor manufacturers rarely control these interface requirements. Industry-wide, sensor manufacturers are generally forced to adopt a growing variety of interface standards and proprietary solutions, each with unique requirements for bandwidth, distance, latency, and power consumption.

This creates a relentless cycle of adaptation, where manufacturers are forced to develop and validate new interfaces almost as quickly as they can design the sensors themselves. It’s not uncommon for entirely new interface requirements to be handed down with lead times as short as six months. Unpredictability follows for both image sensor designers and the teams responsible for testing these devices.

The Shift Toward Proprietary Interfaces

While MIPI remains the dominant open standard for image sensor interfaces, proprietary protocols are growing. These custom protocols are typically developed privately by major technology companies to support their unique product requirements, for example, to achieve specific performance advantages. These custom interfaces are closely guarded secrets and often remain entirely undocumented outside of the companies that develop them, making it extremely difficult for test equipment vendors to keep pace.

Even a full teardown of a high-end smartphone won’t reveal how its camera interfaces are engineered. Yet, despite having no access to these underlying specifications, test teams are still expected to validate sensor performance against them.

For manufacturers and test engineers, this creates a near-constant state of uncertainty. New protocols can emerge rapidly and without warning, and must be supported almost immediately, which can cause test equipment providers to scramble to retool systems.

Teradyne’s Approach: Flexibility as a Strategic Imperative

Teradyne has set out to solve this challenge, developing a modular, future-ready approach that gives manufacturers the flexibility they need to thrive in unpredictable environments.

At the hardware level, Teradyne’s UltraSerial20G capture instrument for the UltraFLEXplus is designed for adaptability. Its modular architecture allows changes in key components and software to quickly accommodate new protocols.

Additional flexibility is added with Teradyne’s IG-XL software. Customers are empowered to develop highly customized test strategies, controlling every detail of the testing process, from voltage and timing to signal slopes and data handling.

The Path Ahead: Staying Competitive in a Fragmented, Fast-moving Market

For image sensor makers, the message is clear: choose test platforms that are prepared for proprietary protocols, evolving standards, and ever-tighter time-to-market demands.

In this landscape, Teradyne’s modular hardware and powerful, agile software ensure that manufacturers are meeting current demands and are prepared for whatever comes next. With early interface testing capabilities and scalable solutions that can adapt on the fly, Teradyne customers stay ahead of integration risks, control costs, and accelerate time-to-market.

In an industry where speed, innovation, and reliability are everything, that kind of flexibility is more than just a technical feature. It’s a strategic necessity that offers manufacturers the freedom to innovate, knowing they have the flexibility they need in their test solutions.

Wednesday, December 03, 2025

A-SSCC Circuit Insights CMOS Image Sensor

A-SSCC 2025 - Circuit Insights #4: Introduction to CMOS Image Sensors - Prof. Chih-Cheng Hsieh

About Circuit Insights: Circuit Insights features internationally renowned researchers in circuit design, who will deliver engaging and accessible lectures on fundamental circuit concepts and diverse application areas, tailored to a level suitable for senior undergraduate students and early graduate students. The event will provide a valuable and inspiring opportunity for those who are considering or pursuing a career in circuit design.

About the Presenter: Chih-Cheng Hsieh received the B.S., M.S., and Ph.D. degrees from the Department of Electronics Engineering, National Chiao Tung University, Hsinchu, Taiwan, in 1990, 1991, and 1997, respectively. From 1999 to 2007, he was with an IC design house, Pixart Imaging Inc., Hsinchu, where he led the Mixed-Mode IC Department as a Senior Manager and was involved in the development of CMOS image sensor ICs for PC, consumer, and mobile phone applications. In 2007, he joined the Department of Electrical Engineering, National Tsing Hua University, Hsinchu, where he is currently a Full Professor. His current research interests include low-voltage low-power smart CMOS image sensor ICs, ADCs, and mixed-mode IC development for artificial intelligence (AI), Internet of Things (IoT), biomedical, space, robot, and customized applications. Dr. Hsieh serves as a TPC member of ISSCC and A-SSCC, and an Associate Editor of IEEE Solid-State Circuits Letters (SSC-L) and IEEE Circuits and Systems Magazine (CASM). He was the SSCS Taipei Chapter Chair and the Student Branch Counselor of NTHU, Taiwan.

Monday, December 01, 2025

Time-mode CIS paper

In a recent paper titled "An Extended Time-Mode Digital Pixel CMOS Image Sensor for IoT Applications," Kim et al. from Yonsei University write:

Time-mode digital pixel sensors have several advantages in Internet-of-Things applications, which require a compact circuit and low-power operation under poorly illuminated environments. Although the time-mode digitization technique can theoretically achieve a wide dynamic range by overcoming the supply voltage limitation, its practical dynamic range is limited by the maximum clock frequency and device leakage. This study proposes an extended time-mode digitization technique and a low-leakage pixel circuit to accommodate a wide range of light intensities with a small number of digital bits. The prototype sensor was fabricated in a 0.18 μm standard CMOS process, and the measurement results demonstrate its capability to accommodate a 0.03 lx minimum light intensity, providing a dynamic range figure-of-merit of 1.6 and a power figure-of-merit of 37 pJ/frame·pixel. 

Sensors 2025, 25(23), 7228; https://doi.org/10.3390/s25237228
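The core idea of time-mode digitization is to encode light intensity in the time the photocurrent takes to discharge the pixel node to a reference level, counted in clock cycles. A toy sketch of that principle (all parameter values are invented for illustration and are unrelated to the paper's design):

def tmd_code(i_ph, c_pix=2e-15, dv=0.5, t_ck=2e-6, bits=6):
    # Time for the photocurrent i_ph (A) to discharge the pixel capacitance
    # c_pix (F) by dv volts; brighter light -> earlier crossing -> smaller count.
    t_cross = c_pix * dv / i_ph
    return min(int(t_cross / t_ck), 2**bits - 1)  # counter clips at full code

print(tmd_code(3e-10))   # bright pixel: code 1
print(tmd_code(3e-11))   # 10x dimmer: code 16
print(tmd_code(3e-13))   # very dim: counter saturates at 63 -- the DR limit the paper extends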

Figure 1. Operation principle of conventional CISs: (a) voltage mode; (b) fixed reference; and (c) ramp-down TMD.
Figure 2. Theoretical photo-transfer curve of conventional 6-bit TMDs.
Figure 3. The operation principle of the proposed E-TMD technique.
Figure 4. Theoretical photo-transfer curve of the proposed E-TMD: (a) TS = TU = TD = 2000tCK, Δ = 0; (b) TS = TU = TD = 100tCK, Δ = 0; (c) TS = TU = 0, TD = 45tCK, Δ = 0; and (d) TS = 0, TU = 25tCK, TD = 45tCK, Δ = 0.7.
Figure 5. The conventional time-mode digital pixel CIS adapted from [11]: (a) architecture; (b) pixel schematic diagram.
Figure 6. Architecture and schematic diagram of the proposed time-mode digital pixel CIS.
Figure 7. Operation of the proposed time-mode digital pixel CIS with α representing VDD-vREF-VT: (a) six operation phases and (b) timing diagram.
Figure 8. Transistor-level simulated photo-transfer curve comparison.
Figure 9. Chip micrograph.
Figure 10. Captured sample images: (a) 190 lx, TS = 17 ms, tCK = 50 µs; (b) 1.9 lx, TS = 400 ms, tCK = 2 µs.
Figure 11. Captured sample images and their histograms: (a) 20.5 lx, TS = 32.6 ms; (b) 200.6 lx, TS = 4.6 ms; (c) 2106 lx, TS = 0.64 ms; (d) 2106 lx, TS = 0.64 ms, TU = 0.74 ms, TD = 1.84 ms, Δ = 0.5.

Thursday, November 27, 2025

ISSCC 2026 Image Sensors session

ISSCC 2026 will be held Feb 15-19, 2026 in San Francisco, CA.

The advance program is now available: https://submissions.mirasmart.com/ISSCC2026/PDF/ISSCC2026AdvanceProgram.pdf 

Session 7 Image Sensors and Ranging (Feb 16)

Session Chair: Augusto Ximenes, CogniSea, Seattle, WA
Session Co-Chair: Andreas Suess, Google, Mountain View, CA

54×42 LiDAR 3D-Stacked System-On-Chip with On-Chip Point Cloud Processing and Hybrid On-Chip/Package-Embedded 25V Boost Generation

VoxCAD: A 0.82-to-81.0mW Intelligent 3D-Perception dToF SoC with Sector-Wise Voxelization and High-Density Tri-Mode eDRAM CIM Macro

A Multi-Range, Multi-Resolution LiDAR Sensor with 2,880-Channel Modular Survival Histogramming TDC and Delay Compensation Using Double Histogram Sampling

A 480×320 CMOS LiDAR Sensor with Tapering 1-Step Histogramming TDCs and Sub-Pixel Echo Resolvers

A 26.0mW 30fps 400×300-pixel SWIR Ge-SPAD dToF Range Sensor with Programmable Macro-Pixels and Integrated Histogram Processing for Low-Power AR/VR Applications

A 128×96 Multimodal Flash LiDAR SPAD Imager with Object Segmentation Latency of 18μs Based on Compute-Near-Sensor Ising Annealing Machine

A Fully Reconfigurable Hybrid SPAD Vision Sensor with 134dB Dynamic Range Using Time-Coded Dual Exposures

A 55nm Intelligent Vision SoC Achieving 346TOPS/W System Efficiency via Fully Analog Sensing-to-Inference Pipeline

A 1.09e- Random-Noise 1.5μm-Pixel-Pitch 12MP Global-Shutter-Equivalent CMOS Image Sensor with 3μm Digital Pixels Using Quad-Phase-Staggered Zigzag Readout and Motion Compensation

A 200MP 0.61μm-Pixel-Pitch CMOS Imager with Sub-1e- Readout Noise Using Interlaced-Shared Transistor Architecture and On-Chip Motion Artifact-Free HDR Synthesis for 8K Video Applications

Tuesday, November 25, 2025

Ubicept releases toolkit for SPAD and CIS

Ubicept Extends Availability of Perception Technology to Make Autonomous Systems Using Conventional Cameras More Reliable

Computer vision processing unlocks higher quality, more trustworthy visual data for machines whether they use advanced sensors from Pi Imaging Technology or conventional vision systems

BOSTON--(BUSINESS WIRE)--Ubicept, the computer vision startup operating at the limits of physics, today announced the release of the Ubicept Toolkit, which will bring its physics-based imaging to any modern vision system. Whether for single-photon avalanche diode (SPAD) sensors in next-generation vision systems or immediate image quality improvements with existing hardware, Ubicept provides a unified, physics-based approach that delivers high quality, trustworthy data.

“Ubicept’s technology revolutionizes how machines see the world by unlocking the full potential of today's and tomorrow's image sensors. Our physics-based approach captures the full complexity of motion, even in low-light or high-dynamic-range conditions, providing more trustworthy data than AI-based video enhancement,” said Sebastian Bauer, CEO of Ubicept. “With the Ubicept Toolkit, we’re now making our advanced single-photon imaging more accessible for a broad range of applications from robotics to automotive to industrial sensing.”

Ubicept’s solution is designed for the most advanced sensors to maximize image data quality and reliability. Now, the Toolkit will support any widely available CMOS camera with raw uncompressed output, giving perception developers immediate quality gains.

“Autonomous systems need a better way to understand the world. Our mission is to turn raw photon data into outputs that are specifically designed for computer vision, not human consumption,” said Tristan Swedish, CTO of Ubicept. “By making our technology available for more conventional vision systems, we are giving engineers the opportunity to experience the boost in reliability now while creating an easier pathway to SPAD sensor adoption.”

SPAD sensors – traditionally used in 3D systems – are poised to reshape the image sensor and computer vision landscape. While the CMOS sensor market is projected to grow to $30B by 2029 at 7.5% CAGR, the SPAD market is growing nearly three times faster, expected to reach $2.55B by 2029 at 20.1% CAGR.
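For context, those projections imply roughly the following present-day market sizes (a quick back-calculation; the five-year 2024-to-2029 window is my assumption, not stated in the release):

def backcast(future_usd, cagr, years):
    # Implied present-day market size from a future projection at a given CAGR.
    return future_usd / (1 + cagr) ** years

print(backcast(30e9, 0.075, 5) / 1e9)    # CMOS image sensors: ≈ $20.9B today
print(backcast(2.55e9, 0.201, 5) / 1e9)  # SPAD sensors: ≈ $1.0B today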

Pi Imaging Technology is a leader in the field with its SPAD Alpha, a next-generation 1-megapixel single-photon camera that delivers zero read noise, nanosecond-level exposure control, and frame rates up to 73,000 fps. Designed for demanding scientific applications, it offers researchers and developers extreme temporal precision and light sensitivity. The Ubicept Toolkit builds on these strengths by transforming the SPAD Alpha’s raw photon data into clear, ready-to-use imagery for perception and analysis.
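At those specifications the raw photon-data rate is substantial, which is where real-time processing earns its keep. A rough estimate (the 1-bit-per-pixel binary frame format is an assumption about SPAD output, not a published spec):

pixels = 1_000_000     # 1-megapixel SPAD array
fps = 73_000           # maximum frame rate
bits_per_pixel = 1     # assume binary photon / no-photon frames

print(pixels * fps * bits_per_pixel / 8 / 1e9)  # ≈ 9.1 GB/s of raw photon data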

“Ubicept shares our deep commitment to advancing perception technology,” said Michel Antolović, CEO of Pi Imaging Technology. “By combining our SPAD Alpha’s state-of-the-art hardware with Ubicept’s real-time processing, perception engineers can get the most from what single-photon imaging has to offer.”

The Toolkit provides engineering teams with everything they need to visualize, capture, and process video data efficiently with the Ubicept Photon Fusion (UPF) algorithm. The SPAD Toolkit also includes Ubicept’s FLARE (Flexible Light Acquisition and Representation Engine) firmware for optimized photon capture. In addition, the Toolkit includes white-glove support to early adopters for a highly personalized and premium experience.

The Ubicept Toolkit will be available in December 2025. To learn how it can elevate perception performance and integrate into existing workflows, contact Ubicept here.

Monday, November 24, 2025

Job Postings - Week of November 23 2025


ByteDance

Image Sensor Digital Design Lead - Pico

San Jose, California, USA

Link

ST Microelectronics

Silicon Photonics Product Development Engineer

Grenoble, France

Link

DigitalFish

Senior Systems Engineer, Cameras/Imaging

Sunnyvale, California, USA [Remote]

Link

Imasenic

Digital IC Design Engineer

Barcelona, Spain

Link

Meta

Technical Program Manager, Camera Systems

Sunnyvale, California, USA

Link

Westlake University

Ph.D. Positions in Dark Matter & Neutrino Experiments

Hangzhou, Zhejiang, China

Link

General Motors

Advanced Optical Sensor Test Engineer

Warren, Michigan, USA [Hybrid]

Link

INFN

Post-Doc senior research grant in experimental physics

Frascati, Italy

Link

Northrop Grumman

Staff EO/IR Portfolio Technical Lead

Melbourne, Florida, USA

Link