Thursday, February 12, 2026

IR sensor tech firm Senseeker acquires Axis Machine

Santa Barbara, California (February 11th, 2026) - Senseeker Corp, a leading innovator of digital infrared image sensing technology, can now respond to customer requirements more quickly and thoroughly following the acquisition of Axis Machine (Santa Barbara, California) by Senseeker Machining Company (SMC).

Senseeker Machining Company will continue to support and grow Axis Machine’s established customer base, built up over 20+ years of delivering high-quality machined parts. The acquisition will enable Senseeker to further grow its mechanical component lines and to reduce lead times on machined parts used in Senseeker’s programs and in its portfolio of industry-standard commercial cryogenic test equipment for testing infrared focal plane arrays.

SMC will continue to operate from the existing machine shop facility at 81 David Love Place, a short walk from the Senseeker Corp headquarters in Santa Barbara. The SMC facility is equipped with several 3-axis and 4-axis CNC machining centers, lathes, and multi-axis milling equipment to maintain a high throughput of work. A Mitutoyo DCC-CMM, an optical comparator, and a full range of precision inspection tools are used for quality control. SMC also runs industry-standard CAD and CNC programming software.

“Bringing high-quality machining capability to Senseeker is an important step in the evolution of the company’s unique lateral business model. Senseeker’s cryogenic Sensor Test Unit product lines have grown significantly in recent years and this acquisition will help accelerate delivery times,” said Kenton Veeder, CEO of Senseeker. “Additionally, our mechanical engineering has expanded across our program portfolio and our new machining capability will help us build better mechanical systems through tight coupling between machining and engineering. We are excited to build SMC into a high-quality machining organization for existing shop customers and new sensor community customers alike.”

https://senseeker.com/news/PR-20260211.htm 

Monday, February 09, 2026

Paper on 3D-stacked InGaAs/InP SPAD

In a "hot-off-the-press" paper in Optics Express titled "Room-temperature, 96×96 pixel 3D-stacked InGaAs/InP SPAD sensor with complementary gating for flash LiDAR", Yildirim et al. from EPFL/Fraunhofer/FBH write:

A room-temperature 3D-stacked flash LiDAR sensor is presented for the short-wave infrared (SWIR). The 96×96 InGaAs-InP SPAD array in the top tier is biased by a circuit in the bottom tier that implements complementary cascoded gating at the pixel level to control noise and afterpulsing. The bottom-tier chip is fabricated in a 110-nm CMOS technology. The sensor is tested with a 1550 nm laser operating at 100 μW to 3.1 mW average power. The SPADs are gated with 3 ns pulses with 500 ps skew. Intensity images and depth maps are shown both indoors and outdoors at 10 m in 120 klux background light, with telemetry up to 100 m at better than 2% accuracy.
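As an aside, the depth values in a gated flash LiDAR like this come straight from photon time-of-arrival statistics. The Python sketch below is a minimal illustration of that conversion (not code from the paper): per-pixel SPAD timestamps are histogrammed, the peak bin is taken as the round-trip time, and depth follows from d = c·t/2. The bin width and noise model are assumed values.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def depth_from_timestamps(timestamps_s, bin_width_s=100e-12):
    """Estimate depth from one pixel's photon time-of-arrival histogram.

    timestamps_s: photon arrival times relative to the laser pulse (seconds).
    The peak histogram bin is taken as the round-trip time; depth = c*t/2.
    """
    edges = np.arange(0.0, timestamps_s.max() + bin_width_s, bin_width_s)
    hist, _ = np.histogram(timestamps_s, bins=edges)
    t_peak = edges[np.argmax(hist)] + bin_width_s / 2
    return C * t_peak / 2

# Example: returns from a target ~10 m away plus uncorrelated background photons
rng = np.random.default_rng(0)
signal = rng.normal(2 * 10.0 / C, 200e-12, size=200)   # round-trip ~66.7 ns, 200 ps jitter
noise = rng.uniform(0.0, 100e-9, size=50)              # background spread over the gate
print(f"estimated depth: {depth_from_timestamps(np.concatenate([signal, noise])):.2f} m")
```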


Proposed complementary optical gating pixel for InGaAs SPADs (a) arranged in a 96×96 array (b) and its timing diagram (c).

Micrograph of the bottom tier (a) and 3D-stacked chip micrograph (b). Illustration of the indium bump bonding scheme (c).
 


Outdoor flash LiDAR images with 120 klux background sunlight. The scene, intensity image, and depth image shown for 3 m (a-c) and 10 m (d-f).

Friday, February 06, 2026

Passive SPAD simulator and dataset

Preprint: https://arxiv.org/abs/2601.12850

In a preprint titled "Accurate Simulation Pipeline for Passive Single-Photon Imaging," Suonsivu et al. write:

Single-Photon Avalanche Diodes (SPADs) are new and promising imaging sensors. These sensors are sensitive enough to detect individual photons hitting each pixel, with extreme temporal resolution and without readout noise. Thus, SPADs stand out as an optimal choice for low-light imaging. Due to the high price and limited availability of SPAD sensors, the demand for an accurate data simulation pipeline is substantial. Indeed, the scarcity of SPAD datasets hinders the development of SPAD-specific processing algorithms and impedes the training of learning-based solutions. In this paper, we present a comprehensive SPAD simulation pipeline and validate it with multiple experiments using two recent commercial SPAD sensors. Our simulator is used to generate the SPAD-MNIST, a single-photon version of the seminal MNIST dataset, to investigate the effectiveness of convolutional neural network (CNN) classifiers on reconstructed fluxes, even at extremely low light conditions, e.g., 5 mlux. We also assess the performance of classifiers exclusively trained on simulated data on real images acquired from SPAD sensors at different light conditions. The synthetic dataset encompasses different SPAD imaging modalities and is made available for download. 

The dataset download link is here: https://boracchi.faculty.polimi.it/Projects/SPAD-MNIST.html

This is based on work presented at the Synthetic Data for Computer Vision Workshop at the European Conference on Computer Vision (ECCV) in 2024.
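To give a feel for what such a simulator has to model, here is a minimal Python sketch (my own illustration, not the authors' pipeline) of the basic statistics of a passive 1-bit SPAD: Poisson photon arrivals thinned by photon detection efficiency, plus dark counts, produce binary frames, and the average firing rate can be inverted back to a flux estimate. All parameter names and values are assumptions.

```python
import numpy as np

def simulate_spad_binary_frames(flux_ppp, n_frames, pde=0.3, dark_ppp=1e-3, rng=None):
    """Simulate 1-bit SPAD frames from a per-pixel photon flux map.

    flux_ppp: mean incident photons per pixel per exposure (2D array).
    A pixel fires if at least one photon (or dark count) is detected,
    so P(fire) = 1 - exp(-(pde*flux + dark)).
    """
    rng = rng or np.random.default_rng()
    p_fire = 1.0 - np.exp(-(pde * flux_ppp + dark_ppp))
    return rng.random((n_frames, *flux_ppp.shape)) < p_fire

def estimate_flux(binary_frames, pde=0.3, dark_ppp=1e-3):
    """Invert the detection model: average firing rate -> photon flux estimate."""
    p_hat = binary_frames.mean(axis=0).clip(1e-6, 1 - 1e-6)
    return (-np.log(1.0 - p_hat) - dark_ppp) / pde

flux = np.full((8, 8), 0.5)                  # 0.5 photons/pixel/exposure
frames = simulate_spad_binary_frames(flux, n_frames=1000)
print(estimate_flux(frames).mean())          # should come out close to 0.5
```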

 

Wednesday, February 04, 2026

Samsung's US fab for iPhone CIS

TheElec reported in August 2025 that Samsung plans to use its Austin, Texas fab to make sensors for future iPhones:

Samsung to form smartphone image sensor line in Austin for Apple

3-layer stacked CMOS image sensor to power iPhone 18 in 2026

The plan ... seems to be a response to tariffs on South Korea-made semiconductors that the Trump Administration plans to impose.

If all goes to plan, it will mark the first time that Samsung is manufacturing CIS in the US.

The CIS is made with wafer-to-wafer hybrid bonding ... requires precise process control and only Sony and Samsung have commercialized it.

Monday, February 02, 2026

Canon's weighted photon counting SPAD array

In June 2025 Canon announced an HDR SPAD sensor that performs weighted counting (as opposed to simply accumulating photon counts): https://global.canon/en/news/2025/20250612.html

Canon develops High Dynamic Range SPAD sensor with potential to detect subjects even in low-light conditions or environments with strong lighting contrasts thanks to unique technology

TOKYO, June 12, 2025—Canon Inc. announced today that it has developed a 2/3" SPAD sensor featuring approximately 2.1 megapixels and a high dynamic range of 156 dB. Thanks to a unique circuit technology, it realizes high dynamic range, low power consumption, and the ability to mitigate flickering from LED lights. Canon will continue further technological development and aims to start mass production.

 SPAD sensors employ a principle called photon counting, which detects each photon (light particle) entering a pixel and counts the incident number of photons. This sensor does not take in any noise during the readout process, making it possible to capture a clear image of subjects. Also, it can measure the distance to the subject at high speed with excellent timing precision.

However, due to limitations in processing speed, when the number of incident photons exceeds a certain threshold under high-illuminance conditions, conventional SPAD sensors have difficulty separating individual photons during readout, which causes the acquired image to white out. In addition, such sensors consume a large amount of power because each photon detection consumes power independently.

On the other hand, Canon's newly developed SPAD sensor uses a unique technology called “weighted photon counting.” Exploiting the fact that the frequency at which photons reach the sensor correlates with illuminance, this technology measures the time it takes for the first photon to reach the pixel within a certain time frame and then estimates the total number of photons that will arrive at the pixel over that period. As a result, a large number of photons can be estimated precisely without being counted individually, so the image does not white out and the subject is captured clearly.

While a conventional SPAD sensor counts every incident photon one by one, the new method estimates the total number of incident photons within a certain timeframe based on the time it takes for the first incident photon to arrive. As a result, the new sensor achieves a high dynamic range of 156 dB, approximately five times higher than the previous sensor. At the same time, this approach reduces the power consumption per pixel by roughly 75% by reducing the frequency of photon detections. In addition, this technology also mitigates the flickering that occurs when capturing light from LEDs such as traffic lights.
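The statistical idea behind this is simple: for Poisson photon arrivals at rate λ, the waiting time to the first photon is exponentially distributed with mean 1/λ, so an earlier first photon implies a brighter pixel. The toy Python sketch below illustrates only that relationship; it is my reading of the description above, not Canon's circuit, and the exposure window and rates are made-up numbers.

```python
import numpy as np

def estimate_photons_from_first_arrival(t_first_s, window_s):
    """Estimate total photons in an exposure window from the first arrival time.

    For Poisson arrivals the first-photon time is exponential with mean 1/rate,
    so rate ~= 1/t_first and the total over the window ~= window / t_first.
    """
    t = np.clip(t_first_s, 1e-12, window_s)   # guard against division by zero
    return window_s / t

rng = np.random.default_rng(1)
window = 1e-3                                  # assumed 1 ms exposure window
for rate in (1e4, 1e6, 1e8):                   # photons/s: dim to very bright
    t_first = rng.exponential(1.0 / rate)      # simulated first-photon arrival
    est = estimate_photons_from_first_arrival(t_first, window)
    print(f"true ~{rate * window:.0f} photons, single-shot estimate ~{est:.0f}")
```

A single first-photon time is of course a noisy estimate, so a real implementation would combine or weight many such gated measurements; the point here is only that the estimate scales with the window-to-arrival-time ratio rather than saturating at a counter limit.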

Canon anticipates that this new sensor will have a wide variety of applications, such as surveillance, onboard vehicle equipment, and industrial use. For instance, it is expected to be applied to autonomous driving and advanced driving-assistance systems. As autonomous driving technology advances, the demand for onboard sensors is increasing. At the same time, as many countries tighten related safety standards, there is a need for advanced sensor technology to ensure the safety of autonomous driving. However, the CMOS sensors currently used in vehicles are known to have visibility issues in environments with strong contrast between bright and dark areas, such as tunnel exits, or in extremely low-light conditions. Canon has addressed these issues by combining new features with conventional SPAD sensors, which excel in low-light shooting.

Canon announced this new sensor technology on June 12, 2025 at the 2025 Symposium on VLSI Technology and Circuits held in Kyoto, Japan.

  •  While conventional SPAD sensors count all incident photons one by one, the newly developed SPAD sensor uses a unique technology called weighted photon counting that estimates the total number of incident photons within a certain period of time based on the detection of the first incident photon. This greatly widens the range of photon counts that can be measured.
  •  This technology can also mitigate flickering when light from LEDs such as traffic lights is captured.

 

Weighted photon counting enables photon detection in both high and low levels of illuminance
 
With excellent high dynamic range performance of 156 dB, a clear image is captured including bright and dark subjects

Simplified illustration of the weighted photon counting technique. The earlier the arrival of the first incident photon, the brighter the incident light.

Friday, January 30, 2026

Sony releases image stabilizer chip

Link: https://www.sony-semicon.com/en/products/lsi-ic/stabilizer.html

The Stabilizer Large-Scale Integration (LSI) CXD5254GG chip combines an image sensor and 6-axis inertial measurement unit (IMU) to perform electronic image stabilization (EIS), removing vibrations and maintaining a level horizon in the video input from the image sensor, and outputting the stabilized image. The advanced algorithm for attitude control reduces blurs caused by camera vibrations and achieves both real-time horizon stabilization and suppression of “jello effect” video distortion. The Stabilizer LSI is also equipped with Sony’s unique contrast improvement feature, the intelligent Picture Controller (iPC). Together with the stabilizing features, it enables the camera to clearly capture objects or information that could not be previously recognized due to vibrations.

The CXD5254GG creates new imaging value that conventional camera technologies cannot achieve, enabling applications across a wide range of fields including broadcasting, sports entertainment, security, and robotics. In addition to the CXD5254GG itself, a choice of compact camera modules combining the IMX577 sensor and lens is also available for broadcasting/video production applications, meeting a wide range of user needs.

The product performs a wide range of signal processing including high-precision blur correction via EIS, horizon maintenance, suppression of the jello effect, and lens distortion correction. We also provide established stabilizer sample parameters, derived from a variety of actual applications including onboard cameras, dashboard cameras, wearable devices, first-person view (FPV) drones, remote-controlled (RC) cars, and fixed-point cameras, backed by Sony’s many years of expertise and know-how. These sample parameter configurations can be optimized for specific applications to maximize the potential of the CXD5254GG’s stabilizing performance.
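For context, the core geometric operation in gyro-driven EIS is to undo the camera rotation reported by the IMU with a homography H = K * R^-1 * K^-1 applied to each frame. The Python sketch below illustrates only that step; the intrinsics, the small rotation angles, and the use of OpenCV for the warp are my own illustrative assumptions, not details of the CXD5254GG.

```python
import numpy as np
import cv2  # OpenCV used only for the warp; any homography warp would do

def rotation_matrix(rx, ry, rz):
    """Camera rotation (radians) integrated from gyro rates over one frame."""
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def stabilize_frame(frame, gyro_angles_rad, K):
    """Undo a pure camera rotation with the homography H = K * R^-1 * K^-1."""
    R = rotation_matrix(*gyro_angles_rad)
    H = K @ np.linalg.inv(R) @ np.linalg.inv(K)
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))

# Illustrative intrinsics for a 1920x1080 sensor and a small unwanted tilt/roll
K = np.array([[1400.0, 0.0, 960.0], [0.0, 1400.0, 540.0], [0.0, 0.0, 1.0]])
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
stable = stabilize_frame(frame, gyro_angles_rad=(0.01, -0.005, 0.02), K=K)
```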


 

Wednesday, January 28, 2026

EETimes Prophesee article


A few quotes:

“We have the sensor, defined use cases, and the full-stack demonstration, [including] machine learning models to software integration in platforms such as Raspberry Pi,” Ferré said. “What probably [has been] missing is the scale of the business and demonstration of value.”

“Our technology is fantastic, but the way to make money with it…probably needed a bit of tuning, so this is what we’re doing,” he added.

“I’ve been on the phone with one of our integrators for Electronic Supervision System cameras, and they said, ‘we’ve never sold so many evaluation kits in so many industries—drones, manufacturing’. There’s traction [here]…this is huge.”

When asked about acquisition potential—given the recent SynSense-iniVation merger, and myriad market heavyweights—he replied: “We’re talking to very powerful players. They are not looking to buy us.”

Monday, January 26, 2026

Sony's global shutter image sensor in JSSC

In a recent paper titled "A 5.94-μm Pixel-Pitch 25.2-Mpixel 120-Frames/s Full-Frame Global Shutter CMOS Image Sensor With Pixel-Parallel 14-bit ADC", Sakakibara et al. from Sony Semiconductor Solutions (Japan) write:

We present a 25.2-Mpixel, 120-frames/s full-frame global shutter CMOS image sensor (CIS) featuring pixel-parallel analog-to-digital converters (ADCs). The sensor addresses the limitations of conventional rolling shutters (RSs), including motion distortion, flicker artifacts, and flash banding, while maintaining image quality suitable for professional and advanced amateur photography. A stacked architecture with 3-μm-pitch Cu–Cu hybrid bonding enables more than 50 million direct connections between the pixel array and the ADC circuits. The pixel-parallel single-slope ADCs operate with a comparator current of 25 nA and use a positive-feedback (PFB) scheme with noise-bandwidth control using an additional 11.4-fF capacitor, achieving 2.66 e-rms (166.8 μVrms) random noise (RN) at 0-dB gain with a REF slope of 2161 V/s. The 5.94-μm pixel pitch accommodates 30-bit latches designed under SRAM rules in a 40-nm CMOS process. Noise analysis reveals that in subthreshold operation, the dominant noise contributors are the comparator current, REF slope, and second-stage load capacitance. The sensor delivers 14-bit resolution, a 75.5-dB dynamic range (DR), and 120-frames/s operation at a power consumption of 1545 mW. A figure of merit of 0.083 e-rms·pJ/step is comparable to state-of-the-art RS sensors. These results demonstrate that pixel-parallel ADC technology can be scaled to tens of megapixels while preserving high image quality and energy efficiency, enabling motion-artifact-free imaging in battery-powered consumer cameras.
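As a side note, the single-slope conversion described in the abstract can be captured in a few lines: a shared reference ramp rises at a fixed slope while a per-pixel comparator watches the sampled pixel voltage, and a latch stores the counter value at the crossing instant. The Python sketch below is a behavioral toy model; the 2161 V/s slope is taken from the abstract, but the counter clock rate and voltage values are my assumptions, not Sony's design.

```python
import numpy as np

def single_slope_adc(v_pixel, ramp_slope_v_per_s=2161.0, clock_hz=200e6, n_bits=14):
    """Behavioral model of a pixel-parallel single-slope ADC.

    A reference ramp rises at ramp_slope_v_per_s; the latch stores the clock
    count at the moment the ramp crosses the sampled pixel voltage.
    """
    max_code = 2**n_bits - 1
    t_cross = v_pixel / ramp_slope_v_per_s            # time when ramp == v_pixel
    code = np.round(t_cross * clock_hz).astype(int)   # latched counter value
    return np.clip(code, 0, max_code)

# Quantize a few example pixel voltages (volts); slope from the abstract,
# 200 MHz clock is an assumed value for illustration only.
print(single_slope_adc(np.array([0.0, 1e-4, 5e-3, 0.1])))
```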

Full paper link [behind paywall]: https://ieeexplore.ieee.org/document/11219086

Sunday, January 25, 2026

Conference List - July 2026

2nd International Conference on Optical Imaging and Detection Technology (OIDT 2026) - 3-5 July 2026 - Yulin, China - Website

New Developments in Photodetection - 6-10 July 2026 - Troyes, France - Website

11th International Smart Sensor Technology Exhibition - 8-10 July 2026 - Goyang, South Korea - Website

Tenth International Conference on Imaging, Signal Processing and Communications - 11-13 July 2026 - Kobe, Japan - Website

IEEE International Conference on Flexible Printable Sensors and Systems - 12-15 July 2026 - Atlanta, Georgia, USA - Website

Optica Sensing Congress - 12-17 July 2026 - Maastricht, Netherlands - Website

IEEE Sensors Applications Symposium - 15-17 July 2026 - Vitoria, Brazil - Website

American Association of Physicists in Medicine 67th Annual Meeting and Exhibition - 19-22 July 2026 - Vancouver, BC, Canada - Website

IEEE Nuclear & Space Radiation Effects Conference (NSREC) - 20-24 July 2026 - San Juan, Puerto Rico, USA - Website

34th International Workshop on Vertex Detectors - 20-24 July 2026 - Stoos, Switzerland - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Thursday, January 22, 2026

Synthetic aperture imager

Link: https://scitechdaily.com/this-breakthrough-image-sensor-lets-scientists-see-tiny-details-from-far-away/

Open-access paper: Multiscale aperture synthesis imager  https://www.nature.com/articles/s41467-025-65661-8

A new lens-free imaging system uses software to see finer details from farther away than optical systems ever could before.

Imaging technology has reshaped how scientists explore the universe – from charting distant galaxies using radio telescope arrays to revealing tiny structures inside living cells. Despite this progress, one major limitation has remained unresolved. Capturing images that are both highly detailed and wide in scope at optical wavelengths has required bulky lenses and extremely precise physical alignment, making many applications difficult or impractical.

Researchers at the University of Connecticut may have found a way around this obstacle. A new study led by Guoan Zheng, a biomedical engineering professor and director of the UConn Center for Biomedical and Bioengineering Innovation (CBBI), along with his team at the University of Connecticut College of Engineering, was published in Nature Communications. The work introduces a new imaging strategy that could significantly expand what optical systems can do in scientific research, medicine, and industrial settings.

Why Synthetic Aperture Imaging Breaks Down at Visible Light

“At the heart of this breakthrough is a longstanding technical problem,” said Zheng. “Synthetic aperture imaging – the method that allowed the Event Horizon Telescope to image a black hole – works by coherently combining measurements from multiple separated sensors to simulate a much larger imaging aperture.”

This approach works well in radio astronomy because radio waves have long wavelengths, which makes precise coordination between sensors achievable. Visible light operates on a much smaller scale. At those wavelengths, the physical accuracy needed to keep multiple sensors synchronized becomes extremely difficult to maintain, placing strict limits on traditional optical synthetic aperture systems.

Letting Software Do the Synchronizing

The Multiscale Aperture Synthesis Imager (MASI) addresses this challenge in a fundamentally different way. Instead of requiring sensors to remain perfectly synchronized during measurement, MASI allows each optical sensor to collect light on its own. Computational algorithms are then used to align and synchronize the data after it has been captured.

Zheng describes the concept as similar to several photographers observing the same scene. Rather than taking standard photographs, each one records raw information about the behavior of light waves. Software later combines these independent measurements into a single image with exceptionally high detail.

This computational approach to phase synchronization removes the need for rigid interferometric setups, which have historically prevented optical synthetic aperture imaging from being widely used in real-world applications.

How MASI Captures and Rebuilds Light

MASI differs from conventional optical systems in two major ways. First, it does not rely on lenses to focus light. Instead, it uses an array of coded sensors placed at different locations within a diffraction plane. Each sensor records diffraction patterns, which describe how light waves spread after interacting with an object. These patterns contain both amplitude and phase information that can later be recovered using computational methods.

After the complex wavefield from each sensor is reconstructed, the system digitally extends the data and mathematically propagates the wavefields back to the object plane. A computational phase synchronization process then adjusts the relative phase differences between sensors. This iterative process increases coherence and concentrates energy in the combined image.

This software-based optimization is the central advance. By aligning data computationally rather than physically, MASI overcomes the diffraction limit and other restrictions that have traditionally governed optical imaging.
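For readers unfamiliar with the numerical machinery, the "propagate the wavefields back to the object plane" step is typically done with the textbook angular-spectrum method. The Python sketch below shows only that generic propagation step; it is not the authors' MASI pipeline, and the wavelength, pixel pitch, and distance are illustrative values.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength_m, pixel_pitch_m, distance_m):
    """Propagate a complex wavefield by distance_m (negative = back-propagate)."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pixel_pitch_m)            # spatial frequencies (x)
    fy = np.fft.fftfreq(n, d=pixel_pitch_m)            # spatial frequencies (y)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength_m
    kz_sq = k**2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))               # drop evanescent components
    transfer = np.exp(1j * kz * distance_m)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Back-propagate a recovered sensor-plane field to an object 5 cm away
# (532 nm wavelength and 1.1 um pixel pitch are illustrative values).
sensor_field = np.exp(1j * np.random.default_rng(2).uniform(0, 2 * np.pi, (512, 512)))
object_field = angular_spectrum_propagate(sensor_field, 532e-9, 1.1e-6, -0.05)
```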

A Virtual Aperture With Fine Detail

The final result is a virtual synthetic aperture that is larger than any single sensor. This allows the system to achieve sub-micron resolution while still covering a wide field of view, all without using lenses.

Traditional lenses used in microscopes, cameras, and telescopes force engineers to balance resolution against working distance. To see finer details, lenses usually must be placed very close to the object, sometimes just millimeters away. That requirement can limit access, reduce flexibility, or make certain imaging tasks invasive.

MASI removes this constraint by capturing diffraction patterns from distances measured in centimeters and reconstructing images with sub-micron detail. Zheng compares this to being able to examine the fine ridges of a human hair from across a desk rather than holding it just inches from your eye.

Scalable Applications Across Many Fields

“The potential applications for MASI span multiple fields, from forensic science and medical diagnostics to industrial inspection and remote sensing,” said Zheng, “But what’s most exciting is the scalability – unlike traditional optics that become exponentially more complex as they grow, our system scales linearly, potentially enabling large arrays for applications we haven’t even imagined yet.”

The Multiscale Aperture Synthesis Imager represents a shift in how optical imaging systems can be designed. By separating data collection from synchronization and replacing bulky optical components with software-controlled sensor arrays, MASI shows how computation can overcome long-standing physical limits. The approach opens the door to imaging systems that are highly detailed, adaptable, and capable of scaling to sizes that were previously out of reach.