Wednesday, July 09, 2025

Hamamatsu webinar on SPAD and SPAD arrays

The video is a comprehensive webinar on Single Photon Avalanche Diodes (SPADs) and SPAD arrays, addressing their theory, applications, and recent advancements. It is led by experts from the New Jersey Institute of Technology and Hamamatsu, who discuss technical fundamentals, challenges, and innovative solutions for improving the performance of SPAD devices. Key applications highlighted include fluorescence lifetime imaging, remote gas sensing, quantum key distribution, and 3D radiation detection, showcasing SPADs' unique ability to timestamp events and enhance photon detection efficiency.

Monday, July 07, 2025

Images from the world's largest camera

Story in Nature news: https://www.nature.com/articles/d41586-025-01973-5

First images from world’s largest digital camera leave astronomers in awe

The Rubin Observatory in Chile will map the entire southern sky every three to four nights.

The Trifid Nebula (top right) and the Lagoon Nebula, in an image made from 678 separate exposures taken at the Vera C. Rubin Observatory in Chile. Credit: NSF-DOE Vera C. Rubin Observatory


The Vera C. Rubin Observatory in Chile has unveiled its first images, leaving astronomers in awe of the unprecedented capabilities of the observatory’s 3,200-megapixel digital camera — the largest in the world. The images were created from shots taken during a trial that started in April, when construction of the observatory’s Simonyi Survey Telescope was completed.

...

One image (pictured) shows the Trifid Nebula and the Lagoon Nebula, in a region of the Milky Way that is dense with ionized hydrogen and with young and still-forming stars. The picture was created from 678 separate exposures taken by the Simonyi Survey Telescope in just over 7 hours. Each exposure was monochromatic and taken with one of four filters; they were combined to give the rich colours of the final product. 

Friday, July 04, 2025

ETH Zurich and Empa develop perovskite image sensor

In a new paper in Nature, a team from ETH Zurich and Empa have demonstrated a new lead halide perovskite thin-film photodetector.

Tsarev et al., "Vertically stacked monolithic perovskite colour photodetectors," Nature (2025).
Open access paper link: https://www.nature.com/articles/s41586-025-09062-3 

News release: https://ethz.ch/en/news-und-veranstaltungen/eth-news/news/2025/06/medienmitteilung-bessere-bilder-fuer-mensch-und-maschine.html

Better images for humans and computers

Researchers at ETH Zurich and Empa have developed a new image sensor made of perovskite. This semiconductor material enables better colour reproduction and fewer image artefacts with less light. Perovskite sensors are also particularly well suited for machine vision. 

Image sensors are built into every smartphone and every digital camera. They distinguish colours in a similar way to the human eye. In our retinas, individual cone cells recognize red, green and blue (RGB). In image sensors, individual pixels absorb the corresponding wavelengths and convert them into electrical signals.

The vast majority of image sensors are made of silicon. This semiconductor material normally absorbs light over the entire visible spectrum. In order to manufacture it into RGB image sensors, the incoming light must be filtered. Pixels for red contain filters that block (and waste) green and blue, and so on. Each pixel in a silicon image sensor thus only receives around a third of the available light.

Maksym Kovalenko and his team, associated with both ETH Zurich and Empa, have proposed a novel solution that allows them to utilize every photon for colour recognition. For nearly a decade, they have been researching perovskite-based image sensors. In a new study published in the journal Nature, they show that the new technology works.

Stacked pixels
The basis for their innovative image sensor is lead halide perovskite. This crystalline material is also a semiconductor. In contrast to silicon, however, it is particularly easy to process – and its physical properties vary with its exact chemical composition. This is precisely what the researchers are taking advantage of in the manufacture of perovskite image sensors.

If the perovskite contains slightly more iodine ions, it absorbs red light. For green, the researchers add more bromine, for blue more chlorine – without any need for filters. The perovskite pixel layers remain transparent for the other wavelengths, allowing them to pass through. This means that the pixels for red, green and blue can be stacked on top of each other in the image sensor, unlike with silicon image sensors, where the pixels are arranged side-by-side.


Thanks to this arrangement, perovskite-based image sensors can, in theory, capture three times as much light as conventional image sensors of the same surface area, while also providing three times higher spatial resolution. Researchers from Kovalenko's team demonstrated this a few years ago, initially with individual oversized pixels made from millimetre-sized single crystals.
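The "three times" figures follow from simple accounting, sketched below in an idealized model of my own (not from the paper): assume a Bayer colour filter passes exactly one third of the incident light per pixel, while stacked layers together absorb all of it, and ignore real absorption and readout losses.

```python
# Idealized comparison of a filter-based (Bayer) silicon sensor and a
# stacked perovskite sensor. Simplifying assumptions (mine, not the
# paper's): Bayer filters pass exactly 1/3 of incident light per pixel,
# and the stacked R/G/B layers absorb all of it.
from fractions import Fraction

def light_fraction_bayer() -> Fraction:
    # each pixel's colour filter blocks two of the three RGB bands
    return Fraction(1, 3)

def light_fraction_stacked() -> Fraction:
    # R, G and B layers are stacked, so every photon lands in some layer
    return Fraction(1, 1)

def colour_samples_per_site(stacked: bool) -> int:
    # Bayer: one colour sample per site; stacked: full RGB at every site
    return 3 if stacked else 1

light_gain = light_fraction_stacked() / light_fraction_bayer()
sample_gain = colour_samples_per_site(True) // colour_samples_per_site(False)
print(light_gain, sample_gain)   # -> 3 3
```

The same accounting explains the resolution claim: with full RGB at every site, no spatial resolution is sacrificed to colour sampling.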

Now, for the first time, they have built two fully functional thin-film perovskite image sensors. “We are developing the technology further from a rough proof of principle to a dimension where it could actually be used,” says Kovalenko. A normal course of development for electronic components: “The first transistor consisted of a large piece of germanium with a couple of connections. Today, nearly 80 years later, transistors measure just a few nanometers.”

Perovskite image sensors are still in the early stages of development. With the two prototypes, however, the researchers were able to show that the technology can be miniaturized. Manufactured using thin-film processes common in industry, the sensors have reached their target size in the vertical dimension at least. “Of course, there is always potential for optimization,” notes co-author Sergii Yakunin from Kovalenko's team.

In numerous experiments, the researchers put the two prototypes, which differ in their readout technology, through their paces. The results confirm the advantages of perovskite: the sensors are more sensitive to light, more precise in colour reproduction, and can offer significantly higher resolution than conventional silicon technology. Because each pixel captures all the light, the design also eliminates some of the artefacts of digital photography, such as demosaicing artefacts and the moiré effect.

Machine vision for medicine and the environment
However, consumer digital cameras are not the only area of application for perovskite image sensors. Due to the material's properties, they are also particularly well suited to machine vision. The focus on red, green and blue is dictated by the human eye: image sensors work in RGB format because our eyes see in RGB. For specific machine-vision tasks, however, other wavelength ranges may be better suited, and often more than three are needed – so-called hyperspectral imaging.

Perovskite sensors have a decisive advantage in hyperspectral imaging: researchers can precisely control the wavelength range absorbed by each layer. “With perovskite, we can define a larger number of colour channels that are clearly separated from each other,” says Yakunin. Silicon, with its broad absorption spectrum, requires numerous filters and complex computer algorithms. “This is very impractical even with a relatively small number of colours,” Kovalenko sums up. Hyperspectral image sensors based on perovskite could be used, for example, in medical analysis or in automated monitoring of agriculture and the environment.

In the next step, the researchers want to further reduce the size and increase the number of pixels in their perovskite image sensors. Their two prototypes have pixel sizes between 0.5 and 1 millimeters. Pixels in commercial image sensors fall in the micrometer range (1 micrometre is 0.001 millimetre). “It should be possible to make even smaller pixels from perovskite than from silicon,” says Yakunin. The electronic connections and processing techniques need to be adapted for the new technology. “Today's readout electronics are optimized for silicon. But perovskite is a different semiconductor, with different material properties,” says Kovalenko. However, the researchers are convinced that these challenges can be overcome. 

Wednesday, July 02, 2025

STMicro releases image sensor solution for human presence detection

New technology delivers more than 20% power consumption reduction per day in addition to improved security and privacy

ST solution combines market leading Time-of-Flight (ToF) sensors and unique AI algorithms for a seamless user experience

Geneva, Switzerland, June 17, 2025 -- STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications, introduces a new Human Presence Detection (HPD) technology for laptops, PCs, monitors and accessories, delivering a power consumption reduction of more than 20% per day in addition to improved security and privacy. ST’s proprietary solution combines market-leading FlightSense™ Time-of-Flight (ToF) sensors with unique AI algorithms to deliver fast, hands-free Windows Hello authentication, along with a range of benefits such as longer battery life and user-privacy and wellness notifications.

“Building on the integration of ST FlightSense technology in more than 260 laptop and PC models launched in recent years, we look forward to seeing our new HPD solution help make devices more energy-efficient, secure, and user-friendly,” said Alexandre Balmefrezol, Executive Vice President and General Manager of the Imaging Sub-Group at STMicroelectronics. “As AI and sensor technology continue to advance, with greater integration of both hardware and software, we can expect to see even more sophisticated and intuitive ways of interacting with our devices, and ST is best positioned to continue to lead this market trend.”

“Since 2023, 3D sensing in consumer applications has gained new momentum, driven by the demand for better user experiences, safety, personal robotics, spatial computing, and enhanced photography and streaming. Time-of-Flight (ToF) technology is expanding beyond smartphones and tablets into drones, robots, AR/VR headsets, home projectors, and laptops. In 2024, ToF modules generated $2.2 billion in revenue, with projections reaching $3.8 billion by 2030 (9.5% CAGR). Compact and affordable, multizone dToF sensors are now emerging to enhance laptop experiences and enable new use cases,” said Florian Domengie, PhD, Principal Analyst, Imaging, at Yole Group.

The 5th generation turnkey ST solution
By integrating hardware and software components by design, the new ST solution is a readily deployable system based on the FlightSense 8x8 multizone Time-of-Flight sensor (VL53L8CP), complemented by proprietary AI-based algorithms enabling functionalities such as human presence detection, multi-person detection, and head orientation tracking. This integration creates a unique ready-to-use solution for OEMs that requires no additional development on their part.

This 5th generation of sensors also integrates advanced features such as gesture recognition, hand posture recognition, and wellness monitoring through human posture analysis. 

ST’s Human Presence Detection (HPD) solution enables enhanced features such as:
-- Adaptive Screen Dimming tracks head orientation to dim the screen when the user isn’t looking, reducing power consumption by more than 20%.
-- Walk-Away Lock & Wake-on-Attention automatically locks the device when the user leaves and wakes up upon return, improving security and convenience.
-- Multi-Person Detection alerts the user if someone is looking over their shoulder, enhancing privacy.

Tailored AI algorithm
STMicroelectronics has implemented a comprehensive AI-based development process that spans data collection, labeling, cleaning, AI training, and integration into a mass-market product. This effort relied on thousands of data logs from diverse sources, including contributions from workers who uploaded personal seating and movement data over several months, enabling the continuous refinement of the AI algorithms.

One significant achievement is the transformation of a proof of concept (PoC) into a mature solution capable of detecting a laptop user's head orientation using only 8x8 pixels of distance data. This success was driven by a meticulous development process that included four global data capture campaigns, 25 solution releases over the course of a year, and rigorous quality control of the AI training data. The approach also involved a tailored pre-processing method for VL53L8CP ranging data and the design of four specialized AI networks: Presence AI, HOR (Head Orientation) AI, Posture AI, and Hand Posture AI. Central to this accomplishment was the VL53L8CP ToF sensor itself, engineered to optimize the signal-to-noise ratio (SNR) per zone.

Enhanced user experience & privacy protection
The ToF sensor ensures complete user privacy without capturing images or relying on the camera, unlike previous versions of webcam-based solutions. 

Adaptive Screen Dimming:
-- Uses AI algorithms to analyze the user's head orientation. If the user is not looking at the screen, the system gradually dims the display to conserve power.
-- Extends battery life by minimizing energy consumption.
-- Optimizes for low power consumption with AI algorithms and can be seamlessly integrated into existing PC sensor hubs.

Walk-Away Lock (WAL) & Wake-on-Approach (WOA):
-- The ToF sensor automatically locks the PC when the user moves away and wakes it upon their return, eliminating the need for manual interaction.
-- This feature enhances security, safeguards sensitive data, and offers a seamless, hands-free user experience.
-- Advanced filtering algorithms help prevent false triggers, ensuring the system remains unaffected by casual passersby.

Multi-Person Detection (MPD):
-- The system detects multiple people in front of the screen and alerts the user if someone is looking over their shoulder.
-- Enhances privacy by preventing unauthorized viewing of sensitive information.
-- Advanced algorithms enable the system to differentiate between the primary user and other nearby individuals.
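The three features above can be pictured as a small policy driven by the sensor's AI outputs. The sketch below is purely illustrative (not ST's implementation: the class, state names, and debounce threshold are invented), showing how presence, head-orientation, and multi-person signals could map to lock/wake/dim/alert actions:

```python
# Hypothetical sketch of Walk-Away Lock / Wake-on-Approach / dimming /
# privacy-alert logic. Not ST's implementation: states, thresholds and
# inputs are invented; a real system consumes the VL53L8CP AI outputs.
from dataclasses import dataclass

@dataclass
class Frame:
    present: bool        # Presence AI: is a user in front of the device?
    looking: bool        # HOR AI: is the user's head oriented at the screen?
    others: int          # MPD: number of additional people detected

class HpdPolicy:
    def __init__(self, away_frames=30):
        self.away_frames = away_frames  # debounce to ignore casual passersby
        self._absent = 0

    def step(self, frame: Frame) -> list[str]:
        actions = []
        if not frame.present:
            self._absent += 1
            if self._absent == self.away_frames:
                actions.append("lock")            # Walk-Away Lock
        else:
            if self._absent >= self.away_frames:
                actions.append("wake")            # Wake-on-Approach
            self._absent = 0
            if not frame.looking:
                actions.append("dim")             # Adaptive Screen Dimming
            if frame.others > 0:
                actions.append("privacy_alert")   # Multi-Person Detection
        return actions

policy = HpdPolicy(away_frames=3)
trace = [Frame(True, True, 0), Frame(False, False, 0), Frame(False, False, 0),
         Frame(False, False, 0), Frame(True, True, 0)]
print([policy.step(f) for f in trace])
# -> [[], [], [], ['lock'], ['wake']]
```

The debounce counter is what implements the "unaffected by casual passersby" behaviour: a brief absence never reaches the lock threshold.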

Technical highlights: VL53L8CP: ST FlightSense 8x8 multizones ToF sensor. https://www.st.com/en/imaging-and-photonics-solutions/time-of-flight-sensors.html 
-- AI-based: compact, low-power algorithms suitable for integration into PC sensor hubs.
-- A complete ready-to-use solution includes hardware (ToF sensor) and software (AI algorithms).

Conference List - January 2026

SPIE Photonics West - 17-22 January 2026 - San Francisco, CA, USA - Website

62nd International Winter Meeting on Nuclear Physics - 19-23 January 2026 - Bormio, Italy - Website

The 39th International Conference on Micro Electro Mechanical Systems (IEEE MEMS 2026) - 25-29 January 2026 - Salzburg, Austria - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Monday, June 30, 2025

MIPI C-PHY v3.0 upgrades data rates

News: https://www.businesswire.com/news/home/20250507526963/en/MIPI-C-PHY-v3.0-Adds-New-Encoding-Option-to-Support-Next-Generation-of-Image-Sensor-Applications

The MIPI Alliance, an international organization that develops interface specifications for mobile and mobile-influenced industries, today announced a major update to its high-performance, low-power and low electromagnetic interference (EMI) C-PHY interface specification for connecting cameras and displays. Version 3.0 introduces support for an 18-Wirestate mode encoding option, increasing the maximum performance of a C-PHY lane by approximately 30 to 35 percent. This enhancement delivers up to 75 Gbps over a short channel, supporting the rapidly growing demands of ultra-high-resolution, high-fidelity image sensors.

The new, more efficient encoding option, 32b9s, transports 32 bits over nine symbols and maintains MIPI C-PHY’s industry-leading low EMI and low power properties. For camera applications, the new mode enables the use of lower symbol rates or lane counts for existing use cases, or higher throughput with current lane counts to support new use cases involving very high-end image sensors such as:

  •  Next-generation prosumer video content creation on smartphones, with high dynamic range (HDR), smart region-of-interest detection and advanced motion vector generation
  •  Machine vision quality-control systems that can detect the smallest of defects in fast-moving production lines
  •  Advanced driver assistance systems (ADAS) in automotive that can analyze the trajectory and behavior of fast-moving objects in the most challenging lighting conditions 
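Some quick arithmetic puts the new mapping in context (my calculation, taking the legacy C-PHY 16b7s mapping as the baseline). Note that the raw per-symbol gain exceeds the quoted ~30 to 35 percent lane gain, presumably because the denser 18-wirestate signaling cannot sustain the same maximum symbol rate:

```python
# Bits-per-symbol of C-PHY encoding options (baseline from earlier
# C-PHY versions: 16 bits mapped onto 7 three-phase symbols).
legacy_bits_per_symbol = 16 / 7          # ~2.29 bits/symbol
new_bits_per_symbol = 32 / 9             # 32b9s: ~3.56 bits/symbol

per_symbol_gain = new_bits_per_symbol / legacy_bits_per_symbol - 1
print(f"per-symbol efficiency gain: {per_symbol_gain:.1%}")   # ~55.6%

# Symbol rate implied by the 75 Gbps short-channel figure under 32b9s:
implied_symbol_rate = 75e9 / new_bits_per_symbol
print(f"implied symbol rate: {implied_symbol_rate / 1e9:.1f} Gsym/s")  # ~21.1
```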
C-PHY Capabilities and Performance Highlights
MIPI C-PHY supports the MIPI Camera Serial Interface 2 (MIPI CSI-2) and MIPI Display Serial Interface 2 (MIPI DSI-2) ecosystems in low-power, high-speed applications for the typical interconnect lengths found in mobile, PC compute and IoT applications. The specification:
  •  Provides high throughput, a minimized number of interconnect signals and superior power efficiency to connect cameras and displays to an application processor. This is due to efficient three-phase coding unique to C-PHY that reduces the number of system interconnects and minimizes electromagnetic emissions to sensitive RF receiver circuitry that is often co-located with C-PHY interfaces.
  •  Offers flexibility to reallocate lanes within a link because C-PHY functions as an embedded clock link
  •  Enables low-latency transitions between high-speed and low-power modes
  •  Includes an alternate low power (ALP) feature, which enables a link operation using only C-PHY’s high-speed signaling levels. An optional fast lane turnaround capability utilizes ALP and supports asymmetrical data rates, which enables implementers to optimize the transfer rates to system needs.
  •  Can coexist on the same device pins as MIPI D-PHY, so designers can develop dual-mode devices
Support for C-PHY v3.0 was included in the most recent MIPI CSI-2 v4.1 embedded camera and imaging interface specification, published in April 2024. To aid implementation, C-PHY v3.0 is backward-compatible with previous C-PHY versions.

“C-PHY is MIPI's ternary-based PHY for smartphones, IoT, drones, wearables, PCs, and automotive cameras and displays,” said Hezi Saar, chair of MIPI Alliance. “It supports low-cost, low-resolution image sensors with fewer wires and high-performance image sensors in excess of 100 megapixels. The updated specification enables forward-looking applications like cinematographic-grade video on smartphones, machine vision quality-control systems and ADAS applications in automotive.”

Forthcoming MIPI D-PHY Updates
Significant development work is continuing on MIPI's other primary shorter-reach physical layer, MIPI D-PHY. D-PHY v3.5, released in 2023, includes an embedded clock option for display applications, while the forthcoming v3.6 specification will expand embedded clock support for camera applications, targeting PC / client computing platforms. The next full version, v4.0, will further expand D-PHY’s embedded clock support for use in mobile and beyond-mobile machine vision applications, and further increase D-PHY’s data rate beyond its current 9 Gbps per lane.

Also, MIPI Alliance last year conducted a comprehensive channel signal analysis to document the longer channel lengths of both C- and D-PHY. The resulting member application note, "Application Note for MIPI C-PHY and MIPI D-PHY IT/Compute," demonstrated that both C-PHY and D-PHY can be used in larger end products, such as laptops and all-in-ones, with minimal or no changes to the specifications as originally deployed in mobile phones or tablets, or for even longer lengths by operating at a reduced bandwidth. 

Sunday, June 29, 2025

NIT announces SWIR line sensor

New SWIR InGaAs Line Scan Sensor NSC2301 for High-Speed Industrial Inspection

New Imaging Technologies (NIT) announces the release of its latest SWIR InGaAs line scan sensor, the NSC2301, designed for demanding industrial inspection applications. With advanced features and performance, this new sensor sets a benchmark in SWIR imaging for production environments.

Key features

  • 0.9µm to 1.7µm spectrum
  • 2048x1px @8µm pixel pitch
  • 90e- readout noise
  • Line rate >80kHz @ 2048 pixel resolution
  • Single stage TEC cooling
  • Configurable exposure times
  • ITR & IWR readout modes

The NSC2301 features a 2048 x 1 resolution with an 8 µm pixel pitch, delivering sharp, detailed line-scan imaging. The format is well suited to standard 1.1'' optical-format optics. The sensor supports line rates over 80 kHz, making it ideal for fast-moving inspection tasks. With configurable exposure times and both ITR (Integration Then Read) and IWR (Integration While Read) readout modes, it offers unmatched adaptability for various lighting and motion conditions.
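For a sense of scale, the quoted resolution and line rate imply the following raw throughput (my arithmetic, not a NIT figure; the bit depth is an assumption for illustration only):

```python
# Raw throughput implied by the NSC2301's quoted specs.
pixels_per_line = 2048
line_rate_hz = 80_000            # ">80 kHz" per the release; a lower bound

pixel_rate = pixels_per_line * line_rate_hz
print(pixel_rate)                # -> 163840000 pixels/s (~164 Mpix/s)

bits_per_pixel = 12              # assumed ADC depth, not a NIT spec
data_rate_gbps = pixel_rate * bits_per_pixel / 1e9
print(f"{data_rate_gbps:.2f} Gbit/s raw")   # ~1.97 Gbit/s
```

A rate in this range is consistent with the CameraLink-class interface mentioned for the companion camera below.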

Thanks to its three available gain modes, the NSC2301 provides a combination of high sensitivity (90 e- readout noise in high gain) and high dynamic range, crucial for imaging challenging materials or capturing subtle defects on high-speed production lines.

This new sensor expands NIT’s proprietary SWIR sensor portfolio and will be officially introduced at Laser World of Photonics 2025 in Munich.

Applications
Typical use cases for the NSC2301 include silicon wafer inspection, solar panel inspection, hot glass quality control, waste sorting, and optical coherence tomography – especially where high-resolution, high-speed line-scan imaging is critical.

Camera
Complementing the launch of the sensor, NIT will release LiSaSWIR v2, a high-performance camera integrating the NSC2301, in late summer. The camera will feature Smart CameraLink for fast data transmission and plug-and-play integration. With the NSC2301, NIT continues its mission of delivering cutting-edge SWIR imaging technology, developed and produced in-house.

Friday, June 27, 2025

TechInsights blog on Samsung's hybrid bond image sensor

Link: https://www.techinsights.com/blog/samsung-unveils-first-imager-featuring-hybrid-bond-technology

In a recent breakthrough discovery by TechInsights, the Samsung GM5 imager, initially thought to be a standard back-illuminated CIS, has been revealed to feature a pioneering hybrid bond design. This revelation comes after a year-long investigation following its integration into the Google Pixel 7 Pro.

The GM5 was initially cataloged as a regular back-illuminated CIS due to the absence of through-silicon vias (TSVs), but further analysis was prompted by its appearance in the Google Pixel 8 Pro, boasting remarkable resolution. This led to an exploratory cross-section revealing the presence of a hybrid bond, also known as Direct Bond Interconnect (DBI).

Wednesday, June 25, 2025

Webinar on image sensors for astronomy

The Future of Detectors in Astronomy


In this webinar, experts from ESO and Caeleste explore the current trends and future directions of detector technologies in astronomy. From ground-based observatories to cutting-edge instrumentation, our speakers share insights into how sensor innovations are shaping the way we observe the universe.

Speakers:
Derek Ives (ESO) – Head of Detector Systems at ESO
Elizabeth George (ESO) – Detector Physicist
Ajit Kalgi (Caeleste) – Director of Design Center
Jan Vermeiren (Caeleste) – Business Development Manager

Monday, June 23, 2025

Open Letter from Johannes Solhusvik, New President of the International Image Sensor Society (IISS)

Dear all, 
 
As announced by Junichi Nakamura during the IISW’25 banquet dinner, I have now taken over as President of the International Image Sensor Society (IISS). I will do my best to serve the imaging community and to ensure the continued success of our flagship event, the International Image Sensor Workshop (IISW).
 
The workshop objective is to provide an opportunity to exchange the latest progress in image sensor and related R&D activities amongst the top image sensor technologists in the world in an informal atmosphere. 
 
With the retirement of Junichi Nakamura from the Board, as well as of Vladimir Koifman, who also completed his service period, two very strong image sensor technologists have joined the IISS Board: Min-Woong Seo (Samsung) and Edoardo Charbon (EPFL). Please join me in congratulating them.
 
Finally, I would like to solicit suggestions and insights from the imaging community on how to improve the IISS, and to encourage you to start planning your paper submission for the next workshop, in Canada in 2027. More information will be provided soon at our website, www.imagesensors.org.
 
Best regards, 
 
Johannes Solhusvik 
President of IISS 
VP, Head of Sony Semiconductor Solutions Europe