Thursday, July 31, 2025

Conference List - February 2026

TIPP 2026 (International Conference on Technology & Instrumentation in Particle Physics) - 2-6 February 2026 - Mumbai, India - Website

IEEE International Solid-State Circuits Conference (ISSCC) - 15-19 February 2026 - San Francisco, California, USA - Website

SPIE Medical Imaging - 15-19 February 2026 - Vancouver, British Columbia, Canada - Website

innoLAE (Innovations in Large-Area Electronics) - 17-19 February 2026 - Cambridge, UK - Website

Wafer-Level Packaging Symposium - 17-19 February 2026 - Burlingame, California, USA - Website

IEEE Applied Sensing Conference - 23-26 February 2026 - Delhi, India - Website

MSS Parallel (BSD, Materials & Detectors, and Passive Sensors) Conference - 23-27 February 2026 - Orlando, Florida, USA - Website - (Clearances may be required)

22nd Annual IEEE International Conference on Sensing, Communication, and Networking (SECON) - 26-28 February 2026 - Abu Dhabi, United Arab Emirates - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Wednesday, July 30, 2025

Princeton Infrared Technologies closing business

From Princeton Infrared Technologies: https://www.princetonirtech.com/

Today marks a bittersweet milestone as we officially close the doors of Princeton Infrared Technologies.

It’s a moment of mixed emotions. Pride in what we’ve accomplished and gratitude for the people who made it possible. Over the past 13 years, we built cutting-edge products in the shortwave infrared and fueled innovation in unique applications.

To our incredible and inspiring employees: thank you! Your passion, resilience and brilliance made the impossible possible. You brought our vision to life and made PIRT what it was and how it will always be remembered.

To our customers, research collaborators, partners, and investors: your trust fueled our work and allowed us to push the boundaries of what’s possible in SWIR imaging. Together, we achieved breakthroughs, made discoveries, and moved the industry forward in ways that should bring us pride.

While it’s hard to see this chapter end, I’m deeply grateful for the journey we’ve taken together. I only wish we had more time to continue the work we’ve shared. This will be our final message as a company. Thank you for being such an important part of our story.

Here’s to new beginnings.

If there are any questions or you need any help, please contact:
Brian W. Hofmeister, Esq.
(P) (609) 890-1500
bwh@hofmeisterfirm.com

Monday, July 28, 2025

3D stacked edge-AI chip with CIS + deep neural network

In a recent preprint titled "J3DAI: A tiny DNN-Based Edge AI Accelerator for 3D-Stacked CMOS Image Sensor," Tain et al. write:

This paper presents J3DAI, a tiny deep neural network-based hardware accelerator for a 3-layer 3D-stacked CMOS image sensor featuring an artificial intelligence (AI) chip integrating a Deep Neural Network (DNN)-based accelerator. The DNN accelerator is designed to efficiently perform neural network tasks such as image classification and segmentation. This paper focuses on the digital system of J3DAI, highlighting its Performance-Power-Area (PPA) characteristics and showcasing advanced edge AI capabilities on a CMOS image sensor. To support hardware, we utilized the Aidge comprehensive software framework, which enables the programming of both the host processor and the DNN accelerator. Aidge supports post-training quantization, significantly reducing memory footprint and computational complexity, making it crucial for deploying models on resource-constrained hardware like J3DAI. Our experimental results demonstrate the versatility and efficiency of this innovative design in the field of edge AI, showcasing its potential to handle both simple and computationally intensive tasks. Future work will focus on further optimizing the architecture and exploring new applications to fully leverage the capabilities of J3DAI. As edge AI continues to grow in importance, innovations like J3DAI will play a crucial role in enabling real-time, low-latency, and energy-efficient AI processing at the edge.
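The post-training quantization step mentioned in the abstract is easy to picture with a toy example. Below is a minimal sketch of symmetric per-tensor int8 weight quantization in Python/NumPy; it illustrates the general technique only and is not the Aidge framework's actual API.

    import numpy as np

    def quantize_int8(w):
        """Symmetric per-tensor post-training quantization to int8.

        Generic illustration of the technique named in the abstract;
        not the Aidge framework's actual API."""
        scale = np.abs(w).max() / 127.0  # largest weight magnitude maps to 127
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.05, size=(64, 64)).astype(np.float32)  # toy layer weights

    q, s = quantize_int8(w)
    w_hat = dequantize(q, s)

    print(f"memory: {w.nbytes} B (fp32) -> {q.nbytes} B (int8)")  # 4x smaller
    print(f"max abs reconstruction error: {np.abs(w - w_hat).max():.6f}")

The 4x memory saving is exactly the fp32-to-int8 ratio; the reconstruction error is what calibration in a real post-training quantization flow works to keep small.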

Friday, July 25, 2025

Call for Papers: Image Sensors at ISSCC 2026

For IEEE ISSCC 2026, we are pleased to announce the creation of a new sub-committee dedicated to Image Sensors & Displays. The Call for Papers includes, but is not limited to, the following topics:

Image sensors • vision sensors, event-based sensors, and computer vision sensors • LIDAR, time-of-flight, depth sensing • machine learning and edge computing for imaging applications • display drivers, touch sensing • haptic displays • interactive display and sensing technologies for AR/VR

ISSCC is the foremost global forum for the presentation of advances in solid-state circuits and systems-on-a-chip. This is a great opportunity to increase the presence of image sensors at the Conference, and it offers engineers working at the cutting edge of IC design and application a unique chance to maintain technical currency and to network with leading experts.

For more information, contact the sub-committee chair, Bruce Rae (STMicroelectronics), via LinkedIn.


Wednesday, July 23, 2025

STMicro and Metalenz sign new licensing deal

STMicroelectronics and Metalenz have signed a license agreement to scale the production of metasurface optics for high-volume applications in consumer, automotive, and industrial markets.

This collaboration aims to meet the growing demand in sectors like smartphone biometrics, LIDAR, and robotics, as the metasurface optics market is projected to reach $2 billion by 2029.

ST will leverage its 300mm semiconductor and optics manufacturing platform to integrate Metalenz’s technology, ensuring greater precision and cost-efficiency at scale. Since 2022, ST has already shipped over 140 million units of metasurface optics and FlightSense modules using Metalenz IP.

Full press release below. https://newsroom.st.com/media-center/press-item.html/t4717.html 

STMicroelectronics and Metalenz Sign a New License Agreement to Accelerate Metasurface Optics Adoption
 
New license agreement enabling the proliferation of metasurface optics across high-volume consumer, automotive and industrial markets: from smartphone applications like biometrics, LIDAR and camera assist, to robotics, gesture recognition, or object detection.
 
The agreement broadens ST’s capability to use Metalenz IP to produce advanced metasurface optics while leveraging ST’s unique technology and manufacturing platform combining 300mm semiconductor and optics production, test and qualification.

 
STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications, and Metalenz, the pioneer of metasurface optics, announced a new license agreement. The agreement broadens ST’s capability to use Metalenz IP to produce advanced metasurface optics while leveraging ST’s unique technology and manufacturing platform combining 300mm semiconductor and optics production, test and qualification.
 
“STMicroelectronics is the unique supplier on the market offering a groundbreaking combination of optics and semiconductor technology. Since 2022, we have shipped well over 140 million metasurface optics and FlightSense™ modules using Metalenz IP. The new license agreement with Metalenz bolsters our technology leadership in consumer, industrial and automotive segments, and will enable new opportunities from smartphone applications like biometrics, LIDAR and camera assist, to robotics, gesture recognition, or object detection,” underlined Alexandre Balmefrezol, Executive Vice President and General Manager of STMicroelectronics’s Imaging Sub-Group. “Our unique model, processing optical technology in our 300mm semiconductor fab, ensures high precision, cost-effectiveness, and scalability to meet the requests of our customers for high-volume, complex applications.”
 
“Our agreement with STMicroelectronics has the potential to further fast-track the adoption of metasurfaces from their origins at Harvard to adoption by market leading consumer electronics companies,” said Rob Devlin, co-founder and CEO of Metalenz. “By enabling the shift of optics production into semiconductor manufacturing, this agreement has the possibility to further redefine the sensing ecosystem. As use cases for 3D sensing continue to expand, ST’s technology leadership in the market together with our IP leadership solidifies ST and Metalenz as the dominant forces in the emergent metasurface market we created.”
 
The new license agreement aims to address the growing market for metasurface optics, projected to reach $2B by 2029*, largely driven by the technology's role in emerging display and imaging applications. (*Yole Group, Optical Metasurfaces, 2024 report)
 
In 2022, metasurface technology from Metalenz, which spun out of Harvard and holds the exclusive license rights to the foundational Harvard metasurface patent portfolio, debuted with ST’s market-leading direct Time-of-Flight (dToF) FlightSense modules.
 
Replacing the traditional lens stacks and shifting to metasurface optics instead has improved the optical performance and temperature stability of the FlightSense modules while reducing their size and complexity.
 
The use of 300mm wafers ensures high precision and performance in optical applications, as well as the inherent scalability and robustness advantages of the semiconductor manufacturing process.

Monday, July 21, 2025

Turn your global shutter CMOS sensor into a LiDAR

In a paper titled "A LiDAR Camera with an Edge" in IOP Measurement Science and Technology journal, Oguh et al. describe an interesting approach of turning a conventional global shutter CMOS image sensor into a LiDAR. The key idea is neatly explained by these two sentences in the paper: "... we recognize a simple fact: if the shutter opens before the arrival time of the photons, the camera will see them. Otherwise, the camera will not. Thus, if the shutter jitter range remains the same and its distribution is uniform, the average intensity of the object in many camera frames will be uniquely associated with the arrival time of the photons."

Abstract: A novel light detection and ranging (LiDAR) design was proposed and demonstrated using just a conventional global shutter complementary metal-oxide-semiconductor (CMOS) camera. Utilizing the jittering rising edge of the camera shutter, the distance of an object can be obtained by averaging hundreds of camera frames. The intensity (brightness) of an object in the image is linearly proportional to the distance from the camera. The achieved time precision is about one nanosecond while the range can reach beyond 50 m using a modest setup. The new design offers a simple yet powerful alternative to existing LiDAR techniques.
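The averaging trick is simple enough to sanity-check numerically. Here is a minimal Monte Carlo sketch (not the authors' code) assuming the shutter's rising edge jitters uniformly over a hypothetical 400 ns range: the fraction of frames that capture the return pulse recovers the photon arrival time, and hence the distance.

    import numpy as np

    rng = np.random.default_rng(0)

    C = 3e8              # speed of light, m/s
    T_JITTER = 400e-9    # assumed uniform jitter range of the shutter edge (hypothetical)
    N_FRAMES = 100_000   # frames averaged per distance

    def mean_intensity(distance_m):
        """Fraction of frames that see the return pulse: the pulse arrives
        t = 2*d/c after the laser fires, and a frame captures it only if the
        jittered shutter edge opens before that time."""
        t_arrival = 2.0 * distance_m / C
        t_open = rng.uniform(0.0, T_JITTER, size=N_FRAMES)
        return float(np.mean(t_open < t_arrival))

    for d_true in (5.0, 20.0, 50.0):
        frac = mean_intensity(d_true)
        d_est = C * (frac * T_JITTER) / 2.0  # invert the linear intensity-to-time map
        print(f"true {d_true:5.1f} m -> estimated {d_est:5.2f} m")

The binomial noise on the averaged intensity shrinks as one over the square root of the number of frames, which is why the method averages many frames per depth estimate.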

Full paper (paywalled): https://iopscience.iop.org/article/10.1088/1361-6501/adcb5c

Sunday, July 20, 2025

Job Postings - Week of 20 July 2025


Fairchild Imaging - CMOS Image Sensor Characterization Engineer - San Jose, California, USA - Link

CERN - Design Engineer, Monolithic Pixel Sensors - Geneva, Switzerland - Link

Apple - Camera Image Sensor Digital Design Engineer Lead - Cupertino, California, USA - Link

Tsinghua University - Postdoctoral Positions in Experimental High Energy Physics - Beijing, China - Link

Concurrent Technologies - Sensor Scientist - Dayton, Ohio, USA - Link

CNRS-LPNHE - Postdoctoral position on Hyper-Kamiokande - Paris, France - Link

Attollo Engineering - Infrared Sensor and FPA Test Engineer - Camarillo, California, USA - Link

Imasenic - Image Sensor Internships - Barcelona, Spain - Link

Imperx - Director of Camera Development - Boca Raton, Florida, USA - Link

Friday, July 18, 2025

Samsung blog article on nanoprism pixels

News: https://semiconductor.samsung.com/news-events/tech-blog/nanoprism-optical-innovation-in-the-era-of-pixel-miniaturization/

Nanoprism: Optical Innovation in the Era of Pixel Miniaturization 

The evolution of mobile image sensors is ultimately linked to the advancement of pixel technology. The market's demand for high-quality images from ever smaller and thinner devices is becoming increasingly challenging to meet, making 'fine pixel' technology a core task in the mobile image sensor industry.
In this trend, Samsung System LSI continues to advance its technology, drawing on its experience in the field of small-pixel image sensors. The recently released mobile image sensor ISOCELL JNP is the industry's first to apply Nanoprism, pushing past the physical limitations of pixels.
Let's explore how Nanoprism, the first technology to apply Meta-Photonics to image sensors, was created and how it was implemented in ISOCELL JNP.
 
Smaller Pixels, More Light
Sensitivity in image sensors is a key factor in realizing clear and vivid images. Pixel technology has evolved over time to capture as much light as possible. Examples include the development from front-side illumination (FSI) to back-side illumination (BSI) and various technologies such as deep trench isolation (DTI).
In particular, technology has evolved in the direction of making pixels smaller and smaller to realize high-resolution images without increasing the size of smartphone camera modules. However, this has gradually reduced the sensitivity of unit pixels and caused image quality degradation due to crosstalk between pixels. As a result, a sharp decline in image quality in low-light environments was hard to avoid.
To solve this problem, Samsung introduced a Front Deep Trench Isolation (FDTI) structure that creates a physical barrier between pixels and also developed ISOCELL 2.0, which isolates even the color filters on top of the pixels. Furthermore, Samsung considered an approach that innovates the optical structure of the pixel itself, utilizing even the peripheral light that the existing structure could not accept. Nanoprism was born out of this consideration.
More details on the pixel technology of Samsung can be found at the link below.
Pixel Technology
 
Nanoprism: Refracting Light to Collect More
Nanoprism is a new technology first proposed in 2017, based on foundational Meta-Photonics technology that Samsung Advanced Institute of Technology (SAIT) has accumulated over many years. Unlike the meta-lens work that dominated Meta-Photonics research at the time, which minimized light dispersion, it takes the reverse approach of maximizing dispersion to separate colors. The Nanoprism is a meta-surface-based prism structure that can perform color separation.
So, what has changed from the existing pixel structure? In conventional microlens-based optics, the microlens and the color filter of each pixel are matched 1:1, so a pixel can only accept light of the color corresponding to its own filter. In other words, there was a physical limit: a pixel could only receive as much light as falls within its own defined area.

However, Nanoprism sets an optimized optical path so that light can be directed to each color-matched pixel by placing a nanoscale structure in the microlens position. Simply put, the amount of light received by each pixel increases, because light that was previously lost to color mismatch can be sent to adjacent pixels using refraction and dispersion. Nanoprism thus allows pixels to receive more light than the existing microlens structure, mitigating the sensitivity loss that was a concern with smaller pixels.
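To see on the back of an envelope why routing beats filtering, here is a toy photon-budget sketch. The 1/3 matched fraction is the idealized filter throughput; the rerouting efficiency is an assumed value, chosen only to show how a gain of the order of the 25% figure quoted below can arise.

    # Toy photon budget for one pixel (illustrative assumptions, not Samsung data):
    # with a 1:1 microlens + colour filter, light of the "wrong" colour over a
    # pixel is absorbed by the filter and lost. A colour-routing structure instead
    # redirects part of that light to the neighbouring pixel whose filter matches.

    match_fraction = 1 / 3   # idealized share of light matching a pixel's own filter
    routed_recovery = 0.125  # assumed share of mismatched light successfully rerouted

    classic_signal = match_fraction
    routed_signal = match_fraction + (1 - match_fraction) * routed_recovery

    print(f"relative sensitivity gain: {routed_signal / classic_signal - 1:.0%}")  # 25%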

 
Applying Nanoprism to Image Sensors
Commercializing Meta-Photonics technology in image sensors was a challenging task. Securing both customer reliability and technical completeness was vital. To operate properly as a product, not only did the Nanoprism structure have to be implemented, but dozens of performance indicators also had to be satisfied.
Samsung's relevant teams worked closely together, repeating the design-process-measurement loop, and worked to secure performance by considering various scenarios from the initial design stage and establishing a reliable verification procedure.
As its name suggests, Nanoprism was especially difficult to take from process development to mass production, because precise and complex nanometer (nm) structures had to be implemented in pixels. To bring the new technology to life, special techniques and methods were introduced, including CMP (Chemical Mechanical Polishing) and low-temperature processes for Nanoprism implementation, as well as TDMS (Thermal Desorption Mass Spectrometry) for image sensor production.
 
ISOCELL JNP Enables Brighter and Clearer Images
ISOCELL JNP with Nanoprism entered mass production this year and is incorporated in recent smartphones, contributing to an enhanced user experience. Because more light can be received without loss, it is possible to take bright and clear pictures, especially in challenging light conditions. In fact, ISOCELL JNP with Nanoprism has 25% improved sensitivity compared to the previous ISOCELL JN5 with the same specifications.


Of course, increasing the size of the image sensor can improve the overall performance of the camera, but in mobile there is a limit to increasing the size of the image sensor indefinitely due to design constraints such as the 'camera bump'. Samsung System LSI set out to break through this limitation head-on with Nanoprism. Even as pixels get smaller, this technology improves the sensitivity and color reproduction of each pixel, and it has been applied to ISOCELL JNP.
More details on the product can be found at the link below.

https://semiconductor.samsung.com/image-sensor/mobile-image-sensor/isocell-jnp/ 

The need for high-resolution imaging in the mobile market will continue. Accordingly, the trend of pixel miniaturization will also continue, and even as pixels become smaller, pixel technology will have to keep delivering high sensitivity, quantum efficiency, and noise reduction. Nanoprism is one such technology, targeting sensitivity, and Samsung aims to move towards further innovation in a direction that goes beyond existing physical limitations.
Building on this collaboration, continued cross-functional, cross-team efforts aim to explore new directions for next-generation image sensor technologies.

Wednesday, July 16, 2025

iToF webinar - onsemi's Hyperlux ID solution

Overcoming iToF Challenges: Enabling Precise Depth Sensing for Industrial and Commercial Innovation

Sunday, July 13, 2025

Single-photon computer vision workshop @ ICCV 2025

📸✨ Join us at ICCV 2025 for our workshop on Computer Vision with Single-Photon Cameras (CVSPC)!

🗓️  Sunday, Oct 19th, 8:15am-12:30pm at the Hawai'i Convention Center

🔗 Full Program: https://cvspc.cs.pdx.edu/

🗣️ Invited Speakers: Mohit Gupta, Matthew O'Toole, Dongyu Du, David Lindell, Akshat Dave

📍 Submit your poster and join the conversation! We welcome early ideas & in-progress work.

📝 Poster submission form: https://forms.gle/qQ7gFDwTDexy6e668

🏆 Stay tuned for a CVSPC competition announcement!

👥Organizers: Atul Ingle, Sotiris Nousias, Mel White, Mian Wei and Sacha Jungerman.


Single-photon cameras (SPCs) are an emerging class of camera technology with the potential to revolutionize the way today’s computer vision systems capture and process scene information, thanks to their extreme sensitivity, high speed capabilities, and increasing commercial availability.

They provide extreme dynamic range and long-range high-resolution 3D imaging, well beyond the capabilities of CMOS image sensors. SPCs thus facilitate various downstream computer vision applications such as low-cost, long-range cameras for self-driving cars and autonomous robots, high-sensitivity cameras for night photography and fluorescence-guided surgeries, and high dynamic range cameras for industrial machine vision and biomedical imaging applications.

The goal of this half-day workshop at ICCV 2025 is to showcase the myriad ways in which SPCs are used today in computer vision and inspire new applications. The workshop features experts on several key topics of interest, as well as a poster session to highlight in-progress work. 

We welcome submissions to CVSPC 2025 for the poster session, which we will host during the workshop. We invite posters presenting research relating to any aspect of single-photon imaging, such as those using or simulating SPADs, APDs, QIS, or other sensing methods that operate at or near the single-photon limit. Posters may be of new or prior work. If the content has been previously presented in another conference or publication, please note this in the abstract. We especially encourage submissions of in-progress work and student projects.

Please submit a 1-page abstract via this Google Form. These abstracts will be used for judging poster acceptance/rejection, and will not appear in any workshop proceedings. Please use any reasonable format that includes a title, list of authors and a short description of the poster. If this poster is associated with a previously accepted conference or journal paper please be sure to note this in the abstract and include a citation and/or a link to the project webpage.

Final poster size will be communicated to the authors upon acceptance.

Questions? Please email us at cvspc25 at gmail.

Poster Timeline:
📅 Submission Deadline: August 15, 2025
📢 Acceptance Notification: August 22, 2025 

Friday, July 11, 2025

X-FAB's new 180nm process for SPAD integration

News link: https://www.xfab.com/news/details/article/x-fab-expands-180nm-xh018-process-with-new-isolation-class-for-enhanced-spad-integration

X-FAB Expands 180nm XH018 Process with New Isolation Class for Enhanced SPAD Integration

NEWS – Tessenderlo, Belgium – Jun 19, 2025

New module enables more compact designs resulting in reduced chip size

X-FAB Silicon Foundries SE, the leading analog/mixed-signal and specialty foundry, has released a new isolation class within its 180nm XH018 semiconductor process. Designed to support more compact and efficient single-photon avalanche diode (SPAD) implementations, this new isolation class enables tighter functional integration, improved pixel density, and higher fill factor – resulting in smaller chip area.
SPADs are critical components in a wide range of emerging applications, including LiDAR for autonomous vehicles, 3D imaging, depth sensing in AR/VR systems, quantum communication and biomedical sensing. X-FAB already offers several SPAD devices built on its 180nm XH018 platform, with active areas ranging from 10µm to 20µm. This includes a near-infrared optimized diode for elevated photon detection probability (PDP) performance.

To enable high-resolution SPAD arrays, a compact pitch and elevated fill factor are essential. The newly released ISOMOS1, a 25V isolation class module, allows for significantly more compact transistor isolation structures, eliminating the need for an additional mask layer and aligning perfectly with X-FAB’s other SPAD variants.

The benefits of this enhancement are evident when comparing SPAD pixel layouts. In a typical 4x3 SPAD array with 10x10µm² optical areas, the adoption of the new isolation class enables a ~25% reduction in total area and boosts fill factor by ~30% compared to the previously available isolation class. With carefully optimized pixel design, even greater gains in area efficiency and detection sensitivity are achievable.
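The two percentages are mutually consistent: if the optical area per SPAD stays fixed while the total pixel area shrinks by about 25%, the fill factor must rise by roughly 1/0.75, i.e. about 33%. A quick sketch, using a hypothetical pixel footprint since the release does not give one:

    # Fill-factor consistency check. The 10x10 um^2 optical area and the ~25%
    # area reduction come from the release; the old pixel footprint below is a
    # hypothetical value chosen only to make the arithmetic concrete.

    optical_area_um2 = 10 * 10                      # per-SPAD optical area (from the release)
    old_pixel_area_um2 = 16 * 14                    # assumed old pixel footprint (hypothetical)
    new_pixel_area_um2 = 0.75 * old_pixel_area_um2  # ~25% total-area reduction (from the release)

    ff_old = optical_area_um2 / old_pixel_area_um2
    ff_new = optical_area_um2 / new_pixel_area_um2

    print(f"fill factor: {ff_old:.1%} -> {ff_new:.1%}")  # ~44.6% -> ~59.5%
    print(f"gain: {ff_new / ff_old - 1:.0%}")            # ~33%, matching the quoted ~30%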

X-FAB’s SPAD solution has been widely used in applications that require direct Time-of-Flight, such as smartphones, drones, and projectors. This new technological advancement directly benefits these applications, in which high-resolution sensing with a compact footprint is essential. It enables accurate depth sensing in multiple scenarios, including industrial distance detection and robotics sensing, for example, protecting the area around a robot to avoid collisions when it operates as a cobot. Beyond increasing performance and integration density, the new isolation class opens up opportunities for a broader range of SPAD-based systems requiring low-noise, high-speed single-photon detection within a compact footprint.

Heming Wei, X-FAB’s Technical Marketing Manager for Optoelectronics, explains: “The introduction of a new isolation class in XH018 marks an important step forward for SPAD integration. It enables tighter layouts and better performance, while allowing for more advanced sensing systems to be developed using our proven, reliable 180 nanometer platform.”

Models and PDKs, including the new ISOMOS1 module, are now available, supporting efficient evaluation and development of next-generation SPAD arrays. X-FAB will be exhibiting at Sensors Converge 2025 in Santa Clara, California (June 24–26) at booth #847, showcasing its latest sensor technologies. 

Example design of 4x3 SPAD pixel using new compact 25 V isolation class with ISOMOS1 module (right) and with previous module (left)

Wednesday, July 09, 2025

Hamamatsu webinar on SPAD and SPAD arrays

The video is a comprehensive webinar on Single Photon Avalanche Diodes (SPADs) and SPAD arrays, addressing their theory, applications, and recent advancements. It is led by experts from the New Jersey Institute of Technology and Hamamatsu, discussing technical fundamentals, challenges, and innovative solutions to improve the performance of SPAD devices. Key applications highlighted include fluorescence lifetime imaging, remote gas sensing, quantum key distribution, and 3D radiation detection, showcasing SPAD's unique ability to timestamp events and enhance photon detection efficiency.

Monday, July 07, 2025

Images from the world's largest camera

Story in Nature news: https://www.nature.com/articles/d41586-025-01973-5

First images from world’s largest digital camera leave astronomers in awe

The Rubin Observatory in Chile will map the entire southern sky every three to four nights.

The Trifid Nebula (top right) and the Lagoon Nebula, in an image made from 678 separate exposures taken at the Vera C. Rubin Observatory in Chile. Credit: NSF-DOE Vera C. Rubin Observatory

The Vera C. Rubin Observatory in Chile has unveiled its first images, leaving astronomers in awe of the unprecedented capabilities of the observatory’s 3,200-megapixel digital camera — the largest in the world. The images were created from shots taken during a trial that started in April, when construction of the observatory’s Simonyi Survey Telescope was completed.

...

One image (pictured) shows the Trifid Nebula and the Lagoon Nebula, in a region of the Milky Way that is dense with ionized hydrogen and with young and still-forming stars. The picture was created from 678 separate exposures taken by the Simonyi Survey Telescope in just over 7 hours. Each exposure was monochromatic and taken with one of four filters; they were combined to give the rich colours of the final product. 

Friday, July 04, 2025

ETH Zurich and Empa develop perovskite image sensor

In a new paper in Nature, a team from ETH Zurich and Empa have demonstrated a new lead halide perovskite thin-film photodetector.

Tsarev et al., "Vertically stacked monolithic perovskite colour photodetectors," Nature (2025)
Open access paper link: https://www.nature.com/articles/s41586-025-09062-3 

News release: https://ethz.ch/en/news-und-veranstaltungen/eth-news/news/2025/06/medienmitteilung-bessere-bilder-fuer-mensch-und-maschine.html

Better images for humans and computers

Researchers at ETH Zurich and Empa have developed a new image sensor made of perovskite. This semiconductor material enables better colour reproduction and fewer image artefacts with less light. Perovskite sensors are also particularly well suited for machine vision. 

Image sensors are built into every smartphone and every digital camera. They distinguish colours in a similar way to the human eye. In our retinas, individual cone cells recognize red, green and blue (RGB). In image sensors, individual pixels absorb the corresponding wavelengths and convert them into electrical signals.

The vast majority of image sensors are made of silicon. This semiconductor material normally absorbs light over the entire visible spectrum. In order to manufacture it into RGB image sensors, the incoming light must be filtered. Pixels for red contain filters that block (and waste) green and blue, and so on. Each pixel in a silicon image sensor thus only receives around a third of the available light.

Maksym Kovalenko and his team associated with both ETH Zurich and Empa have proposed a novel solution, which allows them to utilize every photon of light for colour recognition. For nearly a decade, they have been researching perovskite-based image sensors. In a new study published in the renowned journal Nature, they show: The new technology works.

Stacked pixels
The basis for their innovative image sensor is lead halide perovskite. This crystalline material is also a semiconductor. In contrast to silicon, however, it is particularly easy to process – and its physical properties vary with its exact chemical composition. This is precisely what the researchers are taking advantage of in the manufacture of perovskite image sensors.

If the perovskite contains slightly more iodine ions, it absorbs red light. For green, the researchers add more bromine, for blue more chlorine – without any need for filters. The perovskite pixel layers remain transparent for the other wavelengths, allowing them to pass through. This means that the pixels for red, green and blue can be stacked on top of each other in the image sensor, unlike with silicon image sensors, where the pixels are arranged side-by-side.


Thanks to this arrangement, perovskite-based image sensors can, in theory, capture three times as much light as conventional image sensors of the same surface area while also providing three times higher spatial resolution. Researchers from Kovalenko's team were able to demonstrate this a few years ago, initially with individual oversized pixels made of millimeter-large single crystals.
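The factor of three follows directly from the photon budget. A back-of-the-envelope sketch under idealized assumptions (perfect filters, lossless absorber layers):

    # Idealized photon budget per pixel site (illustrative only).

    # Filtered silicon: a colour filter passes roughly one of three bands, so a
    # pixel site uses ~1/3 of the incident visible light, and each colour is
    # sampled at only a fraction of the sites (Bayer: 1/4 R, 1/2 G, 1/4 B).
    filtered_utilization = 1 / 3

    # Stacked perovskite: red, green and blue layers sit on top of one another,
    # each absorbing its own band, so one site ideally uses all of the light and
    # samples all three colours at full spatial resolution.
    stacked_utilization = 1.0

    print(f"light utilization gain: {stacked_utilization / filtered_utilization:.0f}x")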

Now, for the first time, they have built two fully functional thin-film perovskite image sensors. “We are developing the technology further from a rough proof of principle to a dimension where it could actually be used,” says Kovalenko. A normal course of development for electronic components: “The first transistor consisted of a large piece of germanium with a couple of connections. Today, 60 years later, transistors measure just a few nanometers.”

Perovskite image sensors are still in the early stages of development. With the two prototypes, however, the researchers were able to show that the technology can be miniaturized. Manufactured using thin-film processes common in industry, the sensors have reached their target size in the vertical dimension at least. “Of course, there is always potential for optimization,” notes co-author Sergii Yakunin from Kovalenko's team.

In numerous experiments, the researchers put the two prototypes, which differ in their readout technology, through their paces. Their results prove the advantages of perovskite: The sensors are more sensitive to light, more precise in colour reproduction and can offer a significantly higher resolution than conventional silicon technology. The fact that each pixel captures all the light also eliminates some of the artifacts of digital photography, such as demosaicing and the moiré effect.

Machine vision for medicine and the environment
However, consumer digital cameras are not the only area of application for perovskite image sensors. Due to the material's properties, they are also particularly suitable for use in machine vision. The focus on red, green and blue is dictated by the human eye: these image sensors work in RGB format because our eyes see in RGB mode. For specific machine-vision tasks, however, it is often better to define other optimal wavelength ranges for the sensor to read, and often more than three are needed; this is known as hyperspectral imaging.

Perovskite sensors have a decisive advantage in hyperspectral imaging. Researchers can precisely control the wavelength range absorbed by each layer. “With perovskite, we can define a larger number of colour channels that are clearly separated from each other,” says Yakunin. Silicon, with its broad absorption spectrum, requires numerous filters and complex computer algorithms. “This is very impractical even with a relatively small number of colours,” Kovalenko sums up. Hyperspectral image sensors based on perovskite could be used in medical analysis or in automated monitoring of agriculture and the environment, for example.

In the next step, the researchers want to further reduce the size and increase the number of pixels in their perovskite image sensors. Their two prototypes have pixel sizes between 0.5 and 1 millimeters. Pixels in commercial image sensors fall in the micrometer range (1 micrometre is 0.001 millimetre). “It should be possible to make even smaller pixels from perovskite than from silicon,” says Yakunin. The electronic connections and processing techniques need to be adapted for the new technology. “Today's readout electronics are optimized for silicon. But perovskite is a different semiconductor, with different material properties,” says Kovalenko. However, the researchers are convinced that these challenges can be overcome. 

Wednesday, July 02, 2025

STMicro releases image sensor solution for human presence detection

New technology delivers more than 20% power consumption reduction per day in addition to improved security and privacy

ST solution combines market leading Time-of-Flight (ToF) sensors and unique AI algorithms for a seamless user experience

Geneva, Switzerland, June 17, 2025 -- STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications, introduces a new Human Presence Detection (HPD) technology for laptops, PCs, monitors and accessories, delivering more than 20% power consumption reduction per day in addition to improved security and privacy. ST’s proprietary solution combines market-leading FlightSense™ Time-of-Flight (ToF) sensors with unique AI algorithms to deliver hands-free, fast Windows Hello authentication, along with a range of benefits such as longer battery lifetime and user-privacy or wellness notifications.

“Building on the integration of ST FlightSense technology in more than 260 laptop and PC models launched in recent years, we look forward to seeing our new HPD solution contribute to making devices more energy-efficient, secure, and user-friendly,” said Alexandre Balmefrezol, Executive Vice President and General Manager of the Imaging Sub-Group at STMicroelectronics. “As AI and sensor technology continue to advance, with greater integration of both hardware and software, we can expect to see even more sophisticated and intuitive ways of interacting with our devices, and ST is best positioned to continue to lead this market trend.”

“Since 2023, 3D sensing in consumer applications has gained new momentum, driven by the demand for better user experiences, safety, personal robotics, spatial computing, and enhanced photography and streaming. Time-of-Flight (ToF) technology is expanding beyond smartphones and tablets into drones, robots, AR/VR headsets, home projectors, and laptops. In 2024, ToF modules generated $2.2 billion in revenue, with projections reaching $3.8 billion by 2030 (9.5% CAGR). Compact and affordable, multizone dToF sensors are now emerging to enhance laptop experiences and enable new use cases,” said Florian Domengie, PhD Principal Analyst, Imaging at Yole Group. 

The 5th generation turnkey ST solution
By integrating hardware and software components by design, the new ST solution is a readily deployable system based on the FlightSense 8x8 multizone Time-of-Flight sensor (VL53L8CP), complemented by proprietary AI-based algorithms enabling functionalities such as human presence detection, multi-person detection, and head orientation tracking. This integration creates a unique ready-to-use solution for OEMs that requires no additional development on their part.

This 5th generation of sensors also integrates advanced features such as gesture recognition, hand posture recognition, and wellness monitoring through human posture analysis. 

ST’s Human Presence Detection (HPD) solution enables enhanced features such as:
-- Adaptive Screen Dimming tracks head orientation to dim the screen when the user isn’t looking, reducing power consumption by more than 20%.
-- Walk-Away Lock & Wake-on-Attention automatically locks the device when the user leaves and wakes up upon return, improving security and convenience.
-- Multi-Person Detection alerts the user if someone is looking over their shoulder, enhancing privacy.

Tailored AI algorithm
STMicroelectronics has implemented a comprehensive AI-based development process that spans data collection, labeling, cleaning, AI training, and integration into a mass-market product. This effort relied on thousands of data-logs from diverse sources, including contributions from workers who uploaded personal seating and movement data over several months, enabling the continuous refinement of AI algorithms.

One significant achievement is the transformation of a Proof-of-Concept (PoC) into a mature solution capable of detecting a laptop user's head orientation using only 8x8 pixels of distance data. This success was driven by a meticulous development process that included four global data capture campaigns, 25 solution releases over the course of a year, and rigorous quality control of AI training data. The approach also involved a tailored pre-processing method for VL53L8CP ranging data, and the design of four specialized AI networks: Presence AI, HOR (Head Orientation) AI, Posture AI, and Hand Posture AI. Central to this accomplishment was the VL53L8CP ToF sensor, engineered to optimize the signal-to-noise ratio (SNR) per zone.
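ST has not published its pre-processing or network internals, so the sketch below is purely illustrative: a hypothetical normalization step that turns one 8x8 multizone distance frame into the kind of small feature grid a tiny head-orientation or presence classifier could consume. The frame values, the median-centring, and the function names are all assumptions.

    import numpy as np

    def preprocess_zones(dist_mm, valid):
        """Toy pre-processing of one 8x8 multizone ToF frame (hypothetical;
        ST's actual pipeline is proprietary). Centres distances on the scene
        median and scales to metres, so a small classifier sees a
        range-invariant depth relief; invalid zones contribute zero."""
        d = dist_mm.astype(np.float32)
        d[~valid] = np.nan
        relief = (d - np.nanmedian(d)) / 1000.0  # metres relative to scene median
        return np.nan_to_num(relief, nan=0.0)

    # A fabricated frame: background at ~900 mm with a closer head-like blob.
    frame = np.full((8, 8), 900, dtype=np.int32)
    frame[2:6, 3:5] = 550
    valid = frame > 0

    features = preprocess_zones(frame, valid)
    print(features.shape)  # (8, 8) input for a tiny presence/orientation network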

Enhanced user experience & privacy protection
The ToF sensor ensures complete user privacy without capturing images or relying on the camera, unlike earlier webcam-based solutions.

Adaptive Screen Dimming:
-- Uses AI algorithms to analyze the user's head orientation. If the user is not looking at the screen, the system gradually dims the display to conserve power.
-- Extends battery life by minimizing energy consumption.
-- Optimizes for low power consumption with AI algorithms and can be seamlessly integrated into existing PC sensor hubs.

Walk-Away Lock (WAL) & Wake-on-Approach (WOA):
-- The ToF sensor automatically locks the PC when the user moves away and wakes it upon their return, eliminating the need for manual interaction.
-- This feature enhances security, safeguards sensitive data, and offers a seamless, hands-free user experience.
-- Advanced filtering algorithms help prevent false triggers, ensuring the system remains unaffected by casual passersby.

Multi-Person Detection (MPD):
-- The system detects multiple people in front of the screen and alerts the user if someone is looking over their shoulder.
-- Enhances privacy by preventing unauthorized viewing of sensitive information.
-- Advanced algorithms enable the system to differentiate between the primary user and other nearby individuals.

Technical highlights: VL53L8CP: ST FlightSense 8x8 multizone ToF sensor. https://www.st.com/en/imaging-and-photonics-solutions/time-of-flight-sensors.html 
-- AI-based: compact, low-power algorithms suitable for integration into PC sensor hubs.
-- A complete ready-to-use solution includes hardware (ToF sensor) and software (AI algorithms).

Conference List - January 2026

SPIE Photonics West - 17-22 January 2026 - San Francisco, California, USA - Website

62nd International Winter Meeting on Nuclear Physics - 19-23 January 2026 - Bormio, Italy - Website

The 39th International Conference on Micro Electro Mechanical Systems (IEEE MEMS 2026) - 25-29 January 2026 - Salzburg, Austria - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index