Wednesday, January 15, 2025

Sony stacked CIS+iToF sensor (IEDM 2024)

Article (in German): https://www.pcgameshardware.de/Wissenschaft-Thema-237118/News/Fuer-Kameras-Sony-stapelt-Farb-Tiefensensor-keine-Verzerrungen-mehr-1462040/

English translation from Google Translate (with some light editing) below:

 

Depth sensors, which provide an image with spatial information, have become increasingly widespread in recent years. They can be used, for example, to create 3D scans or to apply targeted blur effects in post-processing - for example in smartphone cameras. In most cases, so-called ToF (Time of Flight) sensors are used, in which each pixel measures the time it takes for previously emitted infrared light to be reflected back.
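As a rough illustration of the indirect ToF principle (a generic sketch, not specific to any product): the sensor emits amplitude-modulated infrared light, and each pixel recovers distance from the phase shift of the return signal, d = c * phase / (4 * pi * f_mod).

import math

C = 299_792_458.0  # speed of light in m/s

def itof_depth(phase_rad: float, f_mod_hz: float) -> float:
    """Depth from the phase shift of a continuous-wave iToF measurement.

    The emitted IR light is amplitude-modulated at f_mod; the reflected
    signal arrives phase-shifted by 2*pi*f_mod * (round-trip time).
    Solving for the one-way distance gives d = c * phase / (4*pi*f_mod).
    """
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

# Example: a 90-degree phase shift at 100 MHz modulation
print(itof_depth(math.pi / 2, 100e6))  # ~0.375 m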

Not next to each other, but on top of each other
So far, however, there has been a problem when combining them with normal camera sensors. Either the ToF sensor sits next to the camera sensor: then the differing viewing angles produce occluded areas, especially at edges, and not every color value can be assigned a depth value. Or the ToF and color pixels sit on the same sensor and take space away from each other. In other words: the resolution is reduced.

However, Sony's camera division now claims to have found a way out. At the IEDM 2024 semiconductor conference, it presented a combination sensor in which the camera sensor sits directly above the depth sensor. This is made possible by a new material: normally the color pixels would be built on silicon, which would absorb the light broadband and thus shadow the depth pixels. Sony has apparently solved this problem with a new stack based on a broadband-transparent, organic photoconductive film. Visible wavelengths are absorbed by the color pixels, while infrared light passes through to the IR pixels of the ToF sensor below.



Above each ToF pixel, which occupies 4 µm, there are four RGB pixels of 1 µm each. In total, the reported resolution is 1004 x 756 pixels for the depth map and 4016 x 3024 pixels for the color image. At least in this respect, the prototype has apparently already reached a usable level.
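The reported numbers imply a clean 4:1 mapping per axis between color and depth pixels (4016/1004 = 3024/756 = 4). A toy sketch of that coordinate mapping (an illustration derived from the quoted resolutions, not Sony's implementation):

# Map a color-pixel coordinate to the ToF pixel beneath it.
# With a 4016x3024 color array over a 1004x756 depth array, the
# scale factor is 4 per axis, matching the 1 um color pitch
# sitting over the 4 um ToF pitch.
COLOR_W, COLOR_H = 4016, 3024
DEPTH_W, DEPTH_H = 1004, 756
SCALE = COLOR_W // DEPTH_W  # == 4

def depth_index(color_x: int, color_y: int) -> tuple[int, int]:
    """Return the (x, y) of the ToF pixel under a given color pixel."""
    return color_x // SCALE, color_y // SCALE

print(depth_index(4015, 3023))  # (1003, 755): last color pixel -> last depth pixel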

 

It is still unclear whether and when corresponding sensors will go into mass production. If Sony can indeed eliminate the existing problems, however, wide availability of such a sensor would offer numerous options. For example, it could simplify the creation of high-resolution 3D scans for games and movies and also make the data collection of robots significantly more reliable.

New opening in Prof. Guy Meynants' research group

KU Leuven

Electronics Design Engineer for Space Exploration (post-doctoral assistant) - Geel, Belgium - Link

Monday, January 13, 2025

Global shutter quantum dot image sensor for NIR imaging

L. Baudoin et al. of ISAE SUPAERO, University of Toulouse, Toulouse, France recently published a paper titled "Global Shutter and Charge Binning With Quantum Dots Photodiode Arrays for NIR Imaging" in the IEEE Journal of the Electron Devices Society.

Open access link: https://ieeexplore.ieee.org/document/10742005

Abstract: New applications like depth measurement or multispectral imaging require image sensors able to sense efficiently in the near-infrared and short-wave infrared, where silicon is weakly sensitive. Colloidal Quantum Dot (CQD) technology is an interesting candidate to address these new applications, as it allows image sensors with high quantum efficiency at the excitonic peak and high-resolution images. In this paper, we present an electrical model describing the electrical behavior of a designed and manufactured CQD photodiode. We use this model to explore a different architecture that collects holes instead of electrons. This architecture allows the charge collection inside the CQD thin film to be controlled through the electric field. This property makes it possible to implement global shutter functionality, to bin charges from several photodiodes, or to operate two physically interleaved photodiode arrays alternately with different types of pixel circuitry. These operating modes extend the capabilities of CQD image sensors in terms of applications.
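The paper's electrical model is considerably more detailed, but as a hedged baseline for what a "current vs. voltage, experiment vs. simulation" comparison starts from, here is the textbook one-diode photodiode model (all parameter values below are illustrative assumptions, not values from the paper):

import math

Q_OVER_K = 11604.5  # q/k in K/V
T = 300.0           # temperature in kelvin

def diode_current(v: float, i_sat: float = 1e-12, n: float = 1.5,
                  i_photo: float = 0.0) -> float:
    """One-diode model: dark exponential plus a photo-generated offset.

    I(V) = I_sat * (exp(V / (n * kT/q)) - 1) - I_photo
    Under reverse bias the exponential vanishes and the photocurrent
    dominates, which is the regime a photodiode pixel operates in.
    """
    vt = T / Q_OVER_K  # thermal voltage kT/q, ~25.9 mV at 300 K
    return i_sat * (math.exp(v / (n * vt)) - 1.0) - i_photo

# Dark vs. illuminated current at -0.5 V reverse bias
print(diode_current(-0.5))                # ~ -1e-12 A (dark)
print(diode_current(-0.5, i_photo=1e-9))  # ~ -1e-9 A (illuminated)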

Overview of the CQD thin film properties.


(a) Electron microscopy cross section of the characterized photodiode [16] (b) Scheme of the simulated device for electrons collection (c) CQD photodiode process flow [16].


(a) CQD photodiode absorption spectrum (b) Current vs Voltage CQD photodiode characteristic – experiment vs simulation.


(a) Scheme of the simulated device for holes collection (b) Band diagram of the photodiode varying the voltage of the bottom electrode (c) Physical phenomena explaining the photodiode current vs voltage characteristic.


Current vs voltage characteristics vs (a) CQD thin film hole mobility, (b) carrier lifetime, (c) CQD thin film electron affinity, (d) ETL electron affinity, (e) HTL electron affinity. Turn-on bias vs (f) CQD thin film hole mobility, (g) carrier lifetime, (h) CQD thin film electron affinity, (i) HTL electron affinity.


(a) Scheme of the multi-electrodes device working principle (b) Multi-electrodes photodiode architecture for holes collection control alternating collection on pixels A (top image) and collection on pixel B (bottom image).

Electrostatic potential and band diagrams explaining the carriers’ collection control for: (a) electrons collecting photodiodes (b) holes collecting photodiodes.


Current-Voltage characteristics explaining the carriers’ collection control for: (a) electrons collecting photodiodes (b) holes collecting photodiodes.


Electric field for photodiodes with central bottom electrode biased and various bottom electrodes’ widths.

Turn-on bias vs work functions for various electrodes’ size.

 

Current-Voltage characteristics of the collecting and non-collecting electrodes at various illuminations.

Thursday, January 09, 2025

imec SWIR quantum dot sensor

From optics.org news: https://optics.org/news/15/12/28

imec group launches SWIR sensor with lead-free quantum dot photodiodes

Technology is a step toward “greener” IR imagers for autonomous driving, medical diagnostics.

Last week, at the 2024 IEEE International Electron Devices Meeting, in San Francisco, imec, a research and innovation hub in nanoelectronics and digital technologies, and its partners in the Belgian project Q-COMIRSE, presented the first prototype shortwave infrared (SWIR) image sensor based on indium arsenide quantum dot photodiodes.

The sensor demonstrated successful 1390 nm imaging results, offering an environmentally-friendly alternative to first-generation quantum dots that contain lead, which limited their widespread manufacturing. The proof-of-concept is a critical step toward mass-market infrared imaging with low-cost and non-toxic photodiodes.

By detecting wavelengths beyond the visible spectrum, SWIR sensors can provide enhanced contrast and detail, as materials reflect differently in this range.

Face recognition and eye-tracking

These sensors can distinguish objects that appear identical to the human eye and penetrate through fog or mist, suiting them to applications such as face recognition or eye-tracking in consumer electronics, and autonomous vehicle navigation. While current versions are costly and limited to high-end applications, wafer-level integration promises broader accessibility.

Tuned for SWIR, quantum dots offer compact, low-cost absorbers, since integration into CMOS circuits and existing manufacturing processes is possible. However, first-generation QDs often contain toxic heavy metals such as lead and mercury, and the search for alternatives continues.

At 2024 IEDM, imec and its partners within the Q-COMIRSE project (Ghent University, QustomDot BV, ChemStream BV and ams OSRAM) presented a SWIR image sensor featuring a lead-free quantum dot alternative as absorber: indium arsenide (InAs). The proof-of-concept sensor, tested on both glass and silicon substrates, was the first of its kind to produce successful 1390 nm imaging results, imec announced.

Pawel Malinowski, imec technology manager and domain lead imaging, emphasized the significance of the achievement: “The first generation of QD sensors was crucial for showcasing the possibilities of this flexible platform. We are now working towards a second generation that will serve as a crucial enabler for the masses, aiming at cost-efficient manufacturing in an environmentally friendly way,” he said.

“With major industry players looking into quantum dots, we are committed to further refine this semiconductor technology towards accessible, compact, multifunctional image sensors with new functionalities.”

Stefano Guerrieri, Engineering Fellow at ams Osram, added, “Replacing lead in colloidal quantum dots with a more environmentally friendly material was our key goal in Q-COMIRSE. Our remarkable development work with imec and the others paves the way toward a low-cost and lead-free SWIR technology that, once mature for industrial products, could enable unprecedented applications in robotics, automotive, AR/VR and consumer electronics among others.”

Tuesday, January 07, 2025

Ubicept superpowers computer vision for a world in motion

Computer Vision Pioneer Ubicept to Showcase Breakthrough in Machine Perception at CES 2025


Game-Changing Photonic Computer Vision Technology Now Available for Rapid Prototyping Across Autonomous Vehicles, Robotics, AR/VR and More 


Las Vegas, January 7, 2025 – Ubicept, founded by computer vision experts from MIT, University of Wisconsin-Madison, and veterans of Google, Facebook, Skydio and Optimus Ride, today unveiled breakthrough technology that processes photon-level image data to enable unprecedented machine perception clarity and precision. The company will debut its innovation at CES 2025; demonstrations will show how the Ubicept approach handles challenging scenarios that stymie current computer vision systems, from autonomous vehicles navigating dark corners to robots operating in variable lighting conditions.

In their current state, cameras and image sensors cannot handle multiple challenging lighting conditions at the same time. Image capture in complex circumstances such as fast movement at night yields results that are too noisy or too blurry, severely limiting the potential of AI and other technologies that depend on computer vision clarity. Such systems also require different solutions to address different lighting conditions, resulting in disparate imaging systems with unreliable outputs. 

Now, Ubicept is bringing maximum visual perception to the computer vision ecosystem to make image sensors and cameras more powerful than ever before. The technology combines proprietary software with Single-Photon Avalanche Diode (SPAD) sensors – the same technology used in iPhone LiDAR systems – to create a unified imaging solution that eliminates the need for multiple specialized cameras. This enables:

  • Crystal-clear imaging in extreme low light without motion blur

  • High-speed motion capture without light streaking

  • Simultaneous handling of bright and dark areas in the same environment

  • Precise synchronization with lights (LEDs, lasers) for 3D applications


“Ubicept has developed the optimal imaging system,” said Sebastian Bauer, cofounder and CEO, Ubicept. “By processing individual photons, we're enabling machines to see with astounding clarity across all lighting conditions simultaneously, including pitch darkness, bright sunlight, fast motion, and 3D sensing.” 
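Ubicept's processing pipeline is proprietary, but a standard result from the single-photon imaging literature illustrates why per-photon data helps: from a stack of 1-bit SPAD frames, the underlying photon flux can be estimated without the saturation and blur compromises of a single long exposure. A minimal sketch (the Poisson detection model is the usual textbook assumption, not a description of Ubicept's algorithm):

import numpy as np

def estimate_flux(binary_frames: np.ndarray) -> np.ndarray:
    """Estimate per-pixel photon flux from a stack of 1-bit SPAD frames.

    binary_frames: array of shape (num_frames, H, W) with 0/1 entries,
    where 1 means at least one photon was detected in that exposure.
    For Poisson arrivals, P(detection) = 1 - exp(-lambda), so the
    maximum-likelihood flux per frame is lambda = -ln(1 - p_hat).
    """
    p_hat = binary_frames.mean(axis=0)
    p_hat = np.clip(p_hat, 0.0, 1.0 - 1e-6)  # avoid log(0) on saturated pixels
    return -np.log1p(-p_hat)

# 1000 simulated frames of a pixel with true flux 0.5 photons/frame
rng = np.random.default_rng(0)
frames = (rng.poisson(0.5, size=(1000, 1, 1)) > 0).astype(float)
print(estimate_flux(frames))  # close to 0.5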

Ubicept is making its technology available via its new FLARE (Flexible Light Acquisition and Representation Engine) Development Kit, combining a 1-megapixel, full-color SPAD sensor from a key hardware partner with Ubicept’s sensor-agnostic processing technologies. This development kit will enable camera companies, sensor makers, and computer vision engineers to seamlessly integrate Ubicept technology into autonomous vehicles, robotics, AR/VR, industrial automation, and surveillance applications.

In addition to SPAD sensors, Ubicept also seamlessly integrates with existing cameras and CMOS sensors, easing the transition to next generation technologies and enabling any camera to be transformed into an advanced imaging system. 

“The next big AI wave will be enabled by computer vision powered applications in the real world; however, today’s cameras were designed for humans, and using standard image data for computer vision systems won’t get us there,” said Tristan Swedish, cofounder and CTO, Ubicept. “Ubicept’s technology bridges that gap, enabling computer vision systems to achieve ideal perception. Our mission is to create a scalable, software-defined camera system that powers the future of computer vision.”

Ubicept is backed by Ubiquity Ventures, E14 Fund, Wisconsin Alumni Research Foundation, Convergent Ventures, and other investors, with a growing customer base that includes leading brands in the automotive and AR/VR industries. 

The new FLARE Development Kit is now available for pre-order; visit www.ubicept.com/preorder to sign up and learn more, or see Ubicept’s technology in action at CES, Las Vegas Convention Center, North Hall, booth 9467.

About Ubicept

Ubicept has pushed computer vision to the limits of physics. Developed out of MIT and the University of Wisconsin-Madison, Ubicept technology enables super perception for a world in motion by transforming photon image data into actionable information through advanced processing algorithms. By developing groundbreaking technology that optimizes imaging in low light, fast motion and high dynamic range environments, Ubicept enables industries to overcome the limitations of conventional vision systems, unlocking new possibilities for computer vision and beyond. Learn more at ubicept.com or follow Ubicept on LinkedIn.

Media Contact:

Dana Zemack

Scratch Marketing + Media for Ubicept

ubicept@scratchmm.com 

Monday, January 06, 2025

Video of the day: Oculi Smart Sensing


Visual Intelligence at the Edge, by Fred Brady

Fred is currently the Chief Technical Product Officer for Oculi, a Rochester-based start-up in the smart sensing field. He presented this talk in the Society for Imaging Science and Technology (IS&T)'s Rochester NY Chapter seminar series on 11 Dec. 2024.
Today's image sensors are inefficient for vision AI - they were developed for human presence detection. These solutions are slow, power-hungry, and expensive. We will discuss Oculi's Intellipixel solution, which puts smarts at the ‘edge of the edge’ to output just the data needed for AI.
00:00 - Introduction
00:38 - Visual Intelligence at the Edge
13:00 - Oculi Output Examples
18:32 - Face and Pupil Detection
20:42 - Wrap-up
22:00 - Discussion


Friday, January 03, 2025

Another 2025 CES innovation award: Lidwave's 4D LiDAR sensor

From: https://www.einpresswire.com/article/768427169/lidwave-s-odem-4d-lidar-sensor-receives-the-prestigious-ces-innovation-award-2025

Lidwave's technology receives acknowledgment once more, this time in the form of a CES Innovation Award for its Odem 4D LiDAR sensor

JERUSALEM, ISRAEL, December 12, 2024 /EINPresswire.com/ -- Lidwave, a pioneer in the field of coherent LiDAR, is proud to share that its revolutionary Odem 4D Sensor has been recognized as an Honoree in the CES Innovation Awards 2025 in the Imaging category. “This recognition underscores Odem’s potential to redefine machine perception across industries, enabling smarter, more efficient systems, powered by Lidwave's innovative Finite Coherent Ranging (FCR™) technology,” said Yehuda Vidal, Lidwave’s CEO.

At its core, Odem is a 4D coherent LiDAR that delivers both high-resolution 3D spatial data and instantaneous velocity information at the pixel level. This ability to capture an object’s location and motion in real time transforms how machines perceive and respond to their surroundings. From autonomous vehicles and robotics to industrial automation and smart infrastructure, Odem empowers systems with the precision and speed required for decision-making in dynamic environments.
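Lidwave does not publish the details of FCR; the sketch below instead uses textbook triangular-chirp FMCW, the most common coherent ranging scheme, purely to illustrate how a coherent LiDAR extracts both range and per-pixel radial velocity from beat frequencies (the wavelength and example numbers are assumptions, not Lidwave specs):

C = 299_792_458.0     # speed of light, m/s
WAVELENGTH = 1.55e-6  # assumed telecom-band operating wavelength

def fmcw_range_velocity(f_beat_up: float, f_beat_down: float,
                        chirp_slope_hz_per_s: float) -> tuple[float, float]:
    """Range and radial velocity from triangular-FMCW beat frequencies.

    During the up-chirp the Doppler shift f_d subtracts from the
    range-induced beat frequency; during the down-chirp it adds:
        f_up = f_range - f_d,   f_down = f_range + f_d
    so f_range = (f_up + f_down) / 2 and f_d = (f_down - f_up) / 2.
    Range follows from R = c * f_range / (2 * slope), and radial
    velocity from v = f_d * wavelength / 2.
    """
    f_range = 0.5 * (f_beat_up + f_beat_down)
    f_doppler = 0.5 * (f_beat_down - f_beat_up)
    distance = C * f_range / (2.0 * chirp_slope_hz_per_s)
    velocity = f_doppler * WAVELENGTH / 2.0
    return distance, velocity

# Beat tones of 60.2 and 73.1 MHz with a 100 GHz/ms chirp slope:
# a target at roughly 100 m moving at about 5 m/s along the beam.
print(fmcw_range_velocity(60.2e6, 73.1e6, 1e14))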
One of Odem’s standout features is its software-defined architecture, which allows users to adapt key parameters – such as field of view, resolution, detection range, and frame rate – to their needs, with no change to the hardware. This flexibility enables industries to test and optimize Odem for their unique applications, making it a powerful tool for innovation across diverse sectors. Whether streamlining factory operations, enhancing transportation systems, or advancing next-generation robotics, Odem is designed to meet the evolving needs of its users.

Beyond its exceptional performance in both short- and long-range applications, Odem represents a breakthrough in scalability and affordability. By integrating a complete LiDAR system - including lasers, amplifiers, receivers, and optical routing - onto a single chip, Lidwave has made high-performance sensing technology accessible at scale. This achievement addresses one of the industry’s most critical challenges, ensuring that advanced LiDAR solutions can be deployed widely and cost-effectively.
Reliability is at the heart of Odem’s design. Built to perform under all conditions—including total darkness, glaring sunlight, fog, and dust—Odem ensures consistent and accurate detection in even the most challenging scenarios. Its robustness makes it an indispensable solution for demanding applications where precision and dependability are essential.

“We are thrilled to receive this recognition for Odem,” said Yehuda Vidal, CEO of Lidwave. “This sensor combines advanced capabilities with unmatched scalability and reliability. Its ability to provide detailed spatial and motion data in real time, while being scalable and cost-effective, is a game-changer for industries worldwide.”

“This award highlights Odem’s transformative impact,” added Dr. Yossi Kabessa, Lidwave’s CTO. “With its 4D data capabilities and flexibility, Odem empowers industries to adopt cutting-edge sensing solutions that drive innovation and progress.”

“This acknowledgment joins the feedback we get from our partners in various fields,” said Nitsan Avivi, Head of Business Development at Lidwave, “and makes it clear that Odem will have an enormous impact on machine vision. Its unique capabilities and scalability are paving the way for new use cases, expanding the horizons of LiDAR applications.”


Wednesday, January 01, 2025

SOLiDVUE wins CES 2025 innovation award for solid-state LiDAR

From PR Newswire: https://www.prnewswire.com/news-releases/solidvue-sets-new-standards-with-ces-innovation-award-winning-high-resolution-lidar-sensor-ic-sl-2-2-302329805.html

SOLiDVUE Sets New Standards with CES Innovation Award-Winning High-Resolution LiDAR Sensor IC, 'SL-2.2'

SEOUL, South Korea, Dec. 16, 2024 /PRNewswire/ -- SOLiDVUE, Korea's only enterprise specializing in CMOS LiDAR (Light Detection and Ranging) sensor IC development, announced that its groundbreaking single-chip LiDAR sensor IC, the SL-2.2, boasting a world-first 400x128 resolution, has been honored with the CES Innovation Award® at CES 2025.

LiDAR is a next-generation core component for autonomous vehicles and robotics, capable of precisely measuring the shape and distance of objects to output 3D images. This technology enables accurate object recognition for applications such as autonomous vehicles, drones, robots, security cameras, and traffic management systems.

Established in 2020, SOLiDVUE focuses on designing SoCs (System-on-Chip) for LiDAR sensors, which form the core of a LiDAR system. "While mechanical LiDAR has been the standard, the latest trend is to replace it with semiconductor chips," said co-CEO, Jung-Hoon Chun. SOLiDVUE is the only company in South Korea to have developed LiDAR sensors that completely replace mechanical components with semiconductor technology.

SOLiDVUE's LiDAR sensors are compatible with solid-state LiDAR systems, which are 10 times smaller and 100 times cheaper than traditional mechanical LiDAR systems. "Our sensors offer an ultra-compact chip solution compared to competitors, but their performance is not just on par—it's superior," co-CEO Jaehyuk Choi stated confidently.

The company's proprietary technologies, such as CMOS SPAD (Single Photon Avalanche Diode) technology, single-chip sensor architecture, and image signal processor, underpin its competitive edge. CMOS SPAD technology enhances measurement accuracy by detecting sparse photons down to the single-photon level. Globally, only a few companies, including SOLiDVUE, possess such single-chip sensor technology.

SOLiDVUE's technological prowess has been repeatedly acknowledged at the IEEE ISSCC (International Solid-State Circuits Conference), marking a remarkable achievement for a Korean fabless company. Furthermore, the recent CES Innovation Award has once again affirmed its prominence in the LiDAR sensor industry.

SOLiDVUE's award-winning SL-2.2 pushes the boundaries of resolution with its ability to output high-resolution 3D images up to 400x128 pixels, surpassing the 200x116 resolution of existing products. The SL-2.2 can detect objects up to 200 meters away with an exceptional 99.9% accuracy.
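SOLiDVUE's on-chip signal processing is proprietary, but the basic direct-ToF relation behind any SPAD LiDAR is simple: histogram photon arrival times over many laser pulses, find the peak, and convert time to distance. A minimal generic sketch (the bin width and counts below are illustrative assumptions):

C = 299_792_458.0  # speed of light, m/s

def histogram_to_distance(hist: list[int], bin_width_s: float) -> float:
    """Distance from a direct-ToF SPAD timing histogram.

    Each SPAD pixel time-stamps photon arrivals; a histogram over many
    laser pulses peaks at the round-trip time to the target. With the
    peak in bin k, t = (k + 0.5) * bin_width and d = c * t / 2.
    """
    k = max(range(len(hist)), key=hist.__getitem__)  # index of the peak bin
    t_round_trip = (k + 0.5) * bin_width_s
    return C * t_round_trip / 2.0

# A 200 m target returns after ~1.33 us; with 1 ns bins that is bin ~1334
hist = [10] * 2000   # flat background counts
hist[1334] = 500     # signal peak
print(histogram_to_distance(hist, 1e-9))  # ~200 m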

As a single-chip sensor, the SL-2.2 is fabricated using standard CMOS semiconductor processes and benefits from SOLiDVUE's proprietary ultra-miniaturization technology. The sensor core measures just 0.9cm x 0.9cm and is packaged in a compact 1.4cm x 1.4cm BGA-type package, enabling seamless integration into various LiDAR systems. Its single-chip design reduces power consumption, enhancing energy efficiency and ensuring high reliability.

The SL-2.2 is a successor to the company's first product, the SV-110, which features a 200x116 resolution and a 128-meter detection range. The SL-2.2 is scheduled for an official release in 2025 and is expected to play a pivotal role in advancing LiDAR technology across applications such as autonomous vehicles, robotics, drones, and smart cities.

Co-CEO Jaehyuk Choi emphasized, "At SOLiDVUE, we are actively collaborating with numerous domestic and international companies and research institutions to push the boundaries of LiDAR technology. With the rapidly growing demand for LiDAR, we are committed to continuously expanding our product lineup to meet diverse market needs. Our mission is to lead the LiDAR industry by delivering innovative solutions that address the evolving challenges of tomorrow."



Monday, December 30, 2024

MagikEye to present 5cm to 5m depth sensing solution at CES

From Businesswire: https://www.businesswire.com/news/home/20241218853081/en/MagikEye-Brings-%E2%80%9CSeeing-in-3D-from-Near-to-Far%E2%80%9D-to-CES-2025-Now-Enabling-Depth-Sensing-from-5cm-to-5m

MagikEye Brings “Seeing in 3D from Near to Far” to CES 2025: Now Enabling Depth Sensing from 5cm to 5m

STAMFORD, Conn.--(BUSINESS WIRE)--MagikEye Inc. (www.magik-eye.com), a leader in advanced 3D depth sensing technology, is pleased to offer private demonstrations of its latest Invertible Light™ Technology (ILT) advancements at the 2025 Consumer Electronics Show (CES) in Las Vegas, NV. Building on a mission to provide the “Eyes of AI,” the newest iteration of ILT can measure depth from as close as 5cm out to 5m. This expanded range can transform how developers take advantage of 3D vision in their products, allowing devices to see in 3D from near to far.

By leveraging a simple, low-cost projector and a standard CMOS image sensor, MagikEye’s ILT solution delivers 3D with unparalleled cost and power savings. A small amount of software running on any low-power microcontroller enables a broad spectrum of applications—ranging from consumer electronics and robotics to AR/VR, industrial automation, and transportation—without the cost of specialized silicon or sensors. With the newest version of ILT, manufacturers can inexpensively add depth capabilities to more devices, increasing product versatility and improving product performance.
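MagikEye has not published the internals of ILT, but any projector-plus-camera depth system ultimately rests on triangulation, and the 5cm-to-5m span gives a feel for the disparity range involved. A generic sketch (the focal length and baseline below are assumptions for illustration, not MagikEye parameters):

def triangulate_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from projector-camera triangulation (generic, not ILT-specific).

    A projector casts a known pattern; the camera sees each feature
    displaced by a disparity that shrinks with distance:
        Z = f * b / d
    with focal length f in pixels, baseline b in meters, disparity d in pixels.
    """
    return f_px * baseline_m / disparity_px

f_px, baseline = 800.0, 0.05  # assumed optics: 800 px focal length, 5 cm baseline
print(triangulate_depth(f_px, baseline, 800.0))  # 0.05 m: near end of the range
print(triangulate_depth(f_px, baseline, 8.0))    # 5.0 m: far end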

“This new generation of ILT redefines what’s possible in 3D sensing,” said Takeo Miyazawa, Founder & CEO of MagikEye. “By bringing the near-field range down to 5cm, we enable a richer, more immersive interaction between devices and their environment, while providing more complete data for AI applications. From tiny consumer gadgets to large-scale robotic systems, our technology scales effortlessly, helping our customers drive innovation, enhance user experiences, and unlock new market opportunities.”

During CES 2025, MagikEye invites interested partners, product designers, and customers to arrange a private demonstration of the enhanced ILT technology. These one-on-one sessions will provide an in-depth look at how to seamlessly integrate ILT into existing hardware and software platforms and explore its potential across a multitude of applications.

Monday, December 23, 2024

Yole Webinar on Status of CIS Industry in 2024

Yole recently held a webinar on the latest trends and emerging applications in the CMOS image sensor market.

It is still available to view with a free registration at this link: https://attendee.gotowebinar.com/register/3603702579220268374?source=Yole+webinar+page

More information:

https://www.yolegroup.com/event/trade-shows-conferences/webinar-the-cmos-image-sensor-industry/


The CMOS image sensor (CIS) market, which is projected to grow at a 4.7% compound annual growth rate from 2023 to 2029 to reach $28.6 billion, is undergoing a transformation. Declining smartphone sales, along with weakening demand for devices such as laptops and tablet computers, are key challenges to growth.
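For context, the quoted endpoint and growth rate imply the 2023 baseline (a derived figure, not one stated by Yole):

# A market reaching $28.6B in 2029 after six years of 4.7% annual
# growth started at 28.6 / 1.047**6.
end_2029 = 28.6  # $B, forecast
cagr = 0.047
baseline_2023 = end_2029 / (1 + cagr) ** 6
print(round(baseline_2023, 1))  # ~21.7 ($B)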

We forecast that automotive cameras and other emerging applications will instead be the key drivers of future CIS market growth. Technology innovations such as triple-stacked architectures and single-photon avalanche diode-based sensors are improving performance, enabling new applications in low light and 3D imaging, for example, while high dynamic range and LED flicker mitigation are key requirements for automotive image sensors.

This webinar, co-organized with the Edge AI + Vision alliance, will discuss how CIS suppliers are focusing on enhancing sensor capabilities, along with shifting their product mixes towards higher potential value markets. Our experts will also explore how emerging sensing modalities such as neuromorphic, optical metasurfaces, short-wave infrared and multispectral imaging will supplement, and in some cases supplant, CMOS image sensors in the future.