Wednesday, November 20, 2024

Single Photon Avalanche Diodes - Buyer's Guide

Photoniques magazine published an article titled "Single photon avalanche diodes" by Angelo Gulinatti (Politecnico di Milano).

Abstract: Twenty years ago the detection of single photons was little more than a scientific curiosity reserved for a few specialists. Today it is a flourishing field with an ecosystem that extends from university laboratories to large semiconductor manufacturers. This paradigm shift has been stimulated by the emergence of critical applications that rely on single photon detection, and by technical progress in the detector field. The single photon avalanche diode has unquestionably played a major role in this process.

Full article [free access]: https://www.photoniques.com/articles/photon/pdf/2024/02/photon2024125p63.pdf

 


Figure 1: Fluorescence lifetime measured by time-correlated single-photon counting (TCSPC). The sample is excited by a pulsed laser and the delay between the excitation pulse and the emitted photon is measured by a precision clock. By repeating multiple times, it is possible to build a histogram of the delays that reproduces the shape of the optical signal.
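The measurement loop described in the caption can be sketched as a toy Monte Carlo simulation (the mono-exponential decay model, the detection probability, and all variable names below are illustrative assumptions, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fluorescence decay: mono-exponential with a 3 ns lifetime.
TAU_NS = 3.0
N_PULSES = 100_000        # repeated laser excitation pulses
BIN_WIDTH_NS = 0.05       # timing resolution of the precision clock

# Classic TCSPC records at most one photon per excitation pulse,
# at a low detection probability, to avoid pile-up distortion.
detected = rng.random(N_PULSES) < 0.02
delays = rng.exponential(TAU_NS, size=int(detected.sum()))

# Histogramming the delays over many pulses reproduces the
# shape of the optical decay.
bins = np.arange(0.0, 20.0 + BIN_WIDTH_NS, BIN_WIDTH_NS)
hist, _ = np.histogram(delays, bins=bins)

# Crude lifetime estimate: mean delay of the detected photons.
tau_est = delays.mean()
print(f"estimated lifetime ~ {tau_est:.2f} ns")  # should be close to 3 ns
```

In a real instrument the histogram would also contain an instrument response function and background counts; the sketch keeps only the repeat-and-histogram principle from the caption.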



Figure 3: By changing the operating conditions or the design parameters, it is possible to improve some performance metrics at the expense of others.



Conference List - February 2025

Electronic Imaging (EI 2025) - 2-6 Feb 2025 - Burlingame, California - Website

MSS Detectors and Materials, and Passive Sensors Conference (clearance may be required) - 10-14 Feb 2025 - Orlando, Florida, USA - Website

IEEE International Solid-State Circuits Conference (ISSCC) - 16-20 Feb 2025 - San Francisco, California, USA - Website

SPIE Medical Imaging - 16-20 Feb 2025 - San Diego, California, USA - Website

innoLAE (Innovations in Large-Area Electronics) - 17-19 Feb 2025 - Cambridge, UK - Website

Wafer-Level Packaging Symposium - 18-20 Feb 2025 - San Francisco, California, USA - Website

Return to Conference List Index  


Monday, November 18, 2024

IEDM 2024 Program is Live

The 70th annual IEEE International Electron Devices Meeting (IEDM) will be held December 7-11, 2024 in San Francisco, California. Session #41 is on the topic of "Advanced Image Sensors":

https://iedm24.mapyourshow.com/8_0/sessions/session-details.cfm?scheduleid=58

Title: 41 | ODI | Advanced Image Sensors
Description:
This session includes papers on the latest image sensor technology developments. Noteworthy this year are the multiple approaches to layer stacking with new features. The first stack involves a dedicated AI image-processing layer based on neural networks for a 50 Mpix sensor. The second shows progress on small-pixel noise with a 2-layer pixel and an additional intermediate interconnection. The third stack is very innovative, with an organic pixel on top of a conventional Si-based iToF pixel for a true single-device RGB-Z sensor. All three papers are authored by Sony Semiconductor. InAs QD image sensors are also reported for the first time as a lead-free option for SWIR imaging, by both IMEC and Sony Semiconductor. In addition, progress in conventional IR global shutter with a new MIM capacitor and optimized DTI filling for crosstalk and QE improvement is presented by Samsung Semiconductor.

Wednesday, December 11, 2024 - 01:35 PM
41-1 | A Novel 1/1.3-inch 50 Megapixel Three-wafer-stacked CMOS Image Sensor with DNN Circuit for Edge Processing
This study reports the first-ever 3-wafer-stacked CMOS image sensor with a DNN circuit. The sensor was fabricated using a wafer-on-wafer-on-wafer process, and the DNN circuit was placed on the bottom wafer to ensure heat dissipation. This device can incorporate an HDR function and enlarge the pixel array area to remarkably improve image recognition.


Wednesday, December 11, 2024 - 02:00 PM
41-2 | Low Dark Noise and 8.5k e− Full Well Capacity in a 2-Layer Transistor Stacked 0.8μm Dual Pixel CIS with Intermediate Poly-Si Wiring
This paper demonstrates a 2-layer transistor pixel stacked CMOS image sensor with the world’s smallest 0.8μm dual pixel. We improved the layout flexibility with intermediate poly-Si wiring technique. Our advanced 2-layer pixel device achieved low input-referred random noise of 1.3 e−rms and high full well capacity of 8.5k e−.


Wednesday, December 11, 2024 - 02:25 PM
41-3 | A High-Performance 2.2μm 1-Layer Pixel Global Shutter CMOS Image Sensor for Near-Infrared Applications
A high performance and low cost 2.2μm 1-layer pixel near infrared (NIR) global shutter (G/S) CMOS image sensor (CIS) was demonstrated. In order to improve quantum efficiency (QE), thick silicon with high aspect ratio full-depth deep trench isolation (FDTI) and backside scattering technology are implemented. Furthermore, thicker sidewall oxide for deep trench isolation and oxide filled FDTI were applied to enhance a modulation transfer function (MTF). In addition, 3-dimensional metal-insulator-metal capacitors were introduced to suppress temporal noise (TN). As a result, we have demonstrated industry-leading NIR G/S CIS with 2.71e- TN, dark current of 8.8e-/s, 42% QE and 58% MTF.


Wednesday, December 11, 2024 - 03:15 PM
41-4 | First Demonstration of 2.5D Out-of-Plane-Based Hybrid Stacked Super-Bionic Compound Eye CMOS Chip with Broadband (300-1600 nm) and Wide-Angle (170°) Photodetection
We propose a hybrid stacked CMOS bionic chip. The surface employs a fabrication process involving binary-pore anodic aluminum oxide (AAO) templates and integrates monolayer graphene (Gr) to mimic the compound eyes, thereby enhancing detection capabilities in the ultraviolet and visible ranges. Utilizing a 2.5D out-of-plane architecture, it achieves a wide-angle detection effect (170°) equivalent to curved surfaces while enhancing absorption in the 1550 nm communication band to nearly 100%. Additionally, through-silicon via (TSV) technology is integrated for wafer-level fabrication, and a CMOS 0.18-µm integrated readout circuit is developed, achieving the super-bionic compound eye chip based on hybrid stacked integration.


Wednesday, December 11, 2024 - 03:40 PM
41-5 | Pseudo-direct LiDAR by deep-learning-assisted high-speed multi-tap charge modulators
A virtually direct LiDAR system based on an indirect ToF image sensor and charge-domain temporal compressive sensing combined with deep learning is demonstrated. This scheme has high spatio-temporal sampling efficiency and offers advantages such as high pixel count, high photon-rate tolerance, immunity to multipath interference, constant power consumption regardless of incident photon rates, and motion-artifact-free operation. The importance of increasing the number of taps of the charge modulator is suggested by simulation.


Wednesday, December 11, 2024 - 04:05 PM
41-6 | A Color Image Sensor Using 1.0-μm Organic Photoconductive Film Pixels Stacked on 4.0-μm Si Pixels for Near-Infrared Time-of-Flight Depth Sensing
We have developed an image sensor capable of simultaneously acquiring high-resolution RGB images with good color reproduction and parallax-free ranging information, using 1.0-μm organic photoconductive film RGB pixels stacked on 4.0-μm NIR silicon pixels for iToF depth sensing.


Wednesday, December 11, 2024 - 04:30 PM
41-7 | Pb-free Colloidal InAs Quantum Dot Image Sensor for Infrared
We developed an image sensor using colloidal InAs quantum dots (QDs) for photoconversion. After spin-coating the QDs on a wafer and standard semiconductor processing, the sensor exhibited infrared sensitivity and imaging capability. This approach facilitates easier production of lead-free infrared sensors for consumer use.


Wednesday, December 11, 2024 - 04:55 PM
41-8 | Lead-Free Quantum Dot Photodiodes for Next Generation Short Wave Infrared Optical Sensors
Colloidal quantum dot sensors are disrupting imaging beyond the spectral limits of silicon. In this paper, we present imagers based on InAs QDs as an alternative to 1st-generation Pb-based stacks. A new synthesis method yields 9 nm QDs optimized for 1400 nm, and solution-phase ligand exchange results in uniform 1-step coating. Initial EQE is 17.4% at 1390 nm on glass and 5.8% EQE on silicon (detectivity of 7.4 × 10⁹ Jones). Metal-oxide transport layers and >300-hour air stability enable compatibility with fab manufacturing. These results are a starting point towards 2nd-generation quantum dot SWIR imagers.


Also of interest is the following talk in Tuesday's session "Major Consumer Image Sensor Innovations Presented at IEDM":

Title: Image Sensors past, and progress made over the years
Author: Albert Theuwissen, Harvest Imaging

Sunday, November 17, 2024

Job Postings - Week of 17 November 2024

Apple

Camera Sensing Systems Engineer

Cupertino, California, USA

Link

Camera Image Sensor Digital Design Engineer Lead

Austin, Texas, USA

Link

Fairchild Imaging

Director of Sensor Product Engineering

San Jose, California, USA

Link

NASA

Development of infrared detectors and focal plane arrays for space instruments - Postdoc

Pasadena, California, USA

Link

Ouster

Silicon Photonics Packaging Engineer

San Francisco, California, USA

Link

DESY

PhD Student in cryogenic detector systems

Hamburg, Germany

Link

Teledyne

Senior Program Manager - Thermal Cameras

Montreal, Quebec, Canada

Link

Sandia National Laboratories

Postdoctoral Appointee - Integrated Photonics for Imaging and Remote Sensing Applications

Albuquerque, New Mexico, USA

Link

Shanghai Jiao Tong University

Postdoc-Experiment, Particle and Nuclear Division

Shanghai, China

Link

Wednesday, November 13, 2024

Videos from EPIC Neuromorphic Cameras Online Meeting

Presentations from the recent EPIC Online Technology Meeting on Neuromorphic Cameras are available on YouTube:

 

IKERLAN – DVS Pre-processor & Light-field DVS – Xabier Iturbe, NimbleAI EU Project Coordinator
IKERLAN has been a leading technology centre providing competitive value to industry since 1974. They offer integral solutions in three main areas: digital technologies and artificial intelligence, embedded electronic systems and cybersecurity, and mechatronic and energy technologies. They currently have a team of more than 400 people and offices in Arrasate-Mondragón, Donostialdea and Bilbao. As a cooperative member of the MONDRAGON Corporation and the Basque Research and Technology Alliance (BRTA), IKERLAN represents a sustainable, competitive business model in permanent transformation.


FlySight – Neuromorphic Sensor for Security and Surveillance – Niccolò Camarlinghi, Head of Research
FlySight S.r.l. (a single-member company) is the Defense and Security subsidiary of Flyby Group, a satellite remote sensing solutions company.
The FlySight team offers integrated solutions for data exploitation, image processing, and avionic data/sensor fusion. Our products are mainly dedicated to the exploitation of data captured by many sensor types, and our solutions are intended for both the on-ground and on-board segments.
Through its experience in C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance), FlySight offers innovative software development and geospatial application (GIS) technology programs customized for the best results.
Our staff can apply the right COTS for your specific mission.
The instruments and products developed for this sector can also find application as dual-use tools in many civil fields such as Environmental Monitoring, Oil & Gas, Precision Farming and Maritime/Coastal Planning.


VoxelSensors – Active Event Sensors : an Event-based Approach to Single-photon Sensing of Sparse Optical Signals – Ward van der Tempel, CTO
VoxelSensors is at the forefront of 3D perception, providing cutting-edge sensors and solutions for seamless integration of the physical and digital worlds. Our patented Switching Pixels® Active Event Sensor (SPAES) technology represents a novel category of efficient 3D perception systems, delivering exceptionally low latency with ultra-low power consumption by capturing a new Voxel with fewer than 10 photons. SPAES is a game-changing innovation that unlocks the true potential of fully immersive experiences for both consumer electronics and enterprise AR/VR/MR wearables.


PROPHESEE – Christoph Posch, Co-Founder and CTO
Prophesee is the inventor of the world’s most advanced neuromorphic vision systems. Prophesee’s patented sensors and AI algorithms introduce a new computer vision paradigm based on how the human eye and brain work. Like human vision, it sees events: essential, actionable motion information in the scene, not a succession of conventional images.


SynSense – Neuromorphic Processing and Applications – Dylan Muir, VP, Global Research Operations
SynSense is a leading-edge neuromorphic computing company. It provides dedicated mixed-signal/fully digital neuromorphic processors which overcome the limitations of legacy von Neumann computers to provide an unprecedented combination of ultra-low power consumption and low-latency performance. SynSense was founded in March 2017 based on advances in neuromorphic computing hardware developed at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich. SynSense is developing “full-stack” custom neuromorphic processors for a variety of artificial-intelligence (AI) edge-computing applications that require ultra-low-power and ultra-low-latency features, including autonomous robots, always-on co-processors for mobile and embedded devices, wearable health-care systems, security, IoT applications, and computing at the network edge.


Thales – Eric Belhaire, Senior Expert in the Technical Directorate
Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity. It develops products and solutions that help make the world safer, greener and more inclusive.

Monday, November 11, 2024

"Photon inhibition" to reduce SPAD camera power consumption

In a paper titled "Photon Inhibition for Energy-Efficient Single-Photon Imaging", presented at the European Conference on Computer Vision (ECCV) 2024, Lucas Koerner et al. write:

Single-photon cameras (SPCs) are emerging as sensors of choice for various challenging imaging applications. One class of SPCs based on the single-photon avalanche diode (SPAD) detects individual photons using an avalanche process; the raw photon data can then be processed to extract scene information under extremely low light, high dynamic range, and rapid motion. Yet, single-photon sensitivity in SPADs comes at a cost — each photon detection consumes more energy than that of a CMOS camera. This avalanche power significantly limits sensor resolution and could restrict widespread adoption of SPAD-based SPCs. We propose a computational-imaging approach called photon inhibition to address this challenge. Photon inhibition strategically allocates detections in space and time based on downstream inference task goals and resource constraints. We develop lightweight, on-sensor computational inhibition policies that use past photon data to disable SPAD pixels in real-time, to select the most informative future photons. As case studies, we design policies tailored for image reconstruction and edge detection, and demonstrate, both via simulations and real SPC captured data, considerable reduction in photon detections (over 90% of photons) while maintaining task performance metrics. Our work raises the question of “which photons should be detected?”, and paves the way for future energy-efficient single-photon imaging.
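As a rough illustration of the idea (not the authors' actual policies), a minimal inhibition rule might disable a pixel once enough photons have been collected for a good intensity estimate. The synthetic scene, the saturation threshold, and all names below are invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

H, W, FRAMES = 32, 32, 200
# Synthetic scene: photon flux per pixel per frame (bright patch on dark bg).
flux = np.full((H, W), 0.02)
flux[8:24, 8:24] = 0.5

SATURATION = 20          # once a pixel has this many detections, its
                         # intensity estimate is good enough: inhibit it
counts = np.zeros((H, W), dtype=int)
enabled = np.ones((H, W), dtype=bool)
detections_with = 0      # avalanches consumed with inhibition
detections_without = 0   # avalanches a free-running sensor would consume

for _ in range(FRAMES):
    photons = rng.random((H, W)) < flux     # Bernoulli photon arrivals
    detections_without += photons.sum()
    detections_with += (photons & enabled).sum()
    counts += photons & enabled
    enabled = counts < SATURATION           # disable well-sampled pixels

saving = 1 - detections_with / detections_without
print(f"avalanche detections avoided: {saving:.0%}")
```

Bright pixels hit the threshold quickly and are switched off, so most of the avalanche energy they would have spent is saved, while dark pixels keep integrating; the paper's actual policies are task-driven (image reconstruction, edge detection) rather than this simple count threshold.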


Lucas Koerner, Shantanu Gupta, Atul Ingle, and Mohit Gupta. "Photon Inhibition for Energy-Efficient Single-Photon Imaging." In European Conference on Computer Vision, pp. 90-107 (2024)
[preprint link]

Wednesday, November 06, 2024

Hamamatsu acquires BAE Systems Imaging [Update: Statement from Fairchild Imaging]

Press release: https://www.hamamatsu.com/us/en/news/announcements/2024/20241105000000.html

Acquisition of BAE Systems Imaging Solutions, Inc. Strengthening the Opto-semiconductor segment and accelerating value-added growth

2024/11/05
Hamamatsu Photonics K.K.

Photonics Management Corp. (Bridgewater, New Jersey, USA), a subsidiary of Hamamatsu Photonics K.K. (Hamamatsu City, Japan), has purchased the stock of BAE Systems Imaging Solutions, Inc., a subsidiary of BAE Systems, Inc. (Falls Church, Virginia, USA). In recognition of the company’s deep roots starting in 1920 as the Fairchild Aerial Camera Corporation, the company will return to the name first used in 2001, Fairchild Imaging.

Fairchild Imaging is a semiconductor manufacturer specializing in high-performance CMOS image sensors in the visible to near-infrared and X-ray regions, and it has the world’s best low-noise CMOS image sensor design technology. Fairchild Imaging’s core products include scientific CMOS image sensors for scientific measurement applications that simultaneously realize high sensitivity, high-speed readout, and low noise, as well as X-ray CMOS image sensors for dental and medical diagnostic applications.

Fairchild Imaging’s core products are two-dimensional CMOS image sensors that take pictures in dark conditions where low noise is essential. These products complement Hamamatsu Photonics’ one-dimensional CMOS image sensors, which are used for analytical instruments and factory automation applications such as displacement meters and encoders. Therefore, Fairchild Imaging’s technologies will enhance Hamamatsu’s CMOS image sensor product line.

Through the acquisition of shares, we expect the following:

1. Promote sales activities of Fairchild Imaging’s products by utilizing the global sales network currently established by Hamamatsu Photonics Group.
2. While Hamamatsu Photonics’ dental business serves the European and the Asian regions including Japan, Fairchild Imaging serves North America. This will lead to the expansion and strengthening of our worldwide dental market share.
3. Fairchild Imaging will become Hamamatsu’s North American design center for 2D, low-noise image sensors. This will strengthen CMOS image sensor design resources and utilize our North American and Japanese locations to provide worldwide marketing and technical support.
4. Create new opportunities and products by combining Fairchild Imaging’s CMOS image sensor design technology with Hamamatsu Photonics’ MEMS technology to support a wider range of custom CMOS image sensors and provide higher value-added products.

BAE Systems is retaining the aerospace and defense segment of the BAE Systems Imaging Solutions' portfolio, which was transferred to the BAE Systems, Inc. Electronic Systems sector prior to the closing of this stock purchase transaction.

Fairchild Imaging will continue its operating structure and focus on developing and providing superior products and solutions to its customers.
 
 
[Update Nov 6, 2024: statement from Fairchild Imaging]
 
We are very happy to announce a new chapter in the storied history of Fairchild Imaging! BAE Systems, Inc., which had owned the stock of Fairchild Imaging, Inc. for the past 13 years, has completed a stock sale to Photonics Management Corporation, a subsidiary of Hamamatsu Photonics K.K. Resuming its identity as Fairchild Imaging, Inc., we will operate as an independent, yet wholly owned, US entity.

Fairchild Imaging is a CMOS imaging sensor design and manufacturing company, specializing in high-performance image sensors. Our x-ray and visible spectrum sensors provide class-leading performance in x-ray, and from ultraviolet through visible and into near-infrared wavelengths. Fairchild Imaging’s core products include medical x-ray sensors for superior diagnostics, as well as scientific CMOS (sCMOS) sensors for measurement applications that simultaneously realize high sensitivity, fast readout, high dynamic range, and ultra-low noise in 4K resolution.
 
Marc Thacher, CEO of Fairchild Imaging, said:
“Joining the Hamamatsu family represents a great opportunity for Fairchild Imaging. Building upon decades of imaging excellence, we look forward to bringing new innovations and technologies to challenging imaging applications like scientific, space, low-light, machine vision, inspection, and medical diagnostics. The acquisition by Hamamatsu will help drive growth and agility as we continue as a design leader for our customers, partners, and employees.”
 
As part of this new chapter, Fairchild Imaging is unveiling its latest evolution of sCMOS sensors: sCMOS 3.1. These patented, groundbreaking imagers redefine the limits of what is possible in CMOS sensors for the most demanding of imaging applications.

Monday, November 04, 2024

Lynred announces 8.5um pitch thermal sensor

Link: https://ala.associates/wp-content/uploads/2024/09/241001-Lynred-8.5-micron-EN-.pdf

Lynred demonstrates smallest thermal imaging sensor for future Automatic Emergency Braking Systems (AEB) at AutoSens Europe 

A prototype 8.5 µm pixel pitch technology that shrinks the volume of thermal cameras by 50% is designed to help automotive OEMs meet tougher future AEB system requirements, particularly at night.

Grenoble, France, October 1, 2024 – Lynred, a leading global provider of high-quality infrared sensors for the aerospace, defense and commercial markets, today announces it will demonstrate a prototype 8.5 µm pixel pitch sensor during AutoSens Europe, a major international event for automotive engineers, in Barcelona, Spain, October 8 – 10, 2024. The 8.5 µm pixel pitch technology is the smallest infrared sensor candidate for future Automatic Emergency Braking (AEB) and Advanced Driver Assistance Systems (ADAS).

The prototype, featuring half the surface of current 12 µm thermal imaging sensors for automotive applications, will enable system developers to build much smaller cameras for integration in AEB systems.

Following a recent ruling by the US National Highway Traffic Safety Administration (NHTSA), AEB systems will be mandatory in all light vehicles by 2029. It sets tougher rules for road safety at night.

The NHTSA sees driver assistance technologies and the deployment of sensors and subsystems as holding the potential to reduce traffic crashes and save thousands of lives per year. The European Traffic Safety Council (ETSC) also recognizes that AEB systems need to work better in wet, foggy and low-light conditions.

Thermal imaging sensors can detect and identify objects in total darkness. As automotive OEMs need to upgrade the performance of AEB systems within all light vehicles, Lynred is preparing a full roadmap of solutions set to help achieve this compliance. Currently gearing up for high volume production of its automotive qualified 12µm product offer, Lynred is ready to deliver the key component enabling Pedestrian Automatic Emergency Braking (PAEB) systems to work in adverse conditions, particularly at night, when more than 75% of pedestrian fatalities occur.

Lynred is among the first companies to demonstrate a longwave infrared (LWIR) pixel pitch technology for ADAS and PAEB systems that will optimize the size-to-performance ratio of future-generation cameras. The 8.5 µm pixel pitch technology will halve the volume of a thermal imaging camera, resulting in easier integration for OEMs, while maintaining the same performance standards as larger-sized LWIR models.

Friday, November 01, 2024

Pixelplus new product videos

 

PKA210 Seamless RGB-IR Image Sensor

PG7130KA Global shutter Image Sensor


Thursday, October 31, 2024

IISW 2025 Final Call for Papers is out

The 2025 International Image Sensor Workshop (IISW) provides a biennial opportunity to present innovative work in the area of solid-state image sensors and share new results with the image sensor community. The event is intended for image sensor technologists; in order to encourage attendee interaction and a shared experience, attendance is limited, with strong acceptance preference given to workshop presenters. As is the tradition, the 2025 workshop will emphasize an open exchange of information among participants in an informal, secluded setting on Awaji Island in Hyōgo, Japan.

The scope of the workshop includes all aspects of electronic image sensor design and development. In addition to regular oral and poster papers, the workshop will include invited talks and announcement of International Image Sensors Society (IISS) Award winners.

Submission of abstracts:
An abstract should consist of a single page of maximum 500-words text with up to two pages of illustrations (3 pages maximum), and include authors’ name(s), affiliation, mailing address, telephone number, and e-mail address.


The deadline for abstract submission is 11:59pm, Thursday Dec 19, 2024 (GMT).
To submit an abstract, please go to: https://cmt3.research.microsoft.com/IISW2025 

 

Wednesday, October 30, 2024

Space & Scientific CMOS Image Sensors Workshop

The preliminary program for the Space & Scientific CMOS Image Sensors Workshop, to be held on 26 & 27 November in Toulouse-Labège, is available.

Registration: https://evenium.events/space-and-scientific-cmos-image-sensors-2024/

Tuesday, October 29, 2024

Call for Nominations for the 2025 Walter Kosonocky Award

International Image Sensor Society calls for nominations for the 2025 Walter Kosonocky Award for Significant Advancement in Solid-State Image Sensors.
 
The Walter Kosonocky Award is presented biennially for THE BEST PAPER presented in any venue during the prior two years representing significant advancement in solid-state image sensors. The award commemorates the many important contributions made by the late Dr. Walter Kosonocky to the field of solid-state image sensors. Personal tributes to Dr. Kosonocky appeared in the IEEE Transactions on Electron Devices in 1997. Founded in 1997 by his colleagues in industry, government and academia, the award is also funded by proceeds from the International Image Sensor Workshop.
 
The award is selected from nominated papers by the Walter Kosonocky Award Committee, announced and presented at the International Image Sensor Workshop (IISW), and sponsored by the International Image Sensor Society (IISS). The winner is presented with a certificate, complimentary registration to the IISW, and an honorarium.
 
Please send us an email nomination for this year's award, with a PDF file of the nominated paper (the one you judge to be the best paper published or presented in calendar years 2023 and 2024) as well as a brief description (less than 100 words) of your reason for nominating the paper. Nomination of a paper from your own company or institute is also welcome.
 
The deadline for receiving nominations is January 15th, 2025.
 
Your nominations should be sent to Yusuke Oike (2025nominations@imagesensors.org), Secretary of the IISS Award Committee.

Monday, October 28, 2024

Single Photon Workshop 2024 Program Available

The 11th Single Photon Workshop will be held at the Edinburgh International Conference Centre (EICC) over the five-day period 18-22 November 2024.

The full program is available here: https://fitwise.eventsair.com/2024singlephotonworkshop/programme

Here are some image-sensor specific sessions and talks:

Wednesday Nov 20, 2024 Session Title: Superconducting Photon Detectors 1
Chair: Dmitry Morozov
4:40 PM - 5:10 PM
Demonstration of a 400,000 pixel superconducting single-photon camera
Invited Speaker - Adam McCaughan - National Institute of Standards and Technology (NIST)
5:10 PM - 5:15 PM
Company Symposium: Photon Spot Platinum Sponsor Speaker: Vikas Anant
5:15 PM - 5:30 PM
Development of Superconducting Wide Strip Photon Detector Paper Number: 112 Speaker: Shigehito Miki - National Institute of Information and Communications Technology (NICT)
5:30 PM - 5:45 PM
Superconducting nanowire single photon detectors arrays for quantum optics Paper Number: 34 Speaker: Val Zwiller - KTH Royal Institute of Technology
5:45 PM - 6:00 PM
Single photon detection up to 2 µm in pair of parallel microstrips based on NbRe ultrathin films
Paper Number: 80 Speaker: Loredana Parlato - University of Naples Federico II
6:00 PM - 6:15 PM
Reading out SNSPDs with Opto-Electronic Converters Paper Number: 87 Speaker: Frederik Thiele - Paderborn University
6:15 PM - 6:30 PM
Development of Mid to Far-Infrared Superconducting Nanowire Single Photon Detectors Paper Number: 195 Speaker: Sahil Patel - California Institute Of Technology

Thursday Nov 21, 2024 Session Title: Superconducting Photon Detectors 2
Chair: Martin J Stevens
8:30 AM - 8:45 AM
Opportunities and challenges for photon-number resolution with SNSPDs Paper Number: 148 Speaker: Giovanni V Resta - ID Quantique
8:45 AM - 9:00 AM
Detecting molecules at the quantum yield limit for mass spectroscopy with arrays of NbTiN superconducting nanowire detectors Paper Number: 61 Speaker: Ronan Gourgues - Single Quantum
9:00 AM - 9:30 AM
Current state of SNSPD arrays for deep space optical communication Invited Speaker - Emma E Wollman - California Institute Of Technology
9:30 AM - 9:35 AM
Company Symposium: Quantum Opus/MPD presentation Platinum Sponsors
9:35 AM - 9:50 AM
Novel kinetic inductance current sensor for transition-edge sensor readout Paper Number:238 Speaker: Paul Szypryt - National Institute of Standards and Technology (NIST)
9:50 AM - 10:05 AM
Quantum detector tomography for high-Tc SNSPDs Paper Number: 117 Speaker: Mariia Sidorova - Humboldt University of Berlin
10:05 AM - 10:20 AM
Enhanced sensitivity and system integration for infrared waveguide-integrated superconducting nanowire single-photon detectors Paper Number: 197 Speaker: Adan Azem - University Of British Columbia

 

Thursday Nov 21, 2024 Session Title: SPADs 1
Chair: Chee Hing Tan
11:00 AM - 11:30 AM
A 3D-stacked SPAD Imager with Pixel-parallel Computation for Diffuse Correlation Spectroscopy
Invited Speaker - Robert Henderson - University of Edinburgh
11:30 AM - 11:45 AM
High temporal resolution 32 x 1 SPAD array module with 8 on-chip 6 ps TDCs
Paper Number: 182 Speaker: Chiara Carnati - Politecnico Di Milano
11:45 AM - 12:00 PM
A 472 x 456 SPAD Array with In-Pixel Temporal Correlation Capability and Address-Based Readout for Quantum Ghost Imaging Applications
Paper Number: 186 Speaker: Massimo Gandola - Fondazione Bruno Kessler
12:00 PM - 12:15 PM
High Performance Time-to-Digital Converter for SPAD-based Single-Photon Counting applications
Paper Number: 181 Speaker: Davide Moschella - Politecnico Di Milano
12:15 PM - 12:30 PM
A femtosecond-laser-written programmable photonic circuit directly interfaced to a silicon SPAD array
Paper Number: 271 Speaker: Francesco Ceccarelli - The Istituto di Fotonica e Nanotecnologie (CNR-IFN)

Thursday Nov 21, 2024 Session Title: SPADs 2
Chair: Alberto Tosi
2:00 PM - 2:30 PM
Ge-on-Si Technology Enabled SWIR Single-Photon Detection
Invited Speaker - Neil Na - Artilux
2:30 PM - 2:45 PM
The development of pseudo-planar Ge-on-Si single-photon avalanche diode detectors for photon detection in the short-wave infrared spectral region
Paper Number: 254 Speaker: Lisa Saalbach - Heriot-Watt University
2:45 PM - 3:00 PM
Hybrid integration of InGaAs/InP single photon avalanche diodes array and silicon photonics chip
Paper Number: 64 Speaker: Xiaosong Ren - Tsinghua University
3:00 PM - 3:15 PM
Dark Current and Dark Count Rate Dependence on Anode Geometry of InGaAs/InP Single-Photon Avalanche Diodes
Paper Number: 248 Speaker: Rosemary Scowen - Toshiba Research Europe
3:15 PM - 3:30 PM
Compact SAG-based InGaAs/InP SPAD for 1550nm photon counting
Paper Number: 111 Speaker: Ekin Kizilkan - École Polytechnique Fédérale de Lausanne (EPFL)

Thursday Nov 21, 2024 Session Title: Single-photon Imaging and Sensing 1
Chair: Aurora Maccarone
4:15 PM - 4:45 PM
Single Photon LIDAR goes long Range
Invited Speaker - Feihu Xu - USTC China
4:45 PM - 5:00 PM
The Deep Space Optical Communication Photon Counting Camera
Paper Number: 11 Speaker: Alex McIntosh - MIT Lincoln Laboratory
5:00 PM - 5:15 PM
Human activity recognition with Single-Photon LiDAR at 300 m range
Paper Number: 232 Speaker: Sandor Plosz - Heriot-Watt University
5:15 PM - 5:30 PM
Detection Times Improve Reflectivity Estimation in Single-Photon Lidar
Paper Number: 273 Speaker: Joshua Rapp - Mitsubishi Electric Research Laboratories
5:30 PM - 5:45 PM
Bayesian Neuromorphic Imaging for Single-Photon LiDAR
Paper Number: 57 Speaker: Dan Yao - Heriot-Watt University
5:45 PM - 6:00 PM
Single Photon FMCW LIDAR for Vibrational Sensing and Imaging
Paper Number: 23 Speaker: Theodor Staffas - KTH Royal Institute of Technology

Friday Nov 22, 2024 Session Title: Single-photon Imaging 2
9:00 AM - 9:15 AM
Quantum-inspired Rangefinding for Daytime Noise Resistance
Paper Number: 208 Speaker: Weijie Nie - University of Bristol
9:15 AM - 9:30 AM
High resolution long range 3D imaging with ultra-low timing jitter superconducting nanowire single-photon detectors
Paper Number: 296 Speaker: Aongus McCarthy - Heriot-Watt University
9:30 AM - 9:45 AM
A high-dimensional imaging system based on an SNSPD spectrometer and computational imaging
Paper Number: 62 Speaker: Mingzhong Hu - Tsinghua University
9:45 AM - 10:00 AM
Single-photon detection techniques for real-time underwater three-dimensional imaging
Paper Number: 289 Speaker: Aurora Maccarone - Heriot-Watt University
10:00 AM - 10:15 AM
Photon-counting measurement of singlet oxygen luminescence generated from PPIX photosensitizer in biological media
Paper Number: 249 Speaker: Vikas - University of Glasgow
10:15 AM - 10:30 AM
A Plug and Play Algorithm for 3D Video Super-Resolution of single-photon data
Paper Number: 297 Speaker: Alice Ruget - Heriot-Watt University

Friday Nov 22, 2024 Session Title: Single-photon Imaging and Sensing 2
11:00 AM - 11:30 AM
Hyperspectral Imaging with Mid-IR Undetected Photons
Invited Speaker - Sven Ramelow - Humboldt University of Berlin
11:30 AM - 11:45 AM
16-band Single-photon imaging based on Fabry-Perot Resonance
Paper Number: 35 Speaker: Chufan Zhou - École Polytechnique Fédérale de Lausanne (EPFL)
11:45 AM - 12:00 PM
High-frame-rate fluorescence lifetime microscopy with megapixel resolution for dynamic cellular imaging
Paper Number: 79 Speaker: Euan Millar - University of Glasgow
12:00 PM - 12:15 PM
Beyond historical speed limitation in time correlated single photon counting without distortion: experimental measurements and future developments
Paper Number: 237 Speaker: Giulia Acconcia - Politecnico Di Milano
12:15 PM - 12:30 PM
Hyperspectral mid-infrared imaging with undetected photons
Paper Number: 268 Speaker: Emma Pearce - Humboldt University of Berlin
12:30 PM - 12:45 PM
Determination of scattering coefficients of brain tissues by wide-field time-of-flight measurements with single photon camera.
Paper Number: 199 Speaker: André Stefanov - University Of Bern

Wednesday, October 23, 2024

Image sensor basics

These lecture slides by Prof. Yuhao Zhu at U. Rochester are a great first introduction to how an image sensor works. A few selected slides are shown below. For the full slide deck visit: https://www.cs.rochester.edu/courses/572/fall2022/decks/lect10-sensor-basics.pdf

Monday, October 21, 2024

SLVS-EC IF Standard v3 released

Link: http://jiia.org/en/slvs-ec-if-standard-version-3-0-has-been-released/

The Embedded Vision I/F WG has released "SLVS-EC IF Standard Version 3.0".
Version 3.0 supports up to 10 Gbps/lane, twice as fast as Version 2.0, and offers improved data transmission efficiency.
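A quick back-of-envelope check shows what 10 Gbps/lane buys. The sketch below is illustrative only: the sensor resolution, frame rate, bit depth, and the assumed protocol efficiency factor are hypothetical values, not figures from the SLVS-EC specification.

```python
import math

def lanes_needed(width, height, bits_per_pixel, fps,
                 lane_gbps=10.0, efficiency=0.9):
    """Estimate how many SLVS-EC lanes a sensor readout requires.

    lane_gbps:  per-lane line rate (v3.0 allows up to 10 Gbps/lane)
    efficiency: assumed usable fraction after encoding/protocol
                overhead (illustrative value, not from the standard)
    """
    pixel_rate_bps = width * height * bits_per_pixel * fps
    usable_per_lane_bps = lane_gbps * 1e9 * efficiency
    return math.ceil(pixel_rate_bps / usable_per_lane_bps)

# Example: a hypothetical 24-megapixel sensor, 12-bit output, 60 fps
# -> 17.28 Gbps raw, which fits in 2 lanes at 10 Gbps/lane
print(lanes_needed(6000, 4000, 12, 60))  # → 2
```

The same sensor on v2.0's 5 Gbps/lane rate would need twice the lanes, which is the practical benefit of the doubled line rate.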

Link: https://www.m-pression.com/solutions/hardware/slvs-ec-rx-30-ip

SLVS-EC v3.0 Rx IP is an interface IP core that runs on Altera® FPGAs. Using this IP, you can quickly and easily implement products that support the latest SLVS-EC standard v3.0. You will also receive an "Evaluation kit" for early adoption.

  •  Altera® FPGAs can receive signals directly from the SLVS-EC Interface.
  •  Compatible with the latest SLVS-EC Specification Version 3.0.
  •  Supports a powerful de-skew function, enabling board design without accounting for inter-lane skew.
  •  An "Evaluation kit" (see below) is available for rapid evaluation on actual devices.

 About SLVS-EC:

SLVS-EC (Scalable Low Voltage Signaling with Embedded Clock) is an interface standard for high-speed & high-resolution image sensors developed by Sony Semiconductor Solutions Corporation. The SLVS-EC standard is standardized by JIIA (Japan Industrial Imaging Association).



Friday, October 18, 2024

Emberion 50 euro CQD SWIR imager


From: https://invision-news.de/allgemein/extrem-kostenguenstiger-swir-sensor/

Emberion is introducing an extremely cost-effective SWIR sensor that covers a range from 400 to 2,000 nm and whose manufacturing cost in large quantities is less than €50. The sensors are also smaller and lighter, expanding the range of applications for this technology. They combine Emberion's existing patented Quantum Dot technology with the patented wafer-level packaging.

Press release from Emberion: https://www.emberion.com/emberion-oy-introduces-groundbreaking-ultra-low-cost-swir-sensor/

The unique SWIR image sensor’s manufacturing cost is less than €50 in large-volume production.

Espoo, Finland — 1.10.2024 — The current cost level of SWIR imaging technology seriously limits the use of SWIR imaging in a variety of industrial, defense & surveillance, automotive and professional/consumer applications. Emberion Oy, a leading innovator in quantum dot based shortwave infrared sensing technology, is excited to announce its new ultra-low cost SWIR (Short-Wave Infrared) sensor that brings the sensor production cost down to €50 level in large volumes. This revolutionary product is set to deliver high-performance infrared imaging to truly mass-market applications such as automotive and consumer electronics as well as enabling much wider deployment of SWIR imaging in industrial, defence and surveillance applications. The revolutionary sensors are also smaller in size and weight, further extending the possibilities to use this technology in a variety of use cases. Emberion is already shipping extended range high speed SWIR cameras and will bring first ultra-low cost sensor based products to the market in 2025.

 

Bringing Advanced Imaging to Everyday Devices at a fraction of current cost

The new Emberion sensor family is designed to make advanced shortwave infrared technology accessible to wider markets, including large volume markets such as automotive sensing and consumer electronics. The new ultra-low cost SWIR sensor combines Emberion’s existing patented quantum dot sensor technology with Emberion’s patented wafer-level packaging to drastically reduce the manufacturing costs of packaged sensors. Current InGaAs and quantum dot based image sensors are typically packaged in metal or ceramic casings with a total production cost for packaged imagers in the range of several hundred euros to a few thousand euros depending on sensor technology, imager wavelength range, packaging choices and production volumes. Emberion’s sensors are manufactured and packaged on a full wafer with up to 100 imagers on a single 8” wafer, making the production cost of a single sensor to be a fraction of current alternatives. In addition to low cost, the sensor enables high integration of functionality into the in-house designed read-out IC, reduces size and weight, and provides stability in performance, enabling new functionalities in everyday technology that were once only available in high-end or niche markets.
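The cost argument above is simple division: spread one packaged wafer's cost over all the good dies it yields. The numbers below (wafer cost, yield) are purely hypothetical placeholders, not Emberion's actual cost structure; only the "up to 100 imagers per 8-inch wafer" figure comes from the release.

```python
def per_sensor_cost(wafer_cost_eur, dies_per_wafer, yield_fraction):
    """Per-sensor cost when processing and packaging happen at wafer level.

    All inputs are illustrative assumptions; the release states only
    that up to 100 imagers fit on a single 8-inch wafer.
    """
    good_dies = dies_per_wafer * yield_fraction
    return wafer_cost_eur / good_dies

# E.g. a hypothetical €4000 fully packaged wafer, 100 imagers, 90% yield
print(round(per_sensor_cost(4000, 100, 0.9), 2))  # → 44.44
```

Under these assumed figures the per-sensor cost lands below the €50 mark, versus the several hundred euros typical when each die is individually packaged in a metal or ceramic casing.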

Examples of applications that require low-cost, compact sensors:

  • Automotive Industry: Enhanced driver assistance systems (ADAS) with improved visibility in demanding weather conditions for increased safety and performance.
  • Consumer Electronics: Integrating SWIR sensors into smartphones and wearable devices, allowing for facial recognition in all lighting conditions, gesture control, and material identification.
  • Augmented and Virtual Reality (AR/VR): Enabling more accurate environmental sensing for immersive, real-world interaction in AR/VR environments.
  • Drones: Precision vision systems for navigation and object detection in both consumer and defence markets.

Some of the key benefits of the Emberion SWIR sensor include:

  • Cost Efficiency: Thanks to wafer-level packaging, the production process is streamlined, making this sensor an order of magnitude more affordable than existing SWIR solutions. The high level of sensor integration, with image processing embedded in the sensor, also significantly reduces the need for image post-processing and for camera components at the system level.
  • Size, weight and power (SWaP) optimization: The miniature and power efficient design is ideal for space-constrained applications like consumer electronics and automotive components. The high sensor integration level is also a significant contributor to the system SWaP optimization.
  • Stability: The wafer-level packaging improves the sensor stability and protection and makes it suitable for demanding environments like automotive and outdoor applications. It can also be integrated into external packaging if needed, e.g. LCC or metal packaging.
  • Extended Wavelength Sensitivity: Coverage from 400 nm to 2000 nm extends the spectral range beyond that of traditional SWIR sensors, ideal for detecting a wider variety of objects and scenes.