Friday, September 26, 2025

ISSW 2026 call for papers


The International SPAD Sensor Workshop

1st-4th June 2026 / Yonsei University, Seoul, South Korea

The 2026 International SPAD Sensor Workshop (ISSW) is a biennial event focusing on Single-Photon Avalanche Diodes (SPAD), SPAD-based sensors, and related applications. The workshop welcomes all researchers (including PhD students, postdocs, and early-career researchers), practitioners, and educators interested in these topics.

This fifth edition of the workshop will take place in Seoul, South Korea, hosted at Yonsei University, in a venue chosen to encourage interaction and a shared experience among the attendees. The workshop will be preceded by a 1-day introductory school on SPAD sensor technology, held in the same venue on June 1st, 2026.

The workshop will include a mix of invited talks and peer-reviewed contributions. Accepted works will be published on the International Image Sensor Society website (https://imagesensors.org/). Submitted works may cover any of the aspects of SPAD technology, including device modeling, engineering and fabrication, SPAD characterization and measurements, pixel and sensor architectures and designs, and SPAD applications.

Topics
Papers on the following SPAD-related topics are solicited:

●      CMOS/CMOS-compatible technologies
●      SiPMs
●      III-V, Ge-on-Si
●      Modeling
●      Quenching and front-end circuits
●      Architectures
●      Time-to-digital converters
●      Smart data-processing techniques
●      Applications of SPAD single pixel and arrays, such as:
o   Depth sensing / ToF / LiDAR
o   Time-resolved imaging
o   Low-light imaging
o   Quantum imaging
o   High-dynamic-range imaging
o   Biophotonics
o   Computational imaging
o   Quantum RNG
o   High-energy physics
o   Quantum communications
●      Emerging technologies & applications

Draft paper submission
Submission portal TBD.

Paper format - Each submission should comprise a 1000-character abstract and a 3-page paper, equivalent to 1 page of text and 2 pages of images. The submission must include the authors' name(s) and affiliation, mailing address, and email address. The formatting can adhere to either a style that integrates text and figures, akin to the standard IEEE format, or a structure with a page of text followed by figures, mirroring the format of the International Solid-State Circuits Conference (ISSCC) or the IEEE Symposium on VLSI Technology and Circuits. Examples illustrating these formats can be accessed in the online database of the International Image Sensor Society.

The deadline for paper submission is 23:59 CET, January 11th, 2026.

Papers will be considered on the basis of originality and quality. High-quality papers on work in progress are also welcome. Papers will be reviewed confidentially by the Technical Program Committee.

Accepted papers will be made freely available for download from the International Image Sensor Society website.

Poster submission
In addition to talks, we wish to offer all graduate students, post-docs, and early-career researchers an opportunity to present a poster on their research projects or other research relevant to the workshop topics.

If you wish to take up this opportunity, please submit a 1000-character abstract and a 1-page description (including figures) of the proposed research activity, along with the authors’ name(s) and affiliation, mailing address, and e-mail address.

The deadline for poster submission is 23:59 CET, January 11th, 2026.

Key dates
The deadline for paper submission is 23:59 CET, January 11th, 2026.

Authors will be notified of the acceptance of their papers & posters no later than February 22nd, 2026.

The final paper submission date is March 29th, 2026.

The presentation material submission date is May 22nd, 2026.

Location
ISSW 2026 will be held fully in-person in Seoul, South Korea, at the Baekyang Nuri Grand Ballroom at Yonsei University.

Tuesday, September 16, 2025

Conference List - March 2026

Electronic Imaging - 1-5 March 2026 - Burlingame, California, USA - Website

22nd Annual Device Packaging Conference - 2-5 March 2026 - Phoenix, Arizona, USA - Website

EDIT (Excellence in Detector and Instrumentation Technologies) - 3-13 March 2026 - Geneva, Switzerland - Website

Image Sensors Europe - 17-18 March 2026 - London, UK - Website

Laser World of Photonics China - 18-20 March 2026 - Shanghai, China - Website

MEMS & Sensors Executive Conference - 31 March - 2 April 2026 - Cambridge, Massachusetts, USA - Website

If you know about additional local conferences, please add them as comments.

Return to Conference List index

Happening Today: Swiss Photonics Lunch Chat

Link: https://www.swissphotonics.net/home?event_id=4480

Lunch Chat: SPAD arrays and cameras: a comparison with conventional image sensors and detectors

Tue, 16.09.2025, online

This talk will introduce single-photon avalanche diode (SPAD) arrays and cameras, highlighting how they differ from conventional imaging and photon-counting technologies. We will review the state-of-the-art in SPAD devices and compare their performance with established detectors such as photomultiplier tubes (PMTs), silicon photomultipliers (SiPMs), EMCCD cameras, as well as modern sCMOS and qCMOS image sensors. The discussion will focus on their working principles and on when SPAD-based systems provide unique advantages versus when conventional solutions may be more appropriate, depending on the application.

Speaker:
Milo Wu, PhD, Business Development Manager, PI Imaging

Date
Tuesday, 16 September 2025

Time
12:00 - 12:45 (CEST)

Software
Zoom

Costs
free of charge

Registration only necessary once
This event series requires registration (see link above). We will send you the access information (Zoom link and ID) by email after registration. As the Zoom link remains the same every week, you do not need to register again for subsequent meetings.

Sunday, September 14, 2025

Job Postings - Week of 14 September 2025

Amazon

ASIC SoC Manager, Amazon Camera ASIC Team

Sunnyvale, California, USA

Link

GlobalFoundries

Principal Engineer Device Engineering (FRCMOS): SPAD/CIS Development

Singapore

Link

Vital Chemicals

IC Designer

Cupertino, California, USA

Link

Sony

European Graduate Program - Image Sensor Designer

Oslo, Norway

Link

Qualcomm

Camera AF/EIS Algorithm System Engineer

Shanghai, China

Link

Teledyne FLIR

Sr. Principal ASIC/Analog Design Engineer

Goleta, California, USA

Link

HEPHY

PHD STUDENT on semiconductor detector development and readout electronics

Vienna, Austria

Link

Fairchild Imaging

Senior Quality Engineer

San Jose, California, USA

Link

Johannes Gutenberg University

Particle Detectors for Future Experiments from Concept to Operation

Mainz, Germany

Link

Friday, September 12, 2025

Image sensors workshop at IEEE Sensors 2025

A workshop titled "Future CMOS Image Sensors for AI Era – AI or not" will be held alongside IEEE Sensors 2025 in Vancouver, Canada on Sunday Oct 19, 2025.

Artificial intelligence (AI) is becoming increasingly integrated into our daily lives. In particular, Deep Neural Networks (DNNs) are expected to merge with CMOS Image Sensors (CISs), which, soon, will open up a new era of smart, adaptive, and autonomous systems in various consumer electronics, such as smartphones, automotive technology, and augmented/virtual reality glasses.

Future CMOS Image Sensors for AI Era – AI or not

This workshop will focus on future CISs that incorporate state-of-the-art computational image signal processors (ISPs), whose capabilities have evolved from traditional computation to DNNs in the AI era. Industry leaders and academic researchers will present invited talks covering two major topics:
1. Trends in CIS technology focused on computation, including neural networks for future applications and associated software.
2. Sensor technologies and ISPs designed for the AI era, and sensor simulations tailored for future CISs.

We believe this workshop will pave the way for advancements in future imaging and sensing technology. We encourage all attendees of the IEEE SENSORS 2025 conference to engage in discussions about "Future CMOS Image Sensors for AI Era – AI or not".


Wednesday, September 10, 2025

International Image Sensor Workshop (IISW) 2025 proceedings available

IISW 2025 papers are now available in our public archive at https://imagesensors.org/2025-papers/.

Each article has also been assigned a DOI for easy future reference, just like all other papers published by IISS since 2007.

Thank you to all the organizers and volunteers who made this workshop possible!

Monday, September 08, 2025

VoxelSensors Qualcomm collab

https://www.globenewswire.com/news-release/2025/08/28/3140996/0/en/VoxelSensors-to-Advance-Next-Generation-Depth-Sensing-Technology-with-10x-Power-Savings-for-XR-Applications.html 

VoxelSensors to Advance Next-Generation Depth Sensing Technology with 10x Power Savings for XR Applications

 Brussels, Aug. 28, 2025 (GLOBE NEWSWIRE) -- VoxelSensors, a company developing novel intelligent sensing and data insights technology for Physical AI, today announced a collaboration with Qualcomm Technologies, Inc. to jointly optimize VoxelSensors’ sensing technology with Snapdragon® XR Platforms.

Technology & Industry Challenges

VoxelSensors has developed Single Photon Active Event Sensor (SPAES™) 3D sensing, a breakthrough technology that addresses critical depth-sensing performance limitations for robotics and XR, delivering 10x power savings and lower latency while maintaining robust performance across varied lighting conditions. This innovation is set to enable machines to understand both the physical world and human behavior from the user's point of view, advancing Physical AI.

Physical AI processes data from human perspectives to learn about the world around us, predict needs, create personalized agents, and adapt continuously through user-centered learning. This enables new and exciting applications previously unattainable. At the same time, Physical AI pushes the boundaries of operation to wider environments posing challenging conditions like variable lighting and power constraints.

VoxelSensors’ technology addresses both challenges by expanding the operating limits of current-day sensors while collecting human point-of-view data to better train Physical AI models. Overcoming these challenges will define the future of human-machine interaction.

Collaboration

VoxelSensors is working with Qualcomm Technologies to jointly optimize VoxelSensors’ SPAES™ 3D sensing technology with Snapdragon AR2 Gen 1 Platform, allowing a low-latency and flexible 3D active event data stream. The optimized solution will be available to select customers and partners by December 2025.

“We are pleased to collaborate with Qualcomm Technologies,” said Johannes Peeters, CEO of VoxelSensors. “After five years of developing our technology, we see our vision being realized through optimizations with Snapdragon XR Platforms. With our sensors that are ideally suited for next-generation 3D sensing and eye-tracking systems, and our inference engine for capturing users’ egocentric data, we see great potential in enabling truly personal AI agent interactions only available on XR devices.”
“For the XR industry to expand, Qualcomm Technologies is committed to enabling smaller, faster, and more power-efficient devices,” said Ziad Asghar, SVP & GM of XR at Qualcomm Technologies, Inc. “We see great potential for small, lightweight AR smart glasses that consumers can wear all day. VoxelSensors’ technology offers the potential to deliver higher performance rates with significantly lower power consumption, which is needed to achieve this vision.”

Market Impact and Future Outlook

As VoxelSensors continues to miniaturize their technology, the integration into commercial products is expected to significantly enhance the value proposition of next-generation XR offerings. Collaborating with Qualcomm Technologies, a leader in XR chipsets, emphasizes VoxelSensors’ commitment to fostering innovation to advance the entire XR ecosystem, bringing the industry closer to mainstream adoption of all-day wearable AR devices.

Friday, September 05, 2025

SMPTE awards Dr. Peter Centen

https://www.smpte.org/about/awards-programs/camera-winners

2025 - Dr. Peter G. M. Centen

For pioneering innovations in image sensor technology that transformed electronic cinematography and broadcast imaging. Over a career spanning more than four decades, Dr. Centen played a pivotal role in the industry’s transition from CCD to CMOS image sensors, serving as chief architect of the Xensium family that enabled HD, 4K, HDR, and HFR imaging. During the transitions from SD to HD, narrow-screen to widescreen, and film to digital cinematography, his development of Dynamic Pixel Management—a groundbreaking sub-pixel-control technology—allowed a single sensor to support multiple resolutions and aspect ratios, including ultra-wide formats (~2.4:1), without compromise. This innovation, first implemented in the Viper FilmStream camera, eliminated the need for format-specific imaging systems and laid the foundation for today’s flexible, high-performance camera designs.

The Camera Origination and Imaging Medal, established in 2012, recognizes significant technical achievements related to inventions or advances in imaging technology, including sensors, image processing electronics, and the overall embodiment and application of image capture devices.

Wednesday, September 03, 2025

Prophesee announces GenX320 starter kits for Raspberry Pi

https://www.prophesee.ai/2025/08/26/prophesee-brings-event-based-vision-to-raspberry-pi-5-with-genx320-starter-kit/

Prophesee Brings Event-Based Vision to Raspberry Pi 5 with GenX320 Starter Kit

New starter kit provides developers efficient, cost-effective way to leverage low-power, high-speed neuromorphic vision for IoT, drones, robotics, security and surveillance—with one of the world’s most popular embedded development platforms

PARIS, Aug 26, 2025

Prophesee, the inventor and leader in event-based neuromorphic vision systems, today announces the launch of the GenX320 Starter Kit for Raspberry Pi® 5, making its breakthrough frameless sensing technology available to the Raspberry Pi developer community for the first time. Built around Prophesee’s ultra-compact, ultra-efficient GenX320 event-based vision sensor, the kit connects directly to the Raspberry Pi 5 camera connector to allow development of real-time applications that leverage the advantages of event-based vision for drones, robotics, industrial automation, surveillance, and more. 

The kit enables efficient, cost-effective, and easy-to-use access to develop solutions based on Prophesee’s advanced Metavision® event-based vision platform through OpenEB, the open-source core of the company’s award-winning Metavision SDK. The Raspberry Pi ecosystem is one of the largest and most active hardware communities in the world, with more than 60 million units sold and millions of developers engaged across open-source and maker platforms.

Event-based vision is a paradigm shift from traditional frame-based approaches. Rather than capturing entire images at once, each pixel independently detects changes in brightness, known as “events.” This makes event-based sensors much faster (responding in microseconds), able to operate with far less data and processing power, and more power-efficient than traditional sensors.
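The per-pixel change-detection principle can be sketched in a few lines. The following is a simplified software model (not Prophesee's implementation, which happens in per-pixel analog circuitry): each pixel holds a reference log-intensity and emits a signed event whenever the current log-intensity departs from that reference by more than a contrast threshold. The function name and threshold value are illustrative.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Toy event-camera model: emit an event whenever a pixel's
    log-intensity changes by more than `threshold` since that pixel's
    last event. Returns a list of (t, x, y, polarity) tuples."""
    eps = 1e-6                              # avoid log(0)
    ref = np.log(frames[0] + eps)           # per-pixel reference level
    events = []
    for t, frame in zip(timestamps[1:], frames[1:]):
        logi = np.log(frame + eps)
        diff = logi - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, int(x), int(y), polarity))
            ref[y, x] = logi[y, x]          # reset reference only where an event fired
    return events

# A static scene produces no events; only the pixel that brightened fires.
a = np.full((4, 4), 0.5)
b = a.copy()
b[1, 2] = 1.0
evts = frames_to_events([a, a, b], [0, 1000, 2000])
# evts contains a single positive-polarity event at pixel (x=2, y=1)
```

The sparsity is the point: for the mostly-static frames above, the event stream carries one tuple instead of two full 4×4 frames, which is why event sensors can respond in microseconds with far less data than frame-based readout.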

The kit is purpose-built to enable real-world, real-time applications where traditional frame-based vision struggles:

  • Drones & Robotics: Obstacle avoidance, drone-to-drone tracking, real-time SLAM
  • Industrial IoT: 3D scanning, defect detection, and predictive maintenance
  • Surveillance & Safety: Intrusion detection, fall detection, and motion analytics 

ABOUT THE KIT

The GenX320 Starter Kit is built around the Prophesee GenX320 sensor, the smallest and most power-efficient event-based vision sensor available. With a 320×320 resolution, >140 dB dynamic range, event rate equivalent to ~10,000 fps, and sub-millisecond latency, the sensor provides the performance needed for demanding real-time applications on an embedded platform.

Key Features:

  •  Compact event-based camera module with MIPI CSI-2 interface
  •  Native integration with Raspberry Pi 5 (board sold separately)
  •  Power-efficient operation (<50 mW sensor-only consumption)
  •  OpenEB support with Python and C++ APIs

Software Resources:

  •  Drivers, data recording, replay, and visualization tools on GitHub
  •  Access to the Prophesee Knowledge Center, a centralized location for users to access various resources, including: a download repository, user guides, and FAQs; a community forum to share ideas; a support ticket system; and additional resources such as application notes, product manuals, training videos, and more than 200 academic papers.

AVAILABILITY

The Prophesee GenX320 Starter Kit for Raspberry Pi 5 is available for pre-order starting August 26, 2025, through Prophesee’s website and authorized distributors. For more information or to order, visit: www.prophesee.ai/event-based-starter-kit-genx320-raspberry-pi-5/ 

Monday, September 01, 2025

Galaxycore 50MP 0.61um CIS

Translated from Baidu news: https://baijiahao-baidu-com.translate.goog/s?id=1839605263838551524&wfr=spider&for=pc&_x_tr_sl=zh-CN&_x_tr_tl=de&_x_tr_hl=de&_x_tr_pto=wapp

Gelonghui, August 5th | GalaxyCore (688728.SH) announced that it has recently achieved mass production and shipment of its 0.61-micron 50-megapixel image sensor. The product, the world's first single-chip 0.61-micron-pixel image sensor, is based on the company's proprietary Galaxy Cell 2.0 process platform and is manufactured in the company's own wafer fab, significantly improving small-pixel performance. It uses a 1/2.88-inch optical format, reducing camera module thickness and making it widely applicable to smartphone rear main cameras, ultra-wide cameras, and front-facing cameras. The product also integrates single-frame high-dynamic-range (DAGHDR) technology, achieving wider dynamic range coverage in a single exposure and effectively addressing overexposure and underexposure in backlit scenes, and it supports PDAF phase-detection autofocus for a fast, accurate shooting experience.

The company's 0.61-micron 50-megapixel image sensor has entered mass production and shipment, successfully entering the rear-mounted main camera market for branded mobile phones. This marks further market recognition of the company's innovative high-pixel single-chip integration technology and fully demonstrates the efficiency of its Fab-Lite model. To date, the company has achieved mass production of 0.7-micron 50-megapixel, 1.0-micron 50-megapixel, and 0.61-micron 50-megapixel image sensors based on single-chip integration technology. The company will subsequently leverage this technology platform to further enhance the performance of high-pixel products such as 32-megapixel and 50-megapixel, while also launching products with specifications exceeding 100 megapixels. This will continuously strengthen the company's core competitiveness, increase market share, and expand its leading position.