Tuesday, April 21, 2026

Conference List - October 2026

VISION - 6-8 October 2026 - Stuttgart, Germany - Website

Photonics Spectra Sensing Technologies Summit 2026 - 7 October 2026 - Online - Website

Optica Laser Congress and Exhibition - 11-15 October 2026 - Vilnius, Lithuania - Website

ASNT Annual Conference - 12-15 October 2026 - Columbus, Ohio, USA - Website

CPAD 2026 (Coordinating Panel for Advanced Detectors) - 20-23 October 2026 - Seattle, Washington, USA - Website

SPIE/COS Photonics Asia - 24-26 October 2026 - Nantong, Jiangsu, China - Website

IEEE Sensors Conference - 25-28 October 2026 - Rotterdam, The Netherlands - Website

Image Sensors Asia - 28-29 October 2026 - Seoul, South Korea/Hybrid - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Saturday, April 11, 2026

Photonics article on single-photon detectors industry use-cases

Link: https://www.photonics.com/Articles/Single-Photon-Detection-Bridges-the-Gap-Between/p7/a71989

Single-Photon Detection Bridges the Gap Between Quantum Tech and Industrial Users

The article covers the following companies and startups:

  • NovoViz: Integrating SPAD sensors with on-chip digital processing for industrial applications.
  • VTEC Lasers & Sensors: Offering "Quspads," InP-based SPAD chips with high efficiency and room-temperature operation.
  • Ubicept: Software platforms for real-time reconstruction with megapixel color SPAD sensors.
  • Photon Force: High-throughput SPAD array camera with 50-picosecond temporal resolution.
  • Quantum Computing Inc. (QCi): Quantum lidar and quantum photonic vibrometer systems.
  • ID Quantique (IDQ): SNSPD systems for integrated circuit inspection and Ariane 6 rocket monitoring.
  • Sony: Manufacturer of SPAD-based lidar modules. 
 

Thursday, April 09, 2026

AlpsenTek raises another round of funding for its hybrid vision sensor

AlpsenTek Completes Series B+ as Hybrid Vision Sensors Become a New Gateway to the AI-Powered Physical World

SHENZHEN, China — March 17, 2026 — AlpsenTek, a pioneer in hybrid vision sensor technology, today announced the completion of a Series B+ financing round, bringing its total funding raised to more than 100 million USD.

The B+ round was jointly backed by BEIDM, Guangdong Finance Fund Management, GAC Capital, Circumference Capital, Changjiang Capital, Bluetrum, UNICC Capital, Zhichen Investment, Wofo Venture Capital, and Sunyes.

The new funding will support continued core technology development, large-scale product manufacturing, and global market expansion, accelerating the industrial adoption of next-generation AI vision sensing technologies.

The Growing Need for Real-Time Perception in Physical AI

As artificial intelligence moves beyond the digital world and increasingly interacts with the physical environment, real-time environmental perception is becoming a fundamental infrastructure for intelligent systems.

Traditional vision sensors, which rely on fixed-frame-rate image capture and full-pixel data acquisition, are gradually struggling to meet the emerging requirements of intelligent perception systems that demand high speed, low latency, and high dynamic range.

Hybrid Vision Sensing: A New Path for AI Machine Vision

AlpsenTek’s Hybrid Vision Sensor (HVS) technology introduces a new technical paradigm for machine vision systems.

The technology integrates frame-based image sensing and event-based sensing mechanisms on a single sensor chip, enabling devices to simultaneously capture both image information and brightness change signals within a scene. This provides AI systems with visual inputs that are both more efficient and more representative of real-world dynamics.

If traditional image sensors record “what the world looks like,” hybrid vision sensors capture both “what the world looks like” and “how the world is changing.”

Dual-Modality Perception for Next-Generation AI

Compared with vision systems that rely solely on frame-based images, hybrid vision sensors can detect scene changes with much higher temporal resolution while maintaining full image output capability.

This dual-modality perception approach allows AI systems to achieve more stable and efficient visual perception in high-speed motion, high dynamic range, and complex lighting environments.

For rapidly developing AI applications—including robotics, autonomous driving, and intelligent devices—machines must not only see two-dimensional image details, spatial structure, and color, but also understand how environments evolve over time.

By introducing the temporal dimension alongside traditional visual information, hybrid vision sensors enable machines to more effectively perceive object motion, interactions, and environmental changes, significantly enhancing a system’s ability to understand the real world.

Reducing Data Redundancy for Efficient Edge AI

At the same time, traditional visual systems generate large amounts of redundant data during video capture, requiring substantial computational resources for processing.

Hybrid vision sensors adopt an event-driven sensing mechanism, outputting key information only when changes occur in a scene. This reduces redundant data generation at the source and provides more efficient data input for edge AI systems.
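The event-driven principle described above can be illustrated with a minimal sketch: each pixel fires an event only when its log-brightness changes by more than a contrast threshold, so a static scene produces no output at all. The threshold and pixel values below are illustrative assumptions; AlpsenTek's actual pixel circuit is analog and not publicly documented.

```python
import numpy as np

def event_frame(prev_log_i, curr_log_i, threshold=0.2):
    """Toy model of the event-based half of a hybrid vision sensor:
    emit +1 ("ON") where log-brightness rose past the threshold,
    -1 ("OFF") where it fell, and nothing (0) where it is unchanged."""
    delta = curr_log_i - prev_log_i
    events = np.zeros_like(delta, dtype=np.int8)
    events[delta > threshold] = 1
    events[delta < -threshold] = -1
    return events

# A static 4x4 scene produces no events; only the changed pixel fires.
prev = np.log(np.full((4, 4), 100.0))
curr = prev.copy()
curr[1, 2] = np.log(200.0)  # one pixel doubles in brightness
ev = event_frame(prev, curr)
```

Because only one of the sixteen pixels changed, only one event is emitted, which is the data-reduction property the press release is pointing at: redundancy is suppressed at the pixel, before any downstream processing.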

In the AI era, vision sensors are evolving from simple imaging devices into core interfaces through which machines perceive and understand the physical world.

CEO Perspective

Deng Jian, founder and CEO of AlpsenTek, said the rapid transition of AI from digital environments into the real world is reshaping the role of perception technologies.

“Artificial intelligence is rapidly moving from digital space into the real world,” Deng said. “Future AI systems—whether robots, intelligent devices, or automated systems—will require continuous, real-time perception of the physical environment. Hybrid vision sensors were developed to meet this demand. By simultaneously providing image information and motion-change data, we aim to build a more efficient visual perception foundation for the next generation of intelligent systems.”

Building a Hybrid Vision Product Ecosystem

As a key innovator in hybrid vision technology, AlpsenTek has established a complete proprietary technology stack spanning pixel architecture, chip design, and vision algorithms, and has been among the first globally to achieve large-scale production of hybrid vision sensors.

In 2025, the company introduced the APX014 (ALPIX-Pizol) hybrid vision sensor designed for edge AI perception applications, along with the APX002 (ALPIX-Maloja) pure event-based vision sensor.

Together with the previously released APX003 series and APX004 series, the company has formed a growing product portfolio targeting applications across robotics, wearables, smart home devices, automotive electronics, and consumer electronics.

Accelerating Industry Adoption

AlpsenTek is currently collaborating with several leading global technology companies to advance the large-scale adoption of hybrid vision sensors in intelligent devices and AI systems.

Deng said the company is entering a stage of acceleration as AI vision technologies move toward mass deployment.

“We are now at a pivotal moment for AI vision technologies to move into large-scale applications,” Deng said. “Over the next decade, countless intelligent systems will enter the real world, and visual perception will be one of their most fundamental technologies. Our goal is to make hybrid vision sensors one of the key perception interfaces for next-generation intelligent devices.”

As AI and intelligent hardware continue to evolve, new visual perception technologies are entering an unprecedented phase of opportunity. AlpsenTek said it will continue advancing core technological innovation and product commercialization to expand hybrid vision sensing into more real-world applications.

In an era where AI is moving into the physical world, machines must first learn to see the world—and see its changes—efficiently.

 

Tuesday, April 07, 2026

SmartSens unveils 1" 50MP HDR CIS

Link: https://www.gizmochina.com/2026/03/27/smartsens-sc5a6xs-1inch-50mp-sensor-launch/

SmartSens unveils SC5A6XS 1-inch 50MP sensor, brings advanced HDR tech, 4K 120fps support

Chinese image sensor maker SmartSens has introduced a new camera sensor aimed at flagship smartphones. The SC5A6XS brings a 50-megapixel 1-inch format and focuses on improving dynamic range and video capabilities. The announcement highlights upgrades in HDR processing, low-light imaging, and power efficiency, setting the stage for next-generation mobile photography.

The SC5A6XS is built on a 22nm stacked process and integrates the brand’s upgraded Lofic HDR 3.0 technology. This system enhances image quality in challenging lighting by capturing a wider range of brightness levels. With a peak dynamic range of 115dB, the sensor aims to preserve highlight details while retaining shadow information in high-contrast scenes.
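To put the quoted 115 dB figure in perspective: dynamic range in dB maps to a linear brightness ratio via 20·log10, so 115 dB corresponds to the sensor distinguishing scene intensities spanning roughly 560,000:1 in a single capture. A quick conversion:

```python
import math

def db_to_contrast_ratio(db):
    """Convert a dynamic-range figure in dB to the linear
    brightest-to-darkest intensity ratio it represents."""
    return 10 ** (db / 20)

ratio = db_to_contrast_ratio(115)  # SC5A6XS peak dynamic range
# roughly 562,000:1 between the brightest and darkest resolvable levels
```

For comparison, a conventional mobile sensor at around 70 dB (an illustrative figure, not from the article) would cover only about 3,200:1, which is why high-contrast scenes clip highlights or crush shadows.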

The HDR system works through multi-frame fusion within a single exposure, which also helps reduce motion artefacts. This is particularly useful in video scenarios where subjects or the camera are in motion. The sensor supports 4K video at 120fps, along with 4K 60fps recording in HDR mode, making it suitable for advanced video use cases on smartphones.

In terms of hardware, the sensor features a 1.6μm pixel size and incorporates SFCPixel technology to improve light sensitivity. With higher sensitivity and reduced read noise, it is designed to produce clearer images in dim conditions without excessive grain.

Autofocus is handled through a combination of full-pixel AllPix ADAF and partial pixel phase detection, allowing faster and more reliable focusing across different lighting environments. Additionally, the company has worked on reducing power consumption, with an approximate 11 percent improvement in HDR mode, which may help control device heating during extended video recording.

The SC5A6XS has already entered the sampling phase and is expected to move into mass production in the second quarter of 2026. It is likely to appear in upcoming flagship smartphones that focus heavily on camera performance, with the Huawei Pura 90 series a likely candidate.

Thursday, April 02, 2026

Sony announces new image sensor IMX908

[Update Apr 6, 2026: a previous version of this post incorrectly said "global shutter" in the title.] 

Sensor specs: https://www.sony-semicon.com/en/products/is/security/security/IMX908.html

News: https://www.sony-semicon.com/en/news/2026/2026031701.html 

Sony Semiconductor Solutions to Release 4K Image Sensor for Security Cameras with the Industry’s Smallest 1.45 µm LOFIC Pixels
Contributing to improved recognition precision with high image quality in high-contrast environments and dark scenes

Atsugi, Japan — Sony Semiconductor Solutions Corporation (Sony) today announced the upcoming release of the IMX908, a 4K CMOS image sensor for security cameras with the industry’s smallest 1.45 µm LOFIC pixels.

The new sensor uses the newly developed LOFIC pixels to achieve 96 dB high dynamic range imaging at 4K resolution with a single exposure. Building on this, improved low-light performance delivers high-quality imaging with reduced highlight blowout, loss of shadow detail, and noise in both high contrast environments and dark locations compared to conventional products.

The new sensor will expand Sony’s lineup of products with both high-resolution and high dynamic range for security camera applications, which require high-precision image recognition in a wide range of indoor and outdoor environments, thereby contributing to a safer and more secure society.

Security cameras have been widely used not just for security surveillance, but also in broad applications including monitoring public spaces such as urban areas and other facilities. As AI-based image recognition becomes a standard feature in cameras, the demand for image sensors that can provide stable and high-quality imaging in conditions from bright to dark continues to grow.

The IMX908 employs STARVIS 3™, Sony’s proprietary LOFIC pixel technology developed for security cameras. It holds nearly 20x the saturated charge of conventional products and delivers an approximately 27% improvement in low-light performance, resulting in a dynamic range of 96 dB. Rather than relying on multiple exposures, the more common method for HDR imaging, the sensor achieves high dynamic range with a single exposure, delivering high-definition images with fewer artifacts even in scenes with moving subjects. Furthermore, Sony’s original pixel design has enabled all of these features at the industry’s smallest LOFIC pixel size of 1.45 µm. By offering higher-quality 4K imaging even in high-contrast scenes and dark environments, the new product will contribute to improved recognition accuracy and multifunctionality in security cameras.
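The connection between saturated charge and the quoted 96 dB can be sketched with the standard single-exposure relation DR = 20·log10(full-well capacity / read noise). The electron counts below are illustrative assumptions chosen to reproduce 96 dB, not published IMX908 specifications; the point is that raising the full-well capacity ~20x (as LOFIC does) directly raises the dB figure by about 26 dB.

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Single-exposure dynamic range: ratio of the largest storable
    signal (full-well capacity, in electrons) to the noise floor."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Illustrative values: 63,000 e- full well over a 1 e- noise floor
dr = dynamic_range_db(63_000, 1.0)         # ~96 dB
gain = dynamic_range_db(63_000, 1.0) - dynamic_range_db(63_000 / 20, 1.0)
# a 20x larger full well adds 20*log10(20) ~ 26 dB of headroom
```

This is why LOFIC-style charge overflow is attractive for security cameras: the extra highlight headroom comes in one exposure, avoiding the motion artifacts of multi-exposure HDR.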