Friday, June 20, 2025

Sony IMX479 520-pixel SPAD LiDAR sensor

Press release: https://www.sony-semicon.com/en/news/2025/2025061001.html

Sony Semiconductor Solutions to Release Stacked SPAD Depth Sensor for Automotive LiDAR Applications, Delivering High-Resolution, High-Speed Performance

High-resolution, high-speed distance measuring performance contributes to safer, more reliable future mobility

Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX479 stacked, direct Time of Flight (dToF) SPAD depth sensor for automotive LiDAR systems, delivering both high-resolution and high-speed performance.

The new sensor employs a dToF pixel unit composed of 3×3 (horizontal × vertical) SPAD pixels as its minimum element to enhance measurement accuracy using a line-scan methodology. In addition, SSS's proprietary device structure enables a frame rate of up to 20 fps, the fastest for a high-resolution SPAD depth sensor with 520 dToF pixels.

The new product delivers the high-resolution, high-speed distance measuring performance demanded of automotive LiDAR used in advanced driver assistance systems (ADAS) and automated driving (AD), contributing to safer and more reliable future mobility.

LiDAR technology is crucial for high-precision detection and recognition of road conditions and of the position and shape of objects such as vehicles and pedestrians. Demand is growing for further technical advances in LiDAR toward Level 3 automated driving, which allows for autonomous control. SPAD depth sensors use the dToF ranging method, which measures the distance to an object by detecting the time of flight (time difference) of light emitted from a source until it returns to the sensor after being reflected by the object.
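The dToF relation described above is simply distance = (speed of light × round-trip time) / 2. A minimal illustrative sketch of that conversion (our example, not Sony's implementation):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_to_distance(round_trip_s: float) -> float:
    """Convert a measured round-trip time of flight to a one-way distance in meters."""
    return C * round_trip_s / 2.0

# A return pulse arriving ~2 microseconds after emission implies a target ~300 m away.
print(tof_to_distance(2e-6))  # ≈ 299.8 m
```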

The new sensor harnesses SSS’s proprietary technologies acquired in the development of CMOS image sensors, including the back-side illuminated, stacked structure and Cu-Cu (copper-copper) connections. By integrating the newly developed distance measurement circuits and dToF pixels on a single chip, the new product has achieved a high-speed frame rate of up to 20 fps while delivering a high resolution of 520 dToF pixels with a small pixel size of 10 μm square.

Main Features
■ Up to 20 fps frame rate, the fastest for a 520 dToF pixel SPAD depth sensor

This product consists of a pixel chip (top) with back-illuminated dToF pixels and a logic chip (bottom) equipped with newly developed distance measurement circuits, joined by Cu-Cu connections into a single chip. This design enables a small pixel size of 10 μm square, achieving a high resolution of 520 dToF pixels. The new distance measurement circuits handle multiple processes in parallel for even faster processing.

These technologies achieve a frame rate of up to 20 fps, the fastest for a 520 dToF pixel SPAD depth sensor. They also deliver a vertical angular resolution equivalent to 0.05 degrees, improving vertical detection accuracy by a factor of 2.7 over conventional products. These capabilities allow detection of the three-dimensional objects vital to automotive LiDAR, including objects as small as 25 cm high (such as a tire or other debris in the road) at a distance of 250 m.
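As a back-of-envelope check of the quoted figures (the numbers are from the release; the arithmetic is ours): a 25 cm object at 250 m subtends about 0.057 degrees, just above the sensor's 0.05-degree vertical angular resolution, so it spans at least one dToF pixel row.

```python
import math

def subtended_angle_deg(height_m: float, range_m: float) -> float:
    """Vertical angle subtended by an object of a given height at a given range."""
    return math.degrees(math.atan2(height_m, range_m))

angle = subtended_angle_deg(0.25, 250.0)
print(round(angle, 4))  # ≈ 0.0573 degrees, slightly above the 0.05° resolution
```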

■ Excellent distance resolution of 5 cm intervals
The proprietary circuits SSS developed to enhance the distance resolution of this product process each SPAD pixel's data individually and calculate the distance. Doing so improved the LiDAR distance resolution to 5 cm intervals.
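A 5 cm range bin corresponds to roughly 330 ps of round-trip timing resolution (Δt = 2·Δd/c), which gives a sense of the timing precision the measurement circuits must resolve. This arithmetic is ours, not from the release:

```python
C = 299_792_458.0  # speed of light, m/s

def range_bin_to_timing(delta_d_m: float) -> float:
    """Round-trip timing resolution (seconds) needed for a given distance resolution."""
    return 2.0 * delta_d_m / C

print(range_bin_to_timing(0.05) * 1e12)  # ≈ 333.6 picoseconds for a 5 cm bin
```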

■ High, 37% photon detection efficiency enabling detection of objects up to a distance of 300 m
This product features an uneven texture on both the incident plane and the bottom of the pixels, along with an optimized on-chip lens shape. Incident light is diffracted to enhance the absorption rate, achieving a high 37% photon detection efficiency at the 940 nm wavelength commonly used in automotive LiDAR laser light sources. This allows the system to detect and recognize objects with high precision up to 300 m away, even in bright conditions where background light reaches 100,000 lux or higher.

Wednesday, June 18, 2025

Artilux and VisEra metalens collaboration

News release: https://www.artiluxtech.com/resources/news/1023

Artilux, the leader in GeSi (germanium-silicon) photonics technology and a pioneer of CMOS (complementary metal-oxide-semiconductor) based SWIR (short-wavelength infrared) optical sensing, imaging and communication, today announced its collaboration with VisEra Technologies (TWSE: 6789) on the latest Metalens technology. The newly unveiled Metalens technology differs from traditional curved lens designs by directly fabricating, on a 12” silicon substrate, fully planar, high-precision nanostructures for precise control of light waves. By synergizing Artilux’s core GeSi technology with VisEra’s advanced processing capabilities, the demonstrated mass-production-ready Metalens technology significantly enhances optical system performance, production efficiency and yield. This cutting-edge technology is versatile and can be broadly applied in areas such as optical sensing, optical imaging, optical communication, and AI-driven commercial applications.

Scaling the Future: Opportunities and Breakthroughs in Metalens Technology
With the rise of artificial intelligence, robotics, and silicon photonics applications, silicon chips implemented for optical sensing, imaging, and communication are set to play a pivotal role in advancing these industries. For example, smartphones and wearables with built-in image sensing, physiological signal monitoring, and AI-assistant capabilities will become increasingly prevalent. Moreover, with its advantages in bandwidth, reach, and power efficiency, silicon photonics is poised to become a critical component for supporting future AI model training and inference in AI data centers. As hardware designs require greater miniaturization at the chip level, silicon-based "Metalens" technology will lead and accelerate the deployment of these applications.

Metalens technology offers the benefits of single-wafer process integration and compact optical module design, paving the way for silicon chips to gain growth momentum in the optical field. According to Valuates Reports, the global market for Metalens technology was valued at US$41.8 million in 2024 and is projected to reach US$2.4 billion by 2031, growing at a CAGR of up to 80% during the 2025-2031 forecast period.
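As a sanity check of the quoted projection (our arithmetic, not from the release): growing from US$41.8 million in 2024 to US$2.4 billion in 2031 implies a compound annual growth rate of roughly 78%, consistent with the "up to 80%" figure.

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# 41.8 million (2024) -> 2.4 billion (2031), 7 years of growth
print(round(cagr(41.8e6, 2.4e9, 2031 - 2024), 3))  # ≈ 0.784, i.e. ~78% per year
```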

Currently, most optical systems rely on traditional optical lenses, which utilize parabolic or spherical surface structures to focus light and control its amplitude, phase, and polarization properties. However, this approach is constrained by physical limitations, and requires precise mechanical alignment. Additionally, the curved designs of complex optical components demand highly accurate coating and lens-formation processes. These challenges make it difficult to achieve wafer-level integration with CMOS-based semiconductor processes and optical sensors, posing a significant hurdle to the miniaturization and integration of optical systems.

Innovative GeSi and SWIR Sensing Technology Set to Drive Application Deployment via Ultra-Thin Optical Modules
Meta-Surface technology is redefining optical innovation by replacing traditional curved microlenses with ultra-thin, fully planar optical components. This advancement significantly reduces chip size and thickness, increases design freedom for optical modules, minimizes signal interference, and enables precise optical wavefront control. Unlike emitter-end DOE (Diffractive Optical Element) technology, Artilux’s innovative Metalens technology directly manufactures silicon-based nanostructures on 12” silicon substrates with ultra-high precision. By seamlessly integrating CMOS processes and core GeSi technology on a silicon wafer, this pioneering work enhances production efficiency and yield while supporting SWIR wavelengths. With increased optical coupling efficiency, this technology offers versatile solutions for AI applications in optical sensing, imaging, and communication, catering to a wide range of industries such as wearables, biomedical, LiDAR, mixed reality, aerospace, and defense.
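For intuition about how planar nanostructures can replace a curved lens: a textbook focusing metalens imposes a hyperbolic phase profile across its surface, with each nanostructure locally providing the required phase delay. The sketch below is the standard formula with an assumed SWIR wavelength and focal length, not Artilux's or VisEra's actual design:

```python
import math

WAVELENGTH = 1.55e-6  # assumed SWIR design wavelength, m
FOCAL = 2.0e-3        # assumed focal length, m

def metalens_phase(r_m: float, wavelength: float = WAVELENGTH, f: float = FOCAL) -> float:
    """Ideal phase delay (radians, wrapped to [0, 2π)) at radius r of a focusing metalens."""
    return (2.0 * math.pi / wavelength) * (f - math.hypot(r_m, f)) % (2.0 * math.pi)

print(metalens_phase(0.0))  # 0.0 at the lens center; phase advances toward the edge
```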

Neil Na, Co-Founder and Chief Technology Officer of Artilux, stated, "Artilux has gained international recognitions for its innovations in semiconductor technology. We are delighted to once again share our independently designed Meta-Surface solution, integrating VisEra's leading expertise in 12” wafer-level optical manufacturing processes. This collaboration successfully creates ultra-thin optical components that can precisely control light waves, and enables applications across SWIR wavelength for optical sensing, optical imaging, optical communication and artificial intelligence. We believe this technology not only holds groundbreaking value in the optical field but will also accelerate the development and realization of next-generation optical technologies."

JC Hsieh, Vice President in Research and Development Organization of VisEra, emphasized, "At VisEra, we continuously engage in global CMOS imaging and optical sensor industry developments while utilizing our semiconductor manufacturing strengths and key technologies R&D and partnerships to enhance productivity and efficiency. We are pleased that our business partner, Artilux, has incorporated VisEra’s silicon-based Metalens process technology to advance micro-optical elements integration. This collaboration allows us to break through conventional form factor limitations in design and manufacturing. We look forward to our collaboration driving more innovative applications in the optical sensing industry and accelerating the adoption of Metalens technology."

Metalens technology demonstrates critical potential in industries related to silicon photonics, particularly in enabling miniaturization, improved integration, and enhanced performance of optical components. As advancements in materials and manufacturing processes continue to refine the technology, many existing challenges are gradually being overcome. Looking ahead, Metalens are expected to become standard optical components in silicon photonics and sensing applications, driving the next wave of innovation in optical chips and expanding market opportunities.

 

Monday, June 16, 2025

Zaber application note on image sensors for microscopy

Full article link: https://www.zaber.com/articles/machine-vision-cameras-in-automated-microscopy

When to Use Machine Vision Cameras in Microscopy
Situation #1: High Throughput Microscopy Applications with Automated Image Analysis Software
Machine vision cameras are ideally suited to applications which require high throughput, are not limited by low light, and where a human will not look at the raw data. Designers of systems where the acquisition and analysis of images will be automated must change their perspective of what makes a “good” image. Rather than optimizing for images that look good to humans, the goal should be to capture the “worst” quality images which can still yield unambiguous results as quickly as possible when analyzed by software. If you are using “AI”, a machine vision camera is worth considering.
A common example is imaging consumables to which fluorescent markers will hybridize to specific sites. To read these consumables, one must check each possible hybridization site for the presence or absence of a fluorescent signal.
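In that workflow, a "good enough" image often reduces to one unambiguous decision per site. A minimal sketch of such a presence/absence call (the threshold and intensity values are hypothetical, for illustration only):

```python
THRESHOLD = 100.0  # hypothetical intensity cutoff, in sensor counts

def call_sites(site_intensities, threshold=THRESHOLD):
    """Classify each hybridization site as signal present (True) or absent (False)."""
    return [intensity >= threshold for intensity in site_intensities]

# Four hypothetical site readouts from one field of view
print(call_sites([20.0, 350.0, 95.0, 410.0]))  # [False, True, False, True]
```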

Situation #2: When a Small Footprint is Important
The small size, integration-friendly features, and cost effectiveness of machine vision cameras make them an attractive option for OEM devices where minimizing the device footprint and retail price are important considerations.

How are machine vision cameras different from scientific cameras?
The distinction between machine vision and scientific cameras is not as clear as it once was. The term “Scientific CMOS” (sCMOS) was introduced in the mid-2010s as advancements in CMOS image sensor technology led to the first CMOS cameras that could challenge the performance of then-dominant CCD image sensor technology. These new sCMOS sensors delivered improved performance relative to the CMOS sensors that were prevalent in MV cameras of the time. Since then, thanks to the rapid pace of CMOS image sensor development, the current generation of MV-oriented CMOS sensors boasts impressive performance. There are now many scientific cameras with MV sensors, and many MV cameras with scientific sensors.

Sunday, June 15, 2025

Conference List - December 2025

18th International Conference on Sensing Technology (ICST2025) - 1-3 December 2025 - Utsunomiya City, Japan - Website

International Technical Exhibition on Image Technology and Equipment (ITE) - 3-5 December 2025 - Yokohama, Japan - Website

7th International Workshop on New Photon-Detectors (PD2025) - 3-5 December 2025 - Bologna, Italy - Website

IEEE International Electron Devices Meeting - 6-10 December 2025 - San Francisco, CA, USA - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Friday, June 13, 2025

Videos of the day: UArizona and KAIST

UArizona Imaging Technology Laboratory's sensor processing capabilities

KAIST: Design parameters of freeform color splitters for image sensors

Thursday, June 12, 2025

Panasonic single-photon vertical APD pixel design

In a paper titled "Robust Pixel Design Methodologies for a Vertical Avalanche Photodiode (VAPD)-Based CMOS Image Sensor" Inoue et al. from Panasonic Japan write:

We present robust pixel design methodologies for a vertical avalanche photodiode-based CMOS image sensor, taking account of three critical practical factors: (i) “guard-ring-free” pixel isolation layout, (ii) device characteristics “insensitive” to applied voltage and temperature, and (iii) stable operation subject to intense light exposure. The “guard-ring-free” pixel design is established by resolving the tradeoff relationship between electric field concentration and pixel isolation. The effectiveness of the optimization strategy is validated both by simulation and experiment. To realize insensitivity to voltage and temperature variations, a global feedback resistor is shown to effectively suppress variations in device characteristics such as photon detection efficiency and dark count rate. An in-pixel overflow transistor is also introduced to enhance the resistance to strong illumination. The robustness of the fabricated VAPD-CIS is verified by characterization of 122 different chips and through a high-temperature and intense-light-illumination operation test with 5 chips, conducted at 125 °C for 1000 h subject to 940 nm light exposure equivalent to 10 kLux. 

Open access link to full paper: https://www.mdpi.com/1424-8220/24/16/5414

Cross-sectional views of a pixel: (a) a conventional SPAD and (b) a VAPD-CIS. N-type and P-type regions are drawn in blue and red, respectively.
 

(a) A chip photograph of VAPD-CIS overlaid with circuit block diagrams. (b) A circuit diagram of the VAPD pixel array. (c) A schematic timing diagram of the pixel circuit illustrated in (b).
 
(a) An illustrative time-lapse image of the sun. (b) Actual images of the sun taken at each time after starting the experiment. The test lasted for three hours, and as time passed, the sun, initially visible on the left edge of the screen, moved to the right.

Monday, June 09, 2025

Image Sensor Opening at Apple in Japan

Apple Japan

Image Sensor Technical Program Manager - Minato, Tokyo-to, Japan - Link

Friday, May 30, 2025

Photonic color-splitting image sensor startup Eyeo raises €15mn

Eyeo raises €15 million seed round to give cameras perfect eyesight

  • Eyeo replaces traditional filters with advanced color-splitting technology originating from imec, a world-leading research and innovation hub in nanoelectronics and digital technologies. For the first time, photons are not filtered but guided to single pixels, delivering maximum light sensitivity and unprecedented native color fidelity, even in challenging lighting conditions.
  • Compatible with any sensor, eyeo’s single photon guiding technology breaks resolution limits - enabling truly effective sub-0.5-micron pixels for ultra-compact, high-resolution imaging in XR, industrial, security, and mobile applications - where image quality is the top purchasing driver.

Eindhoven (Netherlands), May 7, 2025 – eyeo today announced it has raised €15 million in seed funding, co-led by imec.xpand and Invest-NL, joined by QBIC fund, High-Tech Gründerfonds (HTGF) and Brabant Development Agency (BOM). Eyeo revolutionizes the imaging market for consumer, industrial, XR and security applications by drastically increasing the light sensitivity of image sensors. This breakthrough unlocks picture quality, color accuracy, resolution, and cost efficiency never before possible in smartphones and beyond.

The €15 million raised will drive evaluation kit development, prepare for scale manufacturing of a first sensor product, and expand commercial partnerships to bring this breakthrough imaging technology to market.

The Problem: Decades-old color filter technology throws away 70% of light, crippling sensor performance
For decades, image sensors have relied on the application of red, green, and blue color filters on pixels to make your everyday color picture or video. Color filters, however, block a large portion of the incoming light, and thereby limit the sensitivity of the camera. Furthermore, they limit the scaling of the pixel size below ~0.5 micron. These longstanding issues have stalled advancements in camera technology, constraining both image quality and sensor efficiency. In smartphone cameras, manufacturers have compensated for this limitation by increasing the sensor (and thus camera) size to capture more light. While this improves low-light performance, it also leads to larger, bulkier cameras. Compact, high-sensitivity image sensors are essential for slimmer smartphones and emerging applications such as robotics and AR/VR devices, where size, power efficiency, and image quality are crucial.

The Breakthrough: Color-splitting via vertical waveguides
Eyeo introduces a novel image sensor architecture that eliminates the need for traditional color filters, making it possible to maximize sensitivity without increasing sensor size. Leveraging breakthrough vertical waveguide-based technology that splits light into colors, eyeo develops sensors that efficiently capture and utilize all incoming light, tripling sensitivity compared to existing technologies. This is particularly valuable in low-light environments, where current sensors struggle to gather enough light for clear, reliable imaging. Additionally, unlike traditional filters that block certain colors (information that is then interpolated through software processing), eyeo’s waveguide technology allows pixels to receive complete color data. This approach instantly doubles resolution, delivering sharper, more detailed images for applications that demand precision, such as computational photography, machine vision, and spatial computing. 
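The sensitivity and resolution claims above follow from simple photon accounting: a Bayer mosaic pixel sees only one of three color bands (so roughly one-third of the visible photons), and the two missing color values per pixel must be interpolated. A toy model of that accounting, under idealized assumptions of ours:

```python
# Idealized fractions of visible-band photons a pixel can use.
BAYER_FRACTION = 1.0 / 3.0   # each Bayer pixel passes ~one of three color bands
SPLITTER_FRACTION = 1.0      # an ideal splitter routes every photon to some pixel

sensitivity_gain = SPLITTER_FRACTION / BAYER_FRACTION
print(round(sensitivity_gain, 6))  # 3.0, matching the "tripling sensitivity" claim

# Color channels actually measured (not interpolated) at each pixel.
measured_channels = {"bayer": 1, "color_splitter": 3}
print(measured_channels["color_splitter"] - measured_channels["bayer"])  # 2 channels recovered per pixel
```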

Jeroen Hoet, CEO of eyeo: “Eyeo is fundamentally redefining image sensing by eliminating decades-old limitations. Capturing all incoming light and drastically improving resolution is just the start—this technology paves the way for entirely new applications in imaging, from ultra-compact sensors to enhanced low-light performance, ultra-high resolution, and maximum image quality. We’re not just improving existing systems; we’re creating a new standard for the future of imaging.”

Market Readiness and Roadmap
Eyeo has already established partnerships with leading image sensor manufacturers and foundries to ensure the successful commercialization of its technology. The €15M seed funding will be used to improve its current camera sensor designs further, optimizing the waveguide technology for production scalability and accelerating the development of prototypes for evaluation. By working closely with industry leaders, eyeo aims to bring its advanced camera sensors to a wide range of applications, from smartphones and VR glasses to any compact device that uses color cameras. The first evaluation kits are expected to be available for selected customers within the next two years. 

Eyeo is headquartered in Eindhoven (NL), with an R&D office in Leuven (BE).

Friday, May 23, 2025

Glass Imaging raises $20mn

PR Newswire: https://www.prnewswire.com/news-releases/glass-imaging-raises-20-million-funding-round-to-expand-ai-imaging-technologies-302451849.html

Glass Imaging Raises $20 Million Funding Round To Expand AI Imaging Technologies

LOS ALTOS, Calif., May 12, 2025 /PRNewswire/ -- Glass Imaging, a company harnessing the power of artificial intelligence to revolutionize digital image quality, today unveiled a Series A funding round led by global software investor Insight Partners. The $20 million round will allow Glass Imaging to continue to refine and implement their proprietary GlassAI technologies across a wide range of camera platforms - from smartphones to drones to wearables and more. The Series A round was joined by previous Glass Imaging investors GV (Google Ventures), Future Ventures and Abstract Ventures.

Glass Imaging uses artificial intelligence to extract the full image quality potential of current and future cameras by reversing lens aberrations and sensor imperfections. Glass works with manufacturers to integrate GlassAI software to boost camera performance 10x, resulting in sharper, more detailed images under various conditions that remain true to life, with no hallucinations or optical distortions.

"At Glass Imaging we are building the future of imaging technology," said Ziv Attar, Founder and CEO, Glass Imaging. "GlassAI can unlock the full potential of all cameras to deliver stunning ultra-detailed results and razor sharp imagery. The range of use cases and opportunities across industry verticals are huge."

"GlassAI leverages edge AI to transform Raw burst image data from any camera into stunning, high-fidelity visuals," said Tom Bishop, Ph.D., Founder and CTO, Glass Imaging. "Our advanced image restoration networks go beyond what is possible on other solutions: swiftly correcting optical aberrations and sensor imperfections while efficiently reducing noise, delivering fine texture and real image content recovery that outperforms traditional ISP pipelines."

"We're incredibly proud to lead Glass Imaging's Series A round and look forward to what the team will build next as they seek to redefine just how great digital image quality can be," said Praveen Akkiraju, Managing Director, Insight Partners. "The ceiling for GlassAI integration across any number of platforms and use cases is massive. We're excited to see this technology expand what we thought cameras and imaging devices were capable of." Akkiraju will join Glass Imaging's board and Insight's Jonah Waldman will join Glass Imaging as a board observer.

Glass Imaging previously announced a $9.3M extended Seed funding round in 2024 led by GV and joined by Future Ventures, Abstract and LDV Capital. That funding round followed an initial Seed investment in 2021 led by LDV Capital along with GroundUP Ventures.

For more information on Glass Imaging and GlassAI visit https://www.glass-imaging.com/