Thursday, January 22, 2026

Synthetic aperture imager

Link: scitechdaily.com/this-breakthrough-image-sensor-lets-scientists-see-tiny-details-from-far-away/

Open-access paper: Multiscale aperture synthesis imager  https://www.nature.com/articles/s41467-025-65661-8

A new lens-free imaging system uses software to resolve finer details from farther away than conventional optical systems could before.

Imaging technology has reshaped how scientists explore the universe – from charting distant galaxies using radio telescope arrays to revealing tiny structures inside living cells. Despite this progress, one major limitation has remained unresolved. Capturing images that are both highly detailed and wide in scope at optical wavelengths has required bulky lenses and extremely precise physical alignment, making many applications difficult or impractical.

Researchers at the University of Connecticut may have found a way around this obstacle. A new study led by Guoan Zheng, a biomedical engineering professor and director of the UConn Center for Biomedical and Bioengineering Innovation (CBBI), and his team at the UConn College of Engineering was published in Nature Communications. The work introduces a new imaging strategy that could significantly expand what optical systems can do in scientific research, medicine, and industrial settings.

Why Synthetic Aperture Imaging Breaks Down at Visible Light

“At the heart of this breakthrough is a longstanding technical problem,” said Zheng. “Synthetic aperture imaging – the method that allowed the Event Horizon Telescope to image a black hole – works by coherently combining measurements from multiple separated sensors to simulate a much larger imaging aperture.”

This approach works well in radio astronomy because radio waves have long wavelengths, which makes precise coordination between sensors achievable. Visible light operates on a much smaller scale. At those wavelengths, the physical accuracy needed to keep multiple sensors synchronized becomes extremely difficult to maintain, placing strict limits on traditional optical synthetic aperture systems.
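
To put rough numbers on that scale gap, here is an illustrative comparison; the λ/20 stability criterion and the wavelengths are assumptions chosen for illustration, not figures from the paper.

```python
# Illustrative scale comparison (assumed lambda/20 stability criterion):
# the mechanical tolerance for coherent aperture synthesis shrinks in
# proportion to the wavelength being combined.
for name, wavelength_m in [("EHT radio, 1.3 mm", 1.3e-3),
                           ("visible light, 520 nm", 520e-9)]:
    tolerance_nm = wavelength_m / 20 * 1e9
    print(f"{name}: paths must stay stable to ~{tolerance_nm:,.0f} nm")
# Radio arrays can tolerate tens of micrometres of drift; a visible-light
# array gets only ~26 nm, far below practical mechanical stability.
```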

Letting Software Do the Synchronizing

The Multiscale Aperture Synthesis Imager (MASI) addresses this challenge in a fundamentally different way. Instead of requiring sensors to remain perfectly synchronized during measurement, MASI allows each optical sensor to collect light on its own. Computational algorithms are then used to align and synchronize the data after it has been captured.

Zheng describes the concept as similar to several photographers observing the same scene. Rather than taking standard photographs, each one records raw information about the behavior of light waves. Software later combines these independent measurements into a single image with exceptionally high detail.

This computational approach to phase synchronization removes the need for rigid interferometric setups, which have historically prevented optical synthetic aperture imaging from being widely used in real-world applications.

How MASI Captures and Rebuilds Light

MASI differs from conventional optical systems in two major ways. First, it does not rely on lenses to focus light. Instead, it uses an array of coded sensors placed at different locations within a diffraction plane. Each sensor records diffraction patterns, which describe how light waves spread after interacting with an object. These patterns contain both amplitude and phase information that can later be recovered using computational methods.

After the complex wavefield from each sensor is reconstructed, the system digitally extends the data and mathematically propagates the wavefields back to the object plane. A computational phase synchronization process then adjusts the relative phase differences between sensors. This iterative process increases coherence and concentrates energy in the combined image.
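
As a rough illustration of those two numerical steps, here is a minimal sketch, not the authors' code: angular-spectrum back-propagation of each recovered wavefield to the object plane, followed by a greedy search for the per-sensor phase offsets that maximize energy concentration in the coherent sum. Grid size, wavelength, distances, and the sharpness metric are placeholders.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex wavefield by distance z (z < 0 back-propagates)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    mask = arg > 0                       # discard evanescent components
    kz = 2 * np.pi * np.sqrt(np.where(mask, arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * mask * np.exp(1j * kz * z))

def synchronize(object_fields, n_iter=30, step=0.05):
    """Greedy per-sensor phase search that maximizes energy concentration
    (sum of |combined field|^4) of the coherent sum; sensor 0 is the reference."""
    phases = np.zeros(len(object_fields))
    def sharpness(ph):
        total = sum(f * np.exp(1j * p) for f, p in zip(object_fields, ph))
        return np.sum(np.abs(total) ** 4)
    for _ in range(n_iter):
        for k in range(1, len(phases)):
            for delta in (step, -step):
                trial = phases.copy()
                trial[k] += delta
                if sharpness(trial) > sharpness(phases):
                    phases = trial
    return phases

# Usage outline (placeholder values):
#   fields = [angular_spectrum(w, 520e-9, 2e-6, -0.05) for w in sensor_wavefields]
#   offsets = synchronize(fields)
```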

This software-based optimization is the central advance. By aligning data computationally rather than physically, MASI achieves resolution beyond the diffraction limit of any single sensor's aperture, sidestepping restrictions that have traditionally governed optical imaging.

A Virtual Aperture With Fine Detail

The final result is a virtual synthetic aperture that is larger than any single sensor. This allows the system to achieve sub-micron resolution while still covering a wide field of view, all without using lenses.

Traditional lenses used in microscopes, cameras, and telescopes force engineers to balance resolution against working distance. To see finer details, lenses usually must be placed very close to the object, sometimes just millimeters away. That requirement can limit access, reduce flexibility, or make certain imaging tasks invasive.

MASI removes this constraint by capturing diffraction patterns from distances measured in centimeters and reconstructing images with sub-micron detail. Zheng compares this to being able to examine the fine ridges of a human hair from across a desk rather than holding it just inches from your eye.
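
A back-of-the-envelope check shows why this is plausible; the wavelength, working distance, and aperture size below are illustrative assumptions, not the paper's specifications.

```python
# Resolution is set by the numerical aperture of the synthetic aperture,
# not by any lens: roughly lambda / (2 * NA), with NA ~ sin(atan(D / (2z))).
import math

wavelength = 0.5e-6   # green light, metres
z = 0.05              # assumed 5 cm working distance
d = 0.06              # assumed 6 cm synthetic aperture spanned by the array
na = math.sin(math.atan(d / (2 * z)))
print(f"NA = {na:.2f}, resolution = {wavelength / (2 * na) * 1e6:.2f} um")
# -> NA = 0.51, resolution ~ 0.49 um: sub-micron detail from centimetres
#    away, provided the array spans an aperture comparable to the distance.
```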

Scalable Applications Across Many Fields

“The potential applications for MASI span multiple fields, from forensic science and medical diagnostics to industrial inspection and remote sensing,” said Zheng. “But what’s most exciting is the scalability – unlike traditional optics that become exponentially more complex as they grow, our system scales linearly, potentially enabling large arrays for applications we haven’t even imagined yet.”

The Multiscale Aperture Synthesis Imager represents a shift in how optical imaging systems can be designed. By separating data collection from synchronization and replacing bulky optical components with software-controlled sensor arrays, MASI shows how computation can overcome long-standing physical limits. The approach opens the door to imaging systems that are highly detailed, adaptable, and capable of scaling to sizes that were previously out of reach.

Tuesday, January 20, 2026

Eric Fossum receives 2026 Draper Prize for Engineering

Link: https://home.dartmouth.edu/news/2026/01/eric-fossum-awarded-draper-prize-engineering

Eric R. Fossum, the John H. Krehbiel Sr. Professor for Emerging Technologies, has been awarded the 2026 Charles Stark Draper Prize for Engineering, which is granted every two years by the National Academy of Engineering and is one of the world’s preeminent honors for engineering achievement.

The NAE recognized Fossum “for innovation, development, and commercialization of the complementary metal-oxide semiconductor active pixel image sensor,” an invention that remains the core technology behind roughly 7 billion cameras produced each year.

“Eric Fossum is a pioneering semiconductor device physicist and engineer whose invention of the CMOS active pixel image sensor, or ‘camera on a chip,’ has transformed imaging across everyday life, industry, and scientific discovery,” the NAE said in announcing the prize, which includes a $500,000 cash award.

The honor is the latest in a string of accolades for Fossum, who in addition to his role as a professor at Thayer School of Engineering also serves as vice provost for entrepreneurship and technology transfer and directs the PhD Innovation Program.

His other honors include the Queen Elizabeth Prize for Engineering, the National Medal of Technology and Innovation awarded at a White House ceremony last year, and a Technical Emmy Award recognizing the transformative impact of Fossum’s invention.

Today, CMOS image sensors, originally developed to make digital cameras for space faster, better, and cheaper, are behind billions of images captured in a vast variety of settings: selfies, high-definition videos, dental X-rays, and space images.

“Eric Fossum’s inventions have revolutionized digital imaging across industries,” says President Sian Leah Beilock. “His work is a prime example of how the applied research our faculty foster and undertake can drive innovation and improve our world.” 

Research for NASA

Tasked with creating smaller cameras for NASA spacecraft that would use less energy, Fossum led the team that invented and developed the CMOS image sensor technology at the Jet Propulsion Laboratory at the California Institute of Technology in the 1990s. The CMOS image sensor integrated all the essential camera functions on a single piece of silicon—each chip contained arrays of light-sensitive pixels, each with its own amplifier.

Fossum recalls the moment when their first image sensor worked flawlessly in testing. It was a eureka moment, but only in hindsight. His initial reaction was tempered by caution. “It seemed so straightforward that I figured others must have tried this before, and there must be a fatal flaw somewhere. So, it was exhilarating to see that it was working,” he says.

The CMOS sensor was commercialized through Photobit, the company he co-founded and helped lead until its acquisition by Micron. 

As the CMOS sensor grew in sophistication, so too did its impact, finding applications in both predictable and surprising ways, such as swallowable pill cameras that can take images inside the body and the explosion of smartphone cameras, which forever changed how we capture and share our lives.

“The impact it has had on social justice has been huge, which I did not anticipate at all, and is truly gratifying. It protects people that might otherwise be powerless, and those with power from false accusations,” Fossum says.

Fossum, a Connecticut native, received a bachelor of science degree in physics and engineering from Trinity College, and a PhD in engineering and applied science from Yale in 1984. Prior to his work at the Jet Propulsion Lab, he was a faculty member at Columbia University. After leading several startups, consulting, and co-founding the International Image Sensor Society, he joined Dartmouth in 2010.
Fossum’s many other honors include the NASA Exceptional Achievement Medal, the IEEE Jun-ichi Nishizawa Medal, and induction into the U.S. Space Foundation Technology Hall of Fame in 1999 and the National Inventors Hall of Fame in 2011. He also served as CEO of Siimpel, developing MEMS devices for autofocus in smartphone camera modules, and worked as a consultant for Samsung on time-of-flight sensor development. He is a member of the National Academy of Engineering and a fellow of the National Academy of Inventors, the Institute of Electrical and Electronics Engineers, and Optica.

Counting photons: The future of imaging

Fossum continues to push the boundaries of imaging. His more recent invention, the quanta image sensor, was developed at Dartmouth and enables high-resolution imaging in extremely low-light conditions.

“We’re working on sensors that can count photons, one at a time,” he says. “Imagine being able to take a photo in almost complete darkness or measuring extremely faint signals in biology. It’s like turning the lights on in a place that was previously invisible to us.” 

Fossum and two of his former Dartmouth students co-founded Gigajot to commercialize the technology.

“Eric’s achievements are not the result of a single breakthrough, but of sustained curiosity and a focus on real-world impact,” says Douglas Van Citters ’99, Thayer ’03, ’06, interim Thayer dean. “To this day, he brings exceptional dedication to teaching and research, along with a passion for entrepreneurship that permeates Dartmouth, especially Thayer. And that spirit has inspired generations of engineers at Dartmouth who, like Eric, are committed to improving lives through the technologies they create.”

When asked about where he sees the field of imaging in the next decade, Fossum imagines a world where great images can be captured using a handful of photons and where computational imaging allows humans to see the world in ways eyes themselves never could. 

“The ability to capture images in low light will continue to improve,” he predicts. “And we’re likely to see a proliferation of augmented reality technologies that will change the way we experience the world around us.”

 In his mind, the grand challenge ahead is miniaturization—creating sensors with pixels so tiny that they become smaller than the wavelength of light itself. With this breakthrough, imaging technology could scale to the point where a single chip contains billions of pixels, opening new possibilities for everything from medical diagnostics to space exploration.

Along with his continuing work on sensors, Fossum draws from his extensive experience in innovation and entrepreneurship in his role as vice provost and in overseeing the PhD Innovation Program.

He says that the program trains students not just to think creatively but to apply their research in ways that have a meaningful impact.

“It is just so much more satisfying to make a real impact with the work that you do,” he says.

The awards ceremony is scheduled for Feb. 18 in Washington, D.C. As he did with the Queen Elizabeth Prize, Fossum plans to donate the majority of the Draper Prize funds to STEM-related charities.

Monday, January 19, 2026

Mythic image sensor

Link: https://www.eetimes.com/mythic-rises-from-the-ashes-with-125-million-funding-round/

Mythic Rises from the Ashes with $125 Million Funding Round 

Excerpt: 

A separate product family, dubbed “Starlight,” will use a Mythic compute chiplet hybrid-bonded under a vision sensor’s photodiode array. The two dies will use less than 1 W between them.

Ozcelik said he noticed a gap in the market for this type of device while previously working at ON Semiconductor.

“One of the biggest challenges for image sensors is low light performance,” he said. “Dynamic range is another major problem, especially in mission critical applications.”

A Mythic AI accelerator could run a neural network to improve low-light performance and dynamic range directly next to the sensor. Image sensors made for applications like cellphones are very small (one-third of an inch), and performance suffers as they get smaller, Ozcelik said. Mythic has a unique opportunity here as its technology is compact, and crucially, it uses very little power, according to Ozcelik (photodiode arrays are extremely thermally sensitive, meaning even a small DSP couldn’t be placed directly under the photodiode array).
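
The thermal point can be made concrete with a common rule of thumb (illustrative, not Mythic data): sensor dark current roughly doubles for every 6 to 8 °C of temperature rise, so even a modest heat source under the photodiode array multiplies the noise floor.

```python
def dark_current_scale(delta_t_c, doubling_interval_c=7.0):
    """Multiplicative increase in dark current for a temperature rise,
    using the rule of thumb that dark current doubles every ~7 degC."""
    return 2.0 ** (delta_t_c / doubling_interval_c)

for dt in (5, 10, 20):
    print(f"+{dt} degC under the array -> dark current x{dark_current_scale(dt):.1f}")
# +5 degC -> x1.6, +10 degC -> x2.7, +20 degC -> x7.2
```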

Mythic is going to build this sensor and AI accelerator combination itself, and both the accelerator chiplet and the image sensor product will tape out this year, Ozcelik said.

Overall, Ozcelik is pragmatic about the scale of the challenges ahead, particularly given the company’s move into the data center where it will compete with Nvidia.

“[Our advantage] has to be incredibly material,” he said. “It has to be at least one hundred times, hopefully more.”

Saturday, January 17, 2026

Voyant releases solid-state FMCW LiDAR

Press release: https://voyantphotonics.com/news/1075/

New York, NY – December 17, 2025 – Voyant Photonics, the leader in chip-scale frequency-modulated continuous-wave (FMCW) LiDAR, today announced its Helium™ Platform of fully solid-state LiDAR sensors and modules. The solution is built on a silicon photonics chip, enabling a breakthrough architecture designed to deliver unprecedented reliability, integration, and performance for industrial automation, robotics, and mobile autonomy.

Leveraging Voyant’s proprietary Photonic Integrated Circuit (PIC), Helium offers camera-like simplicity and unmatched flexibility. Helium uses a dense two-dimensional photonic focal plane array with fully integrated 2D on-chip beam steering, eliminating unreliable scanning methods such as MEMS and mirrors and leaving no moving parts. The FMCW LiDAR chip leverages a two-dimensional array of surface emitters to create a fully solid-state LiDAR in an ultra-compact, rugged design. Helium also supports multi-sensor configurations, combining, for instance, wide-FoV short-range and narrow-FoV long-range sensing in one system, delivering the most versatile and cost-effective LiDAR solution for advanced perception applications.

The first Helium prototype will be demonstrated at Voyant’s booth (LVCC, West Hall, Booth #4875) at CES 2026 in Las Vegas, January 6-9, marking a major milestone in advancing silicon-photonics LiDAR from R&D into the high-volume systems powering the spread of Physical AI.

“Helium represents the next step in our mission to deliver the most affordable high-performance LiDAR sensor ever,” said Voyant CEO Clément Nouvel. “Industrial and consumer markets demand sensors that are small, cost efficient, and highly reliable. Helium provides all of that while delivering performance that unlocks new classes of intelligent machines.”

A Flexible Platform to Move Solid-State LiDAR Forward

Helium extends the technology foundation proven in Voyant’s Carbon™ product line, bringing full two-dimensional beam steering to a silicon-photonics platform for the first time. The result is a compact, high-precision 4D sensor that meets the highest industry standards for safety and reliability.

Key advantages include:

  •  True solid-state — no MEMS, polygon scanners, or rotating assemblies
  •  High-resolution FPA architecture spanning from 12,000 pixels to over 100,000 pixels
  •  Long-range FMCW performance with per-pixel radial velocity (see the sketch after this list)
  •  Software-defined LiDAR (SDL) enabling adaptive scan patterns and region of interest
  •  Ultra-compact size, as small as a matchbox (<150 g mass and <50 cm³ volume), ideal for drones, mobile robots, and compact industrial systems
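
For readers unfamiliar with how FMCW yields per-pixel velocity, here is a minimal sketch of the textbook triangular-chirp math; the parameter values are assumptions, and this is not Voyant's implementation. The up- and down-chirp beat frequencies separate range from Doppler.

```python
C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # assumed telecom-band operating wavelength, m
B = 1.0e9             # assumed chirp bandwidth, Hz
T = 10e-6             # assumed chirp duration, s

def range_and_velocity(f_beat_up, f_beat_down):
    """Range and radial velocity from the two beat frequencies of a
    triangular chirp: f_beat_up = f_range - f_doppler and
    f_beat_down = f_range + f_doppler for an approaching target."""
    f_range = 0.5 * (f_beat_up + f_beat_down)
    f_doppler = 0.5 * (f_beat_down - f_beat_up)
    rng = C * T * f_range / (2 * B)    # beat frequency -> distance
    vel = WAVELENGTH * f_doppler / 2   # Doppler shift -> radial velocity
    return rng, vel

# Example: a target 20 m away, closing at 1.5 m/s
f_r = 2 * B * 20.0 / (C * T)   # ~13.3 MHz range beat
f_d = 2 * 1.5 / WAVELENGTH     # ~1.94 MHz Doppler shift
print(range_and_velocity(f_r - f_d, f_r + f_d))  # -> (20.0, 1.5)
```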

Field of view and range can be tailored with different lenses, and the platform scales from core module options to a fully enclosed sensor. Helium is built on a 2D array of surface-emitting photonic antennas combined with a fixed lens and integrated electronics, forming a rugged module ideal for embedded perception.

With no moving parts and monolithic photonic integration, Helium offers an estimated 20× improvement in MTBF over legacy ToF LiDAR architectures, a critical reliability requirement for high-duty-cycle industrial fleets.

Engineered for Scalable Manufacturing 

As with the Carbon family, Helium is built entirely on Voyant’s proprietary silicon-photonics platform, enabling new levels of performance and integration. This deep integration eliminates the unreliable optical alignments that limit traditional ToF LiDAR manufacturability. Helium leverages the same mature photonics foundry ecosystem as the optical datacom industry, allowing Voyant to scale production toward semiconductor-class cost structures.

From Carbon to Helium: Voyant Advances a Modular LiDAR Platform for Broader Adoption

Voyant established its leadership in compact, cost-optimized FMCW sensing for compute-constrained platforms with its first-generation Carbon™ family, extended last week with the new Carbon 32 and Carbon 64 variants. Helium builds directly on these advances, expanding the architecture from 1D to 2D on-chip beam steering, with higher resolution and a fully solid-state scan engine. Voyant now enables OEMs to integrate its sensing technology directly into their machines by offering module-only access along with full design-in support. This allows partners to build customized, high-performance sensor solutions tailored to their exact requirements.

Helium sensors and modules will be available with multiple resolution and range configurations, supporting a wide choice of field-of-view options—from ultra-wide coverage approaching 180° down to narrower, long-range targeting optics. These modular variants enable OEMs and developers to select and integrate lenses that best suit their application, allowing LiDAR architectures to be tailored for mobile robots, material-handling systems, smart infrastructure, and emerging edge-compute platforms. 

Wednesday, January 14, 2026

Leica image sensor development?

There are some recent news reports that Leica is developing its own image sensor.

Petapixel: https://petapixel.com/2026/01/02/leica-is-developing-its-own-image-sensors-again/

Leica Rumors: https://leicarumors.com/2026/01/01/leica-is-developing-its-own-camera-sensor-again-most-likely-for-the-leica-m12-camera.aspx/

Excerpt:

In a recent podcast, Dr. Andreas Kaufmann (Chairman of the Supervisory Board and majority shareholder of Leica Camera AG) confirmed that Leica is again developing their own sensor, most likely for the next Leica M12 camera (Google translation):

Furthermore, as has already become somewhat known, we are also developing our own sensor again. […] Up until the M10, we had a sensor of European origin. It was manufactured by AMS in Graz, or rather, developed by their Dutch development office. And the foundry itself was in Grenoble, a French company. And then there was the transition with the M11 to Sony sensors. It’s no secret that they’re in there. At the same time, we started developing our own sensor again, in a more advanced version. I think we’ve made significant progress with that. We can’t say more at the moment. 

Monday, January 05, 2026

Eric Fossum receives 2026 IEEE Nishizawa Medal

Link: https://engineering.dartmouth.edu/news/eric-fossum-to-receive-2026-ieee-jun-ichi-nishizawa-medal

Eric Fossum Named 2026 Recipient of IEEE Jun-ichi Nishizawa Medal
Dec 17, 2025

Eric R. Fossum, the John H. Krehbiel Sr. Professor for Emerging Technologies and vice provost for entrepreneurship and technology transfer at Dartmouth, has been named the 2026 recipient of the Institute of Electrical and Electronics Engineers' (IEEE) Jun-ichi Nishizawa Medal for the "invention, development, and commercialization of the CMOS image sensor" that revolutionized digital imaging around the world.

Fossum joins a distinguished group of some of the world's most renowned engineers and scientists selected by IEEE to receive the organization's highest honors for their contributions to technology, society, and the engineering profession. 

The prize is awarded annually by IEEE, the largest technical professional organization in the world dedicated to advancing technology for humanity.

Eric Fossum and the team that invented the CMOS image sensor, at NASA's Jet Propulsion Laboratory. (Photo courtesy of NASA/JPL-Caltech)

Fossum led the team at NASA's Jet Propulsion Laboratory that developed the complementary metal-oxide-semiconductor (CMOS) sensor during the early 1990s, an innovation that dramatically miniaturized cameras used in space missions onto a single chip. The "camera on a chip" sensor subsequently made digital photography and imaging widely accessible worldwide. 

Today, the CMOS sensor is integrated in nearly every smartphone, as well as countless other devices including webcams, medical imaging devices, and automobile cameras.

Fossum will formally receive the medal at a ceremony in New York City in April 2026. Named in honor of the "father of Japanese microelectronics," the Nishizawa prize also comes with an honorarium, which Fossum plans to donate to STEM-related charities.

Fossum co-founded Photobit Corporation to commercialize the CMOS sensor, serving as CEO, before the company was acquired by Micron. He also served as CEO of Siimpel Corporation, which developed MEMS-based camera modules with autofocus and shutter functions for cell phones. More recently, he served as chairman of Gigajot Technology Inc., which he co-founded with two former Dartmouth PhD students to develop and commercialize quanta image sensors, which they developed at Dartmouth.

Fossum joined Dartmouth's engineering faculty in 2010 and helped launch the PhD Innovation Program, the nation's first doctoral level program focused on research translation and entrepreneurship.

Fossum is a member of the National Academy of Engineering. He was inducted into the National Inventors Hall of Fame in 2011 and, to date, holds 185 US patents. He is a fellow of the National Academy of Inventors, an IEEE life fellow, an Optica fellow, and a member of the Society of Motion Picture and Television Engineers and the American Association for the Advancement of Science.

Throughout his career, Fossum has earned numerous accolades for his work, including the Queen Elizabeth Prize for Engineering in 2017, a Technology & Engineering Emmy Award from the National Academy of Television Arts and Sciences in 2021, and most recently the National Medal of Technology and Innovation from President Biden in 2025.

Thursday, January 01, 2026

Conference List - June 2026

The International SPAD Sensor Workshop - 1-4 June 2026 - Seoul, South Korea - Website

SPIE Photonics for Quantum - 8-11 June 2026 - Waterloo, Ontario, Canada - Website

AutoSens USA 2026 - 9-11 June 2026 - Detroit, Michigan, USA - Website

Sensor+Test - 9-11 June 2026 - Nuremberg, Germany - Website

Smart Sensing - 10-12 June 2026 - Tokyo, Japan - Website

IEEE/JSAP Symposium on VLSI Technology and Circuits - 14-18 June 2026 - Honolulu, Hawaii, USA - Website

Quantum Structure Infrared Photodetector - 14-19 June 2026 - Sète, France - Website

International Conference on Sensors and Sensing Technology (ICCST2026) - 15-17 June 2026 - Florence, Italy - Website

International Conference on IC Design and Technology (ICICDT) - 22-24 June 2026 - Dresden, Germany - Website

Automate 2026 - 22-25 June 2026 - Chicago, Illinois, USA - Website

27th International Workshop on Radiation Imaging Detectors - 28 June-2 July 2026 - Ghent, Belgium - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Friday, December 26, 2025

Prophesee leadership change

Prophesee Appoints Jean Ferré as Chief Executive Officer to Lead Event-based Vision Sensing Pioneer in Next Stage of Growth

Paris, France – December 23, 2025 – Prophesee, a pioneer and global leader in event-based vision technology, today announced the appointment of Jean Ferré as Chief Executive Officer. He has been designated by the board to succeed Luca Verre, Prophesee’s co-founder and former CEO, who is leaving the company. This leadership transition comes as the company enters a new phase of commercialization and growth, building on a strong technological and organizational foundation and welcoming new investors. The company is sharpening its near-term focus on sectors whose high-value use cases currently show the strongest demand and adoption momentum, such as security, defense and aerospace, and industrial automation. Prophesee will continue to support volume vision-enabled application markets where it has achieved initial commercial success, such as IoT, AR/VR, and consumer electronics.

[...]

Full press release is available here: https://www.prophesee.ai/2025/12/23/prophesee-appoints-jean-ferre-as-chief-executive-officer-to-lead-event-based-vision-sensing-pioneer-in-next-stage-of-growth/ 

Wednesday, December 24, 2025

MagikEye's real-time 3D system at CES

MagikEye to Showcase New High-Resolution Real-Time 3D Evaluation System at CES

Reference platform delivers >8,000-point 3D point clouds at 30 FPS for robotics, low-cost LiDAR, and automotive in-cabin deployments

STAMFORD, Conn.--(BUSINESS WIRE)--Magik Eye Inc (www.magik-eye.com), a developer of advanced 3D depth sensing based on its ILT™ (Invertible Light Technology), will be showcasing a new high-resolution, real-time ILT evaluation system at the upcoming Consumer Electronics Show. The system is designed to help customers evaluate ILT performance, validate configurations, and begin application development for robotics, low-cost LiDAR-class replacement, and automotive in-cabin applications.

The new evaluation system is a reference implementation, not a commercial sensor product. It delivers a 3D point cloud of more than 8,600 points per frame at 30 frames per second, corresponding to more than 259,000 depth points per second, while maintaining real-time operation and low latency (~33 ms). This represents roughly 2× the spatial point density of MagikEye’s prior evaluation platform without sacrificing frame rate.

“Customers evaluating depth sensing technologies want realistic, real-time data they can actually build on,” said Skanda Visvanathan, VP of Business Development at MagikEye. “This reference system is designed to shorten the path from evaluation to application development by delivering higher-resolution ILT depth at a full 30 FPS, in a form factor and performance envelope aligned with embedded systems.”

Designed for real-world evaluation and development, the system enables customers to assess ILT depth sensing in their own environments, begin application software development using live 3D point cloud output, and validate specific ILT configurations—including field of view, operating range, optical setup, and processing pipeline—prior to custom module design.

Key characteristics of the evaluation platform include a wide 105° × 79° field of view, a wide operating range of 0.3 m to 2 m (with support for near-field proximity use cases), and operation in bright indoor lighting conditions of up to ~50,000 lux, dependent on distance and target reflectance.

Unlike depth solutions that increase point density by reducing frame rate, MagikEye’s ILT evaluation system maintains a full 30 FPS, enabling depth perception suitable for dynamic, real-time environments. ILT™ can scale to even higher frame rates with increased processing performance.

At CES, MagikEye will demonstrate how the evaluation system supports development and prototyping across robotics applications such as real-time perception and navigation, low-cost LiDAR-class embedded sensing, and automotive in-cabin occupancy and interior monitoring.

The evaluation system integrates with MagikEye’s MKE API, allowing customers to stream point clouds and integrate ILT depth data into existing software stacks.

MagikEye will be showcasing the new evaluation system at CES in Las Vegas. To schedule a meeting or request a demonstration, please contact ces2026@magik-eye.com. 

Monday, December 22, 2025

AZO Sensors interview article on Teledyne e2v CCD imagers

The Enduring Relevance of CCD Sensors in Scientific and Space Imaging

(Interview with Marc Watkins, Teledyne e2v)

While CMOS technology has become the dominant force in many imaging markets, Charge-Coupled Devices (CCDs) continue to hold an essential place in scientific and space imaging. From the Euclid Space Telescope to cutting-edge microscopy and spectroscopy systems, CCDs remain the benchmark for precision, low-noise performance, and reliability in mission-critical environments.

In this interview, Marc Watkins from Teledyne e2v discusses why CCD technology continues to thrive, the company’s long-standing heritage in space missions and scientific discovery, and how ongoing innovation is ensuring CCDs remain a trusted solution for the most demanding imaging applications.

To begin, could you provide an overview of your role at Teledyne e2v and the types of imaging applications your team typically supports?

I manage the CCD product portfolio and associated sales globally. Our CCDs are mostly used in scientific applications such as astronomy, microscopy, spectroscopy, in vivo imaging, X-ray imaging, and space imaging. Almost every large telescope worldwide uses our CCDs for their visible light instruments.

CCDs are vital for medical research, especially for in vivo preclinical trials in areas such as cancer research. Advanced microscopy techniques such as Super Resolution Microscopy require the extreme sensitivity of EMCCDs. Not all CCDs are hidden in labs, on top of mountains, or in space; you’ll likely have passed a CCD in airport security without realising it.

In a time when CMOS technology has become dominant in most imaging markets, what are the primary reasons CCD sensors still maintain relevance in scientific, astronomical, and space-based applications?

We observe that in many markets, CMOS has made significant advances; however, CCDs remain the best overall solution for many niche applications, such as the ones I just described. The technical advantages vary greatly between applications.

Could you elaborate on some of the technical advantages CCD sensors offer over CMOS in high-performance or mission-critical imaging environments?

CCDs are great for long integrations where larger charge capacities, higher linearity, and low noise provide the best performance. They can be deeply cooled, making dark noise negligible. CCDs can be manufactured on thicker silicon, which gives better Red/near-infrared sensitivity. CCD pixels can be combined or “binned” together noiselessly, a technique widely used in spectroscopy. Specialized “Electron Multiplying” CCDs are sensitive enough to count individual photons.
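
A small simulation makes the "noiseless binning" point concrete (illustrative numbers, not Teledyne data): summing charge on-chip pays the read noise once, while summing separately digitized reads adds read noise in quadrature.

```python
import numpy as np

rng = np.random.default_rng(0)
signal_e = 25.0       # assumed electrons per pixel (faint, spectroscopy-like)
read_noise_e = 5.0    # assumed read noise, electrons rms per read
n_trials, bin_px = 100_000, 4    # 2x2 binning

photons = rng.poisson(signal_e, (n_trials, bin_px))  # per-pixel shot noise

# CCD: charge from 4 pixels summed on-chip, then a single read
ccd = photons.sum(axis=1) + rng.normal(0, read_noise_e, n_trials)
# CMOS-style: each pixel read separately (read noise each), summed digitally
cmos = (photons + rng.normal(0, read_noise_e, (n_trials, bin_px))).sum(axis=1)

for name, x in (("CCD charge binning", ccd), ("digital summation", cmos)):
    print(f"{name}: SNR = {x.mean() / x.std():.1f}")
# CCD: 100 / sqrt(100 + 25) ~ 8.9;  digital: 100 / sqrt(100 + 4*25) ~ 7.1
```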

What are some of the unique requirements in space or astronomy applications that make CCDs a more suitable choice than CMOS?

Most astronomy applications use very long integration times, require excellent Red/NIR response, and have no problem cooling to -100 °C, making CCDs a much better solution.

For space, the answer can be as simple as our mission heritage, making them a low-risk option. Since 1986, Teledyne’s sensors have unlocked countless scientific discoveries from over 160 flown missions. Our CCDs can be found exploring the deep expanses of space with the Hubble and Euclid Space Telescopes, imaging the sun from solar observatories, navigating Mars with rovers, and monitoring the environment with the Copernicus Earth observation Sentinel satellites.

As CMOS technology continues to advance, are you seeing any significant closing of the performance gap in areas where CCDs have traditionally been stronger, such as low noise, uniformity, or quantum efficiency?

For most of our applications, recent advances in CMOS technology have had little impact on the CCD business. An example of this might be the development of improved high-speed CMOS. If high speed is critical, then CMOS is already the incumbent technology. Where quantum efficiency is concerned, we can offer the same backthinning and AR coatings for both CCD and CMOS technologies, with a peak QE of up to 95 %.

One area of transition for us is in space applications, such as Earth observation, where improvements in areas such as radiation hardness, frame rate, and TDI are steering many of our customers from our CCD to our CMOS solutions.

How has Teledyne e2v continued to innovate or evolve its CCD product lines to meet the demands of modern applications while CMOS continues to gain market share?

Our CCD product lines have a long development heritage. In general, we aim to optimize existing designs by tailoring specifications, such as anti-reflective coatings, to benefit specific applications. With in-house sensor design, manufacture, assembly, and testing, all our CCDs can be supplied partially or fully customized to fit the application and achieve the best possible performance.

Our CCD wafer fab and processing facility in England was established in 1985 and quickly became the world’s major supplier for space imaging missions and large ground-based astronomical telescopes. We continue to develop a vertically integrated, dedicated CCD fab and are committed to the development of high-performance, customized CCD detectors.

The CCD fabrication facility is critical to the success and quality of future space and science projects. At Teledyne, we remain committed to being the long-term supplier of high-specification and high-quality devices for the world’s major space agencies and scientific instrument producers.

Are there particular missions or projects, either current or upcoming, where CCD technology remains critical? What makes CCDs indispensable in those scenarios?

A prototype for a new intraoperative imaging technique incorporates CCDs, which we hope will have a significant impact on cancer treatments in the future.

In astronomy, one example is the Vera C. Rubin Observatory, which utilizes an enormous 3.2 Gigapixel camera composed of an array of HiRho CCDs, offering NIR sensitivity and close butting, features not currently available in CMOS technology.

In space, ESA’s recently completed Gaia mission relied completely on the functionality (TDI) and performance of our CCDs. The second Aeolus mission, which will continue to measure the Earth’s wind profiles to improve weather forecasting, uses a unique ‘Accumulation CCD’ that allows noiseless summing of many LIDAR signals to achieve measurable signal levels.

How do you address customer questions or misconceptions around CCDs being considered legacy technology in an industry that often pushes toward the latest advancements?

Consider what is best for your application; it may well be a CCD. You can find our range of available CCDs and their performance on our website, or I would be happy to discuss your application directly. If you would like to speak with me in person, I’ll be attending SPIE Astronomical Telescopes + Instrumentation in July 2026.

Looking ahead, what do you see as the long-term future of CCD sensors within the broader imaging ecosystem? Will they continue to coexist with CMOS, or is the industry moving toward complete CMOS dominance?

The sheer variety of imaging requirements, combined with the continued advantages of CCDs, suggests long-term demand. We continue to see instruments baselining CCD products into 2030 and beyond.

How does Teledyne e2v position itself within this evolving landscape, and what message would you give to organizations evaluating sensor technologies for specialized imaging applications?

Teledyne e2v is technology agnostic and will recommend what's best for the application, be it CMOS, MCT, or of course CCD.