Thursday, June 30, 2022

NIT SWIR camera based on HgTe quantum dot detector

The Institute of Nano Sciences at CNRS-Sorbonne University is currently researching and producing HgTe quantum dot materials sensitive in the extended SWIR wavelength range. Through a partnership with NIT, a first sensor and camera has been produced.

This technology is promising for designing low-cost, small-pixel-pitch focal plane arrays, as well as for extending the spectral range of SWIR cameras up to 2.5 µm.

This collaborative program is funded by the French National Research Agency.

This video presents the quantum dot deposition technology, with response up to 2 µm, on NIT ROICs, along with sample images captured in various conditions.

A related paper titled "Photoconductive focal plane array based on HgTe quantum dots for fast and cost-effective short-wave infrared imaging" is in the June 2022 issue of Nanoscale journal.

Abstract: HgTe nanocrystals, thanks to quantum confinement, present a broadly tunable band gap over the entire infrared spectral range. In addition, significant efforts have been dedicated to the design of infrared sensors with an absorbing layer made of nanocrystals. However, most efforts have focused on single-pixel sensors. Nanocrystals offer an appealing alternative to epitaxially grown semiconductors for infrared imaging by reducing the material growth cost and easing the coupling to the readout circuit. Here we propose a strategy to design an infrared focal plane array from a single fabrication step. The focal plane array (FPA) relies on a specifically designed readout circuit enabling in-plane electric field application and operation in photoconductive mode. We demonstrate a VGA-format focal plane array with a 15 μm pixel pitch presenting an external quantum efficiency of 4-5% (15% internal quantum efficiency) for a cut-off around 1.8 μm and operation using Peltier cooling only. The FPA is compatible with full-frame imaging at 200 fps, and imaging up to 340 fps is demonstrated by driving a reduced area of the FPA. In the last part of the paper, we discuss the cost of such sensors and show that it is driven only by labor costs, while we estimate the cost of the NC film to be in the 10-20 € range.
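As a quick sanity check on figures like these, external quantum efficiency converts to responsivity via R = EQE * q * lambda / (h * c). A minimal sketch (the ~4.5% EQE and 1.8 µm cut-off come from the abstract; the code itself is only illustrative):

```python
import math

def responsivity(eqe, wavelength_m):
    """Photodetector responsivity (A/W): R = EQE * q * lambda / (h * c)."""
    q = 1.602176634e-19   # elementary charge, C
    h = 6.62607015e-34    # Planck constant, J*s
    c = 2.99792458e8      # speed of light, m/s
    return eqe * q * wavelength_m / (h * c)

# ~4.5% EQE near the 1.8 um cut-off reported in the abstract
r = responsivity(0.045, 1.8e-6)
print(f"{r:.3f} A/W")  # prints 0.065 A/W
```

A handy rule of thumb falls out of the same formula: at unit quantum efficiency, 1.24 µm corresponds to 1 A/W.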

Monday, June 27, 2022

Reports argue that the mobile phone camera market is slowing

According to a market report from China, the "global mobile phone image sensor market is declining, and it is recommended to deploy multi-level product lines, combined with production capacity advantages, to impact market share." [English translation]

In the first quarter of 2022, global mobile phone image sensor shipments were approximately 1.13 billion units, a year-on-year decrease of approximately 27.0%.
Market demand in Europe and mainland China has suffered multiple blows, while the Latin American market may stand out.
The upgrade of pixel specifications shows polarization: "main camera up, sub camera down."
Leading manufacturers will use production capacity advantages to control supply or turn adversity around, while local manufacturers will seek opportunities through differentiation.
Suggestion: improve product functional value, upgrade the product structure, and be cautious about large-scale expansion.

A recent report by TrendForce forecasts that the relative share of the quad-camera module market will not grow much between 2021 and 2022. One explanation might be that there are diminishing returns beyond a certain number of cameras, and smartphone companies are focusing on algorithmic enhancements to image/video quality that may give similar results.

TrendForce indicates that mobile phone brands are currently curtailing competition on the hardware specifications of camera modules but remain focused on photographic and video performance as promotional features, emphasizing dynamic photography, night photography and other scenarios to highlight product advantages. This can be achieved not only by strengthening the optical performance of the camera module itself but also through algorithms and software, which increases the brands' enthusiasm for investing in self-developed chips.

Friday, June 24, 2022

Videos du jour - June 24, 2022

CASS Talks 2022 - Jose Lipovetzky, CNEA, Argentina - April 8, 2022. Viewing ionizing radiation with CMOS image sensors.

Distributed On-Sensor Compute System for AR/VR Devices: A Semi-Analytical Simulation Framework for Power Estimation (Jorge GOMEZ, Research Scientist, Reality Labs, Meta)

tinyML Applications and Systems Session: Millimeter-Scale Ultra-Low-Power Imaging System for Intelligent Edge Monitoring (Andrea BEJARANO-CARBO, PhD Student, University of Michigan, Ann Arbor MI)

This video briefly introduces PixArt's Global Shutter product line. It provides insight into the key competitive strengths of PixArt's Global Shutter products by comparing their ultra-low power consumption and advanced built-in features with similar products on the market.

Thursday, June 23, 2022

Chronoptics compares depth sensing methods

In a blog post titled "Comparing Depth Cameras: iToF Versus Active Stereo" Refael Whyte of Chronoptics compares depth reconstructions from their indirect time-of-flight (iToF) "KEA" camera with active stereo using an Intel RealSense D435 sensor.


Setup used for comparisons

Bin picking

Pallet picking

Depth data can also be overlaid on RGB to get colored point cloud visualizations. KEA provides much cleaner-looking results:



They show some limitations too. In this scene the floor has very low reflectivity in IR so the KEA camera struggles to collect enough photons there:


[PS: I wish all companies showed "failure cases" as part of their promotional materials!]

Full article here:

Wednesday, June 22, 2022

BrainChip + Prophesee partnership

Laguna Hills, Calif. – June 14, 2022 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of neuromorphic AI IP, and Prophesee, the inventor of the world’s most advanced neuromorphic vision systems, today announced a technology partnership that delivers next-generation platforms for OEMs looking to integrate event-based vision systems with high levels of AI performance coupled with ultra-low power technologies.

Inspired by human vision, Prophesee’s technology uses a patented sensor design and AI algorithms that mimic the eye and brain to reveal what was invisible until now using standard frame-based technology. Prophesee’s computer vision systems open new potential in areas such as autonomous vehicles, industrial automation, IoT, security and surveillance, and AR/VR.

BrainChip’s first-to-market neuromorphic processor, Akida, mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Keeping AI/ML local to the chip, independent of the cloud, also dramatically reduces latency.

“We’ve successfully ported the data from Prophesee’s neuromorphic-based camera sensor to process inference on Akida with impressive performance,” said Anil Mankar, Co-Founder and CDO of BrainChip. “This combination of intelligent vision sensors with Akida’s ability to process data with unparalleled efficiency, precision and economy of energy at the point of acquisition truly advances state-of-the-art AI enablement and offers manufacturers a ready-to-implement solution.”

“By combining our Metavision solution with Akida-based IP, we are better able to deliver a complete high-performance and ultra-low power solution to OEMs looking to leverage edge-based visual technologies as part of their product offerings,” said Luca Verre, CEO and co-founder of Prophesee.



BrainChip is the worldwide leader in edge AI on-chip processing and learning. The company’s first-to-market neuromorphic processor, Akida™, mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Keeping machine learning local to the chip, independent of the cloud, also dramatically reduces latency while improving privacy and data security. In enabling effective edge compute to be universally deployable across real-world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers’ products, as well as the planet.

Explore the benefits of Essential AI at 

For additional information about the BrainChip/Prophesee partnership, contact


Prophesee is the inventor of the world’s most advanced neuromorphic vision systems.

The company developed a breakthrough Event-based Vision approach to machine vision. This new vision category allows for significant reductions of power, latency and data processing requirements to reveal what was invisible to traditional frame-based sensors until now. Prophesee’s patented Metavision® sensors and algorithms mimic how the human eye and brain work to dramatically improve efficiency in areas such as autonomous vehicles, industrial automation, IoT, security and surveillance, and AR/VR.

Prophesee is based in Paris, with local offices in Grenoble, Shanghai, Tokyo and Silicon Valley. The company is driven by a team of more than 100 visionary engineers, holds more than 50 international patents and is backed by leading international equity and corporate investors including 360 Capital Partners, European Investment Bank, iBionext, Intel Capital, Robert Bosch Ventures, Sinovation, Supernova Invest, Will Semiconductor, and Xiaomi.
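As a rough mental model of the event-based principle described in the release above (a toy sketch, not Prophesee's actual sensor design or pipeline), each pixel can be thought of as emitting an event only when its log intensity changes by more than a threshold, so static scenes produce no data at all:

```python
import math

def events_from_frames(frames, threshold=0.2):
    """Toy event-camera model: emit (t, pixel, polarity) whenever a pixel's
    log intensity changes by more than `threshold` since its last event."""
    ref = [math.log(v) for v in frames[0]]  # per-pixel reference log intensity
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        for i, v in enumerate(frame):
            delta = math.log(v) - ref[i]
            if abs(delta) >= threshold:
                events.append((t, i, +1 if delta > 0 else -1))
                ref[i] = math.log(v)  # reset the reference after each event
    return events

# Three frames of a 4-pixel line sensor; only pixel 2 changes:
frames = [[100, 100, 100, 100],
          [100, 100, 150, 100],
          [100, 100,  90, 100]]
print(events_from_frames(frames))  # prints [(1, 2, 1), (2, 2, -1)]
```

Only two events are produced for the whole sequence, which is the source of the power, latency and bandwidth savings the release describes.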

Tuesday, June 21, 2022

New image sensor in ARRI's latest Alexa 35 cine camera

From NewsShooter:

ARRI has officially unveiled the ALEXA 35, a new Super 35 digital cinema camera with 17 stops of dynamic range and a host of new features that are all aimed to provide the best possible image quality.

The ALEXA 35 has big shoes to fill as it is the first ARRI camera to feature a sensor that isn’t based on the ALEV-III. The ALEV-III has been used in various forms in every single ALEXA camera since 2010. The ALEXA 35 represents the next big step for ARRI in the evolution of the ALEXA family.


With its new ~6 µm pixel pitch image sensor, ARRI claims 17 stops of dynamic range, but details about the sensor manufacturer were not available on its website. Earlier ALEXA models used the ALEV-III, a CMOS sensor from onsemi.

A couple of other videos about the new Alexa 35:


Monday, June 20, 2022

Recent Image Sensor Videos

Sony presents "Advantages of Large Format Global Shutter and Rolling Shutter Image Sensor"

onsemi presents their CMOS image sensor layer structure consisting of a microlens array, color filter array, photodiode, pixel transistors, bond layer and ASIC:

Newsight presents its enhanced time-of-flight technology for depth sensing:

And finally a cute cat video to wrap it up: Samsung's new 200 megapixel ISOCELL image sensor promotional video:

Friday, June 17, 2022

PhD Thesis on Dynamic Range Improvements

A PhD thesis titled "Proposal of Architecture and Circuits for Dynamic Range Enhancement of Vision Systems on Chip designed in Deep Submicron Technologies" from the Universidad de Sevilla is now available to the public. The thesis is by Sonia Vargas Sierra, who did this work at the Image Sensor group of the Microelectronics Institute of Seville.

Although the thesis is from a few years ago, some of its content may be of interest now due to recent developments in vertically integrated (stacked) technologies.

From the Preface:

The work presented in this thesis proposes new techniques for dynamic range expansion in electronic image sensors. Since Dynamic Range (DR) is defined as the ratio between the maximum and the minimum measurable illuminations, the options for improvement seem obvious; first, to reduce the minimum measurable signal by diminishing the noise floor of the sensor, and second, to increase the maximum measurable light by increasing the sensor saturation limit.

In our case, we focus our studies on the possibility of providing DR enhancement functionality in a single chip, without requiring any external software/hardware support, composing what is called a Vision-System-on-Chip (VSoC). In order to do so, this thesis covers two approaches. Chronologically, our first option to improve the DR relied on reducing the noise by using a fabrication technology that is specially devoted to image sensor fabrication, a so-called CMOS Image Sensor (CIS) technology. However, measurements from a test chip indicated that the dynamic range improvement was not sufficient for our purposes (beyond the 100 dB limit). Additionally, the technology had some important limitations on what kind of circuitry could be placed next to the photosensor in order to improve its performance. Our second approach consisted of, first, designing a tone mapping algorithm for DR expansion whose computational needs can be easily mapped onto simple signal conditioning and processing circuitry around the photosensor, and second, designing a test chip implementing this algorithm in a standard CMOS technology.

This thesis is organized in five chapters. Chapter 1 describes the main concepts involved in image sensors, focusing on High Dynamic Range (HDR) operation. Chapter 2 presents the study of an image-sensor-optimized technology considered for dynamic range improvement techniques. Chapter 3 describes an innovative tone mapping algorithm used to optimize the compression of HDR scenes. Chapter 4 introduces the image sensor chip that has been designed and fabricated, which implements the new tone mapping algorithm. Chapter 5 shows the experimental results and evaluation of the performance of the chip.
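The two quantities at the heart of the preface can be illustrated in a few lines: the DR ratio converted to dB, and a generic logarithmic tone-mapping curve (a stand-in for illustration only, not the thesis's actual algorithm, which the excerpt does not reproduce):

```python
import math

def dynamic_range_db(max_signal, min_signal):
    """DR as defined in the preface: ratio of the maximum to the minimum
    measurable signal, expressed in dB."""
    return 20.0 * math.log10(max_signal / min_signal)

def log_tone_map(pixel, max_in, max_out=255):
    """Generic logarithmic tone mapping: compress an HDR value into a
    displayable 8-bit range."""
    return round(max_out * math.log1p(pixel) / math.log1p(max_in))

# A 100,000:1 scene hits the ~100 dB mark discussed in the preface...
print(dynamic_range_db(100_000, 1.0))  # prints 100.0
# ...and a log curve still squeezes it into 8 bits for display:
print([log_tone_map(v, 100_000) for v in (0, 10, 1_000, 100_000)])
```

Note the 20 log10 convention (signal amplitude) used in image sensor datasheets: halving the noise floor or doubling the saturation level each buys about 6 dB.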

Link to download thesis pdf:

A couple of references related to the topic of this thesis: 
  1. S. Vargas-Sierra et al., "A 151 dB high dynamic range CMOS image sensor chip architecture with tone mapping compression embedded in-pixel," IEEE Sensors Journal, Jan. 2015.
  2. Mori et al., "A 4.0 μm Stacked Digital Pixel Sensor Operating in a Dual Quantization Mode for High Dynamic Range," IEEE Trans. Electron Devices, June 2022.

Thursday, June 16, 2022

Evolution of Image Sensor Architectures With Stacked Device Technologies (IEEE TED June 2022)

In a paper titled "Evolution of Image Sensor Architectures With Stacked Device Technologies" in IEEE TED (June 2022) Y. Oike writes:

The evolution of CMOS image sensors and their prospects using advanced imaging technologies are promising candidates to improve the quality of life. With the rapid advent of parallel analog-to-digital converters (ADCs) and back-illuminated (BI) technology, CMOS image sensors currently dominate the market for digital cameras, and stacked CMOS image sensors continue to provide enhanced functionality and user experience in mobile devices. This article reviews the latest achievements in stacked image sensors with respect to the evolution of image sensor architecture for accelerating performance improvements, extending sensing capabilities, and integrating edge computing with various stacked device technologies.

[IEEE subscription required]

Wednesday, June 15, 2022

AlpsenTek vision sensor startup raises nearly $30 million

Chinese vision sensor startup AlpsenTek raised nearly $30 million in Series A funding

AlpsenTek (锐思智芯), a Chinese machine vision sensor startup, announced on June 6 that it raised nearly RMB 200 million ($30 million) in Series A funding earlier this year.

The investment was jointly led by Xunxing Investment, an investment company of Chinese smartphone brand OPPO, and Cowin Capital.

AlpsenTek’s original investors ArcSoft Corp, Sunny Optical Industry Fund, Clory Ventures, Shenzhen Angel FOF, Lenovo Capital and Incubator Group, and Zero2IPO Group also participated in this round of funding.  

Founded in 2019, AlpsenTek is a company engaged in the research and development of machine vision sensors and algorithms. The company is headquartered in Beijing and has offices in Shenzhen, Nanjing, and Switzerland.

AlpsenTek employs an international team of professionals from elite research firms worldwide with extensive expertise in developing algorithms, software, hardware, and chips, according to the company.

The core products of AlpsenTek are the ALPIX series hybrid biomimetic vision chips and integrated machine vision solutions. The company said that it holds a complete set of core intellectual property rights and in-house development capabilities to fill technology gaps in machine vision. The company has also begun to cooperate with leading players in the industry. Its products can be widely used in robots, smartphones, unmanned driving, drones, security, and other fields, with a potential market size of over RMB 1 trillion ($150 billion).

Original article:

Tuesday, June 14, 2022

Smartphone imaging trends webinar and whitepaper

From Counterpoint Research (

Over the last few years, steady upgrades in CMOS image sensor (CIS) technology combined with the evolution of chipsets – and the improvements in AI they enable – are bringing step-change improvements to smartphone camera performance.

Counterpoint Research would like to invite you to join our latest webinar, "Smartphone Imaging Trends: New Directions Capturing Magic Moments," which will be attended by key executives from HONOR, Qualcomm and DXOMARK, as well as renowned professional photographer and director Eugenio Recuenco.

The webinar is a complement to an upcoming Counterpoint whitepaper (also to be released on June 8) which will cover smartphone imaging trends, OEM strategy comparisons, and the key components of a great camera, and show how technology is helping to unlock creative expression.

The accompanying whitepaper can be obtained here:

The camera has always been a major component of the smartphone and a key selling point among consumers. In the past, smartphone cameras lagged far behind even the most basic DSLRs as form factor and size constraints impacted picture and video quality. But technology has now advanced to the point where today’s top flagship devices are capable of delivering DSLR-like performance.

The rise of AI algorithms, advancements in multi-frame/multi-lens computational photography, more powerful processors, the addition of dedicated image signal and neural processing units and, of course, the compounding of R&D experience has resulted in today’s smartphone cameras rivalling dedicated imaging devices.

In fact, the smartphone’s comparatively compact form factor is an advantage, as clicking pictures and recording videos are becoming integrated into our daily lives through the growth of social media. The role of the camera has shifted to become a life tool, as end-users migrate from being simply consumers of content to creators.

This new direction that imaging has taken warrants further advancements in smartphone cameras, as we lean on technology to make the experience easier while allowing all of us to be more creative.

Table of Contents:

Smartphone Imaging Trends
Megapixels: More is not necessarily better
Multi-camera modules: Covering all scenarios
Image processing: Pushing the laws of physics
OEM Imaging Comparisons
As hardware slows, innovation grows
Where the magic happens
New magic, new directions
Measuring Quality
Components of an exceptional smartphone camera
Image processing innovation
Capturing Magic Moments
Powering art through technology

Monday, June 13, 2022

Lucid Vision Labs discusses EMVA 1288 specs for Sony IMX492

Lucid Vision Labs has released a new video overview of Sony's rolling shutter 47MP IMX492 sensor.

The video discusses quantum efficiency (time stamp 2:02), saturation capacity (3:04), temporal dark noise (3:15), and dynamic range (3:27). It also compares the IMX492 to some other high-resolution sensors (31.4MP IMX342, 24.5MP IMX530, 20MP IMX183).
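For readers unfamiliar with EMVA 1288, the headline figures above are related: dynamic range follows from saturation capacity and the noise floor, and the peak SNR is shot-noise limited at saturation. A minimal sketch with illustrative numbers (not the IMX492's published values, and approximating the absolute sensitivity threshold by the temporal dark noise):

```python
import math

def emva_dynamic_range_db(sat_capacity_e, dark_noise_e):
    """Approximate EMVA 1288 dynamic range: saturation capacity over the
    dark-noise floor, both in electrons, expressed in dB."""
    return 20.0 * math.log10(sat_capacity_e / dark_noise_e)

def emva_max_snr_db(sat_capacity_e):
    """Maximum SNR is shot-noise limited at saturation: sqrt(N_sat)."""
    return 20.0 * math.log10(math.sqrt(sat_capacity_e))

# Illustrative values: 10,000 e- saturation capacity, 2.5 e- dark noise
print(f"DR  = {emva_dynamic_range_db(10_000, 2.5):.1f} dB")  # 72.0 dB
print(f"SNR = {emva_max_snr_db(10_000):.1f} dB")             # 40.0 dB
```

This makes it easy to see why a sensor's dynamic range can greatly exceed its maximum SNR: DR depends on the read-noise floor, while peak SNR depends only on full-well capacity.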

Full EMVA 1288 data is also available on Lucid's product page under the "EMVA 1288 Data" Tab here:

Friday, June 10, 2022

Vayyar Raises Series-E

From TechCrunch:

Vayyar, a company developing radar-imaging sensor technologies, today announced that it raised $108 million in a Series E round led by Koch Disruptive Technologies, with participation from GLy Capital Management, Atreides Management LP, KDT, Battery Ventures, Bessemer Ventures, More VC, Regal Four and Claltech. The round brings Vayyar’s total raised to over $300 million, which CEO Raviv Melamed said is being put toward expanding across verticals and introducing a “family” of machine learning-powered sensor solutions for robotics, retail, public safety and “smart” building products.

“We are pleased and proud to progress our partnership with existing investors including KDT, as well as additional backers which are joining forces with us for the first time,” Melamed said in a statement. “During a challenging period for the global economy, this new funding round is a ringing endorsement of our mission and a clear vote of confidence in the strength of our technology and the strategic agility of our organization.”

Founded in 2011 by Miri Ratner, Naftali Chayat and Melamed, who was previously VP of Intel’s architecture group, Vayyar initially developed its sensor technology to provide an alternative means of screening for early-stage breast cancer. Leveraging MIMO antennas, short for “multiple input, multiple output,” Vayyar’s products can deliver a high-resolution mapping of their surroundings by sending and receiving signals from dozens of antennas.

Vayyar later expanded its “radar-on-chip” technology from healthtech to a number of other sectors, including automotive, senior care, retail, smart home and commercial property. Vayyar sells Vayyar Care, a fall detection system for monitoring people at higher risk of tripping and falling in bedrooms, bathrooms and other living spaces. In the automotive industry, Vayyar offers solutions for collision warnings, parking assistance, adaptive cruise control, seatbelt detection and automatic braking. And in construction, Vayyar provides a handheld sensor called Walabot for detecting leaky pipes behind walls.

Vayyar competes with Entropix, Photonic Vision, Noitom Technology, Aquifi and ADI, among others, which offer their own flavors of MIMO-based sensors. But the company has long asserted that its software and algorithms set it apart from the competition. Evidently, they were impressive enough to convince Amazon to partner with Vayyar for fall detection on Alexa Together, a subscription service that remotely monitors and assists family members in their homes.

In recent years, Vayyar has entered into customer relationships with brands like Piaggio Group, which will deploy Vayyar’s sensors on some of its forthcoming motorbikes. The company also claims to have supply contracts with automakers from Japan and Vietnam as well as a joint venture agreement with Haier subsidiary HCH Ventures to leverage the latter’s “senior care technology” in China-based businesses.

Signaling ambitions in the Asia-Pacific market in particular, Vayyar noted in a press release that it engaged China International Capital Corporation Limited, a Beijing-based investment company, as its lead financial adviser for the Series E explicitly to “support investor outreach in China.” (One of Vayyar’s newer offices is in China.) Somewhat unusually, Vayyar’s Series E came in just under its Series D, which totaled $109 million. It’s unclear whether the valuation has changed — TechCrunch last reported that Vayyar was valued “north” of $600 million.

Thursday, June 09, 2022

In the News: Yole Webcast, Prophesee Software Suite

The CIS market is back to strong growth: Are we at the beginning of the much-awaited sensing era?

Yole will host a webcast on Thursday, June 16, 2022.

The image sensor market struggled through a mixed 2021 but appears to have emerged strongly positioned. Could 2022 be the beginning of strong growth on the back of sensing applications?
In this webcast we will explore what the next few quarters and years hold for the CMOS image sensor (CIS) industry, including market demand, industry revenue and capacity.

This webcast will take a quick look back at 2021 to review the recent history of CIS before focusing on the near and mid-term prospects for the CIS industry. It will also cover market dynamics, supply, and pricing, with particular focus on answering the question, “Are we at the beginning of strong growth on the back of sensing applications?”

Prophesee releases its entire event-based vision software suite for free, including a commercial license, further enabling a community of thousands of engineers and researchers worldwide

The new release of the five-time award-winning suite includes a complete set of machine learning tools, new key open-source modules, ready-to-use applications, and code samples, and allows for completely free evaluation, development, and release of products with the included commercial license.

With this advanced toolkit, engineers can easily develop computer vision applications on a PC for a wide range of markets, including industrial automation, IoT, surveillance, mobile, medical, automotive and more.

“We have seen a significant increase in interest and use of Event-Based Vision and we now have an active and fast-growing community of more than 4,500 inventors using Metavision Intelligence since its launch. As we are opening the event-based vision market across many segments, we decided to boost the adoption of MIS throughout the ecosystem targeting 40,000 users in the next two years. By offering these development aids, we can accelerate the evolution of event-based vision to a broader range of applications and use cases and allow for each player in the chain to add its own value,” said Luca Verre, co-founder and CEO of Prophesee.

Wednesday, June 08, 2022

Camera Arrays for Large Scale Surveillance

In a paper titled "A modular hierarchical array camera" in the journal Light: Science & Applications, X. Yuan et al. write:

Abstract: Array cameras removed the optical limitations of a single camera and paved the way for high-performance imaging via the combination of micro-cameras and computation to fuse multiple aperture images. However, existing solutions use dense arrays of cameras that require laborious calibration and lack flexibility and practicality. Inspired by the cognition function principle of the human brain, we develop an unstructured array camera system that adopts a hierarchical modular design with multiscale hybrid cameras composing different modules. Intelligent computations are designed to collaboratively operate along both intra- and intermodule pathways. This system can adaptively allocate imagery resources to dramatically reduce the hardware cost and possesses unprecedented flexibility, robustness, and versatility. Large scenes of real-world data were acquired to perform human-centric studies for the assessment of human behaviours at the individual level and crowd behaviours at the population level requiring high-resolution long-term monitoring of dynamic wide-area scenes.

Given the potential applications shown (large scale surveillance), it is quite intriguing that the "Ethics Declaration" section of this paper is empty.

Open access link:

See also:

Tuesday, June 07, 2022