Friday, May 07, 2021

Chronoptics on iToF Camera Design Challenges

Chronoptics CTO Refael Whyte publishes a nice article, "Indirect Time-of-Flight Depth Camera Systems Design," about the different trade-offs and challenges in ToF camera design. A few quotes:

"The table below compares two image sensors the [Melexis] MLX75027 and [Espros] EPC635, both of which have publicly available datasheets.


The MLX75027 has 32 times more pixels than the EPC635, but that comes at a higher price. The application of the depth data dictates the image sensor resolution required.

The pixel size, demodulation contrast, and quantum efficiency are all metrics relating to how efficiently reflected photons are captured. The bigger the pixel's active area, the bigger the surface over which incoming photons can be collected. The pixel's active area is its fill factor multiplied by its size. Both the MLX75027 and EPC635 are back side illuminated (BSI), meaning a 100% fill factor. The quantum efficiency is the ratio of electrons generated to the number of arriving photons; the higher the quantum efficiency, the more photons are captured. The demodulation contrast is a measure of how many of the captured photons are used in the depth measurement.

Illumination sources should be designed to IEC 60825-1:2014, the specification for eye safety. The other aspect of eye safety design is having no single point of failure that makes the illumination source non-eye-safe. For example, if the diffuser cracks and exposes the laser elements, is it still eye safe? If not, the crack needs to be detected and the laser turned off, or two barriers used in case one fails. Indium tin oxide (ITO) can be used as a coating: it is electrically conductive and optically transparent, and its impedance will change if the surface is damaged. Alternatively, a photodiode in the laser can be used to detect changes in the back reflection indicating damage. The same considerations apply to power supplies shorting and other failure modes."
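To make the efficiency metrics quoted above concrete, here is a minimal sketch (ours, not from the article; all numbers are hypothetical, not datasheet values) of how active area, quantum efficiency, and demodulation contrast multiply into the electron count that actually contributes to a depth measurement:

    def signal_electrons(photon_flux_per_um2_s, pixel_pitch_um, fill_factor,
                         quantum_efficiency, demod_contrast, integration_time_s):
        # Active area = fill factor x pixel size (BSI -> fill factor ~1.0).
        active_area_um2 = fill_factor * pixel_pitch_um ** 2
        photons = photon_flux_per_um2_s * active_area_um2 * integration_time_s
        electrons = photons * quantum_efficiency   # photons converted to electrons
        return electrons * demod_contrast          # fraction usable for depth

    # Hypothetical values, for illustration only:
    print(signal_electrons(photon_flux_per_um2_s=1e4, pixel_pitch_um=10.0,
                           fill_factor=1.0, quantum_efficiency=0.4,
                           demod_contrast=0.8, integration_time_s=1e-3))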
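The eye-safety fault detection described in the quote can be sketched as a periodic interlock check. This is a hedged illustration only, not Chronoptics' implementation; all thresholds, nominal values, and function names are assumptions:

    # Assumed nominal values and tolerances, for illustration only.
    ITO_IMPEDANCE_NOMINAL_OHM = 100.0   # intact-diffuser ITO coating impedance
    ITO_TOLERANCE = 0.10                # +/-10% window before declaring a fault
    BACK_REFLECTION_NOMINAL = 1.0       # normalized monitor-photodiode reading
    BACK_REFLECTION_TOLERANCE = 0.20

    def diffuser_intact(ito_impedance_ohm, back_reflection):
        # A cracked diffuser shifts the ITO impedance and/or the laser's
        # back reflection away from their nominal values.
        impedance_ok = (abs(ito_impedance_ohm - ITO_IMPEDANCE_NOMINAL_OHM)
                        <= ITO_TOLERANCE * ITO_IMPEDANCE_NOMINAL_OHM)
        reflection_ok = (abs(back_reflection - BACK_REFLECTION_NOMINAL)
                         <= BACK_REFLECTION_TOLERANCE * BACK_REFLECTION_NOMINAL)
        return impedance_ok and reflection_ok

    def safety_tick(read_ito_ohm, read_photodiode, laser_off):
        # Called periodically; fails safe by disabling the laser on any fault.
        if not diffuser_intact(read_ito_ohm(), read_photodiode()):
            laser_off()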

Assorted Videos: ams, Synopsys, ON Semi

Ams presents the use cases for its miniature NanEyeC camera module:

Synopsys presents its "Holistic Design Approach to LiDAR:"

ON Semi publishes a webinar about its low-power Event Triggered Imaging Using the RSL10 Smart Shot Camera:

Thursday, May 06, 2021

Gpixel and Tower Announce VGA iToF Sensor

GlobeNewswire: Gpixel and Tower announce Gpixel’s iToF sensor, the GTOF0503, utilizing Tower’s pixel on its 65nm pixel-level stacked BSI CIS technology, fabricated in its Uozu, Japan facility. The GTOF0503 features a 5um 3-tap iToF pixel in a 640 x 480 array and is aimed at vision-guided robotics, bin picking, automated guided vehicles, automotive, and factory automation applications.

“We are very proud to announce the release of our new iToF sensor, entering the 3D imaging market, made possible by our collaboration with Tower’s team. Tower’s vast expertise in the development of iToF image sensor technology provided an outstanding platform for the design of this cutting-edge product,” said Wim Wuyts, Chief Commercial Officer, Gpixel. “This collaboration produced a unique sensor product that is perfectly suited to serve a wide variety of fast-growing applications and sets a roadmap for future successful developments.”

A demodulation contrast of > 80% is achieved with modulation frequencies of up to 165 MHz at either 60 fps in Single Modulation Frequency (SMF) or 30 fps in Dual Modulation Frequency (DMF) depth mode.
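The announcement does not say which second frequency DMF mode uses, but the motivation can be sketched: a single modulation frequency f wraps at an unambiguous range of c/(2f), while a pair of frequencies stays unambiguous out to c/(2·gcd(f1, f2)). A minimal Python illustration with an assumed second frequency:

    from math import gcd

    C = 299_792_458.0  # speed of light, m/s

    def unambiguous_range_m(f_hz):
        # Phase wraps every full modulation period over the round trip.
        return C / (2 * f_hz)

    f1, f2 = 165_000_000, 120_000_000   # f2 is a hypothetical choice
    print(f"SMF at 165 MHz wraps at {unambiguous_range_m(f1):.2f} m")
    print(f"DMF (165 + 120 MHz) stays unambiguous to "
          f"{unambiguous_range_m(gcd(f1, f2)):.2f} m")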

“Tower is excited to take an important role in this extraordinary project, collaborating with Gpixel’s talented team of experts in the field of sensor development and bringing to market this new, cutting-edge iToF sensor,” said Avi Strum, SVP and GM of the Sensors & Displays Business Unit, Tower Semiconductor. “Gpixel is a valuable and long-term partner, and we are confident that this partnership will continue to bring to market additional intriguing solutions.”

The GTOF0503 is available as a bare die and in an 11 x 11 mm ceramic package. Samples (bare die) and evaluation kits are available as well.

AIStorm's AI-in-Imager Uses Tower's Hi-K VIA Capacitor Memory

GlobeNewswire, BusinessWire: AIStorm and Tower announce that AIStorm’s new AI-in-Imager products will use AIStorm’s electron multiplication architecture and Tower’s Hi-K VIA capacitor memory, instead of digital calculations, to perform AI computation at the pixel level. This saves the silicon real estate, multi-die packaging costs, and power required by competitive digital systems, and eliminates the need for input digitization. The Hi-K VIA capacitors reside in the metal layers and thus allow the AI to be built directly into the pixel matrix without any compromise on pixel density or size.
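AIStorm's circuits are analog and proprietary, but the benefit of skipping input digitization can be modeled conceptually. In the purely illustrative sketch below (our reading of the idea, not AIStorm's design), a charge-domain multiply-accumulate acts directly on pixel charge, while a conventional baseline must first quantize every pixel through an ADC:

    def analog_in_pixel_mac(pixel_charges, weights):
        # Idealized charge-domain multiply-accumulate: the weighted sum is
        # formed in the analog domain, with no ADC in the loop.
        return sum(q * w for q, w in zip(pixel_charges, weights))

    def digital_baseline_mac(pixel_charges, weights, adc_levels=1024):
        # Conventional path: digitize each pixel (here a 10-bit ADC),
        # then multiply-accumulate on the quantized codes.
        full_scale = max(pixel_charges)
        codes = [round(q / full_scale * (adc_levels - 1)) for q in pixel_charges]
        return sum(c / (adc_levels - 1) * full_scale * w
                   for c, w in zip(codes, weights))

    charges = [0.2, 0.8, 0.5, 0.1]      # hypothetical normalized pixel charges
    weights = [0.5, -0.25, 1.0, 0.75]   # hypothetical first-layer weights
    print(analog_in_pixel_mac(charges, weights))
    print(digital_baseline_mac(charges, weights))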

“This new imager technology opens up a whole new avenue of “always on” functionality. Instead of periodically taking a picture and interfacing with an external AI processor through complex digitization, transport and memory schemes, AIStorm’s pixel matrix is itself the processor & memory. No other technology can do that,” said Avi Strum, SVP of Sensors and Displays BU at Tower Semiconductor.

AIStorm has built mobile models, under the MantisNet & Cheetah families, that use direct pixel coupling of the AI matrix to offer sub-100uW “always on” operation with best-in-class latencies and post-wakeup processing of up to 200 TOPs/W.


Himax Reports 70% YoY CMOS Sensor Sales Growth

GlobeNewswire: Himax reports that its image sensor sales grew by 70% YoY in Q1 2021. However, it appears that this spectacular growth is not continuing into Q2:

"The CIS revenue is expected to be flattish sequentially in the second quarter. The Company’s shipment has been badly capped by the foundry capacity despite surging customer demands for the CMOS image sensors for web camera and notebook. Nevertheless, a decent growth is expected in second half of 2021 thanks to a major engagement from a major existing customer.

Himax’s industry-first 2-in-1 CMOS image sensor, supporting video conferencing and AI facial recognition at ultralow power, has been designed into some of the most stylish, slim-bezel notebook models of certain major notebook names. Small-volume production started in the fourth quarter of last year. Meaningful ramp-up in volume is expected in the coming quarters.

Regarding the ultralow-power always-on CMOS image sensor that targets always-on AI applications, the Company is getting growing feedback and design adoptions from customers globally for various markets, such as car recorders, surveillance, smart electric meters, drones, smart home appliances, and consumer electronics. More progress will be reported in due course."

Samsung, UCSD, and University of Southern Mississippi Develop SWIR to Visible Image Converter

Phys.org, Newswise, UCSD: The Advanced Functional Materials paper "Organic Upconversion Imager with Dual Electronic and Optical Readouts for Shortwave Infrared Light Detection" by Ning Li, Naresh Eedugurala, Dong-Seok Leem, Jason D. Azoulay, and Tse Nga Ng from Samsung Advanced Institute of Technology, UCSD, and the University of Southern Mississippi presents a flat SWIR-to-visible converting device:

"...an organic upconversion imager that is efficient in both optical and electronic readouts, extending the capability of human and machine vision to 1400 nm, is designed and demonstrated. The imager structure incorporates interfacial layers to suppress non‐radiative recombination and provide enhanced optical upconversion efficiency and electronic detectivity. The photoresponse is comparable to state‐of‐the‐art organic infrared photodiodes exhibiting a high external quantum efficiency of ≤35% at a low bias of ≤3 V and 3 dB bandwidth of 10 kHz. The large active area of 2 cm2 enables demonstrations such as object inspection, imaging through smog, and concurrent recording of blood vessel location and blood flow pulses. These examples showcase the potential of the authors’ dual‐readout imager to directly upconvert infrared light for human visual perception and simultaneously yield electronic signals for automated monitoring applications."


Graphene to Revolutionize Automotive Imaging?

AZOsensors: The AUTOVISION Spearhead Project from the Europe-based Graphene Flagship consortium is currently creating a new graphene-based, high-resolution image sensor. The new sensor can detect a broad light spectrum, from UV to SWIR.

In 2020, member organizations under the AUTOVISION umbrella announced a technique for the growth and transfer of wafer-scale graphene that uses standard semiconductor equipment. Project members collaborated to outline a suite of camera tests designed to make the AUTOVISION sensor competitive with cutting-edge visible cameras, SWIR cameras, and LiDAR systems.

The AUTOVISION project is led by Qurv in Barcelona and counts on the collaboration of industrial partners such as Aixtron in the UK and Veoneer in Sweden; it aims to help make the safe deployment of autonomous vehicles possible. Over the course of three years, the project will produce CMOS graphene quantum-dot image sensors in prototype sensor systems, ready for uptake in the automotive sector. Across the duration of the project, the image sensor is set to take huge leaps in sensitivity, operation speed, and pixel size.

Omnivision Announces Automotive HDR ISP

Omnivision's new OAX4000 is a companion ISP for the company's HDR sensors, providing a complete multi-camera viewing solution with fully processed YUV output. It can process up to four camera modules with 140 dB HDR, delivers industry-leading LED flicker mitigation (LFM) performance at a high 8MP resolution, and supports multiple CFA patterns, including Bayer, RCCB, RGB-IR, and RYYCy. Additionally, the OAX4000 offers more than 30% power savings over the previous generation.
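As a quick back-of-the-envelope check (ours, not from Omnivision's materials), 140 dB of dynamic range implies roughly 23 bits of linear precision, which is why such pipelines merge multiple sensor exposures and compand down to fewer output bits:

    import math

    dr_db = 140.0
    linear_ratio = 10 ** (dr_db / 20)   # dB -> linear intensity ratio
    bits = math.log2(linear_ratio)      # equivalent linear bit depth
    print(f"{dr_db:.0f} dB = {linear_ratio:.1e}:1, ~{bits:.1f} bits linear")
    # -> 1.0e+07:1, about 23.3 bits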


Wednesday, May 05, 2021

More about the First Event-Driven Sensor in Space

The University of Zurich publishes more info about the DAVIS240, the first event-driven sensor in space. A pair of DAVIS240 sensors was included in the custom payload of UNSW Canberra Space’s M2 CubeSat satellite, launched on Rocket Lab’s ‘They Go Up So Fast’ mission from New Zealand on March 23, 2021. The article includes a very nice high-resolution picture of the sensor's layout:

CVPR 2021 Workshop on Event-based Vision Papers On-Line

The Computer Vision and Pattern Recognition (CVPR) 2021 Workshop on Event-based Vision, to be held on June 19, 2021, has already published its papers with online access:

  • v2e: From Video Frames to Realistic DVS Events, and Suppl mat
  • Differentiable Event Stream Simulator for Non-Rigid 3D Tracking, and Suppl mat
  • Comparing Representations in Tracking for Event Camera-based SLAM
  • Image Reconstruction from Neuromorphic Event Cameras using Laplacian-Prediction and Poisson Integration with Spiking and Artificial Neural Networks
  • Detecting Stable Keypoints from Events through Image Gradient Prediction
  • EFI-Net: Video Frame Interpolation from Fusion of Events and Frames, and Suppl. mat
  • DVS-OUTLAB: A Neuromorphic Event-Based Long Time Monitoring Dataset for Real-World Outdoor Scenarios
  • N-ROD: a Neuromorphic Dataset for Synthetic-to-Real Domain Adaptation
  • Lifting Monocular Events to 3D Human Poses
  • A Cortically-inspired Architecture for Event-based Visual Motion Processing: From Design Principles to Real-world Applications
  • Spike timing-based unsupervised learning of orientation, disparity, and motion representations in a spiking neural network, and Suppl mat
  • Feedback control of event cameras
  • How to Calibrate Your Event Camera
  • Live Demonstration: Incremental Motion Estimation for Event-based Cameras by Dispersion Minimisation
Thanks to TD for the link!

Vivo's Imaging R&D Team is 700 People Strong

Baidu digital creator Lao Hu publishes an article about the camera development team and imaging investment of Vivo, one of the largest smartphone brands:

"...in terms of imaging, Li Zhuo, director of vivo imaging products, revealed last year that the research and development investment in vivo imaging exceeded 20 billion yuan two years ago. In addition, vivo has established global imaging research and development centers in San Diego, Japan, Tokyo, Hangzhou, Xi'an and other places, with a team of more than 700 research and development personnel.

According to media reports, vivo's imaging research and development centers have a very clear division of labor: the imaging team in San Diego, USA mainly focuses on the platform ISP level; the imaging R&D team in Japan mainly focuses on the customization and optimization of optics, image sensors, lens modules, etc.; the imaging team in Hangzhou focuses on imaging algorithms; and the imaging team in Xi’an is mainly responsible for debugging and development in the mobile imaging field and for pre-research of some algorithms.

Vivo not only insists on independent innovation, but also cooperates with powerful third parties. For example, last year Vivo reached a global strategic partnership with Zeiss, the century-old German optics master; the two parties jointly established the Vivo Zeiss Joint Imaging Laboratory and organized their respective technology experts to apply their advantages in optics and algorithms to joint research and development. They are committed to solving a series of technical bottlenecks in mobile imaging, leading the continuous innovation of imaging technology, and bringing the world's top mobile phone shooting experience to consumers around the world."