Sunday, May 16, 2021

HRL Interposer Enables Pixel Fan-Out and Curved Sensor Stacking

HRL Laboratories' additive manufacturing team demonstrates 3D-printed interposers featuring previously impossible slanted and curved vias with diameters of less than 10 µm. Vias are small openings in insulating layers of integrated circuits that allow conductive connections.

HRL is now 3D printing vias in polymer and ceramic materials with 2 µm resolution, allowing for complex routing. The vias are then metallized to electrically connect different devices and integrated circuits.

“We have printed arrays of straight and curved vias with an aspect ratio of at least 200:1. There is still room to increase this ratio using the low-viscosity preceramic resin that we developed in house,” said HRL Lead Engineer Kayleigh Porter.
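
For scale, the quoted aspect ratio translates directly into via length; a trivial arithmetic check in Python, using example diameters at and below the sub-10 µm figure mentioned above:

    # Aspect ratio = via length / via diameter, so 200:1 on a sub-10 um via
    # implies vias that are hundreds of microns to millimeters long.
    aspect_ratio = 200
    for diameter_um in [5, 10]:          # example diameters, not HRL figures
        length_mm = aspect_ratio * diameter_um / 1000
        print(f"{diameter_um} um via at 200:1 -> {length_mm:.1f} mm")
    # 5 um -> 1.0 mm, 10 um -> 2.0 mm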

HRL Laboratories’ development effort is currently funded by DARPA’s Microsystems Technology Office under the Focal arrays for Curved Infrared Imagers (FOCII) program.

Saturday, May 15, 2021

SPAD Size Scaling Trade-offs

EPFL and Canon publish an MDPI paper "A Scaling Law for SPAD Pixel Miniaturization" by Kazuhiro Morimoto and Edoardo Charbon.

"The growing demands on compact and high-definition single-photon avalanche diode (SPAD) arrays have motivated researchers to explore pixel miniaturization techniques to achieve sub-10 um pixels. The scaling of the SPAD pixel size has an impact on key performance metrics, and it is, thereby, critical to conduct a systematic analysis of the underlying tradeoffs in miniaturized SPADs. On the basis of the general assumptions and constraints for layout geometry, we performed an analytical formulation of the scaling laws for the key metrics, such as the fill factor (FF), photon detection probability (PDP), dark count rate (DCR), correlated noise, and power consumption. Numerical calculations for various parameter sets indicated that some of the metrics, such as the DCR and power consumption, were improved by pixel miniaturization, whereas other metrics, such as the FF and PDP, were degraded. Comparison of the theoretically estimated scaling trends with previously published experimental results suggests that the scaling law analysis is in good agreement with practical SPAD devices. Our scaling law analysis could provide a useful tool to conduct a detailed performance comparison between various process, device, and layout configurations, which is essential for pushing the limit of SPAD pixel miniaturization toward sub-2 um-pitch SPADs."

Friday, May 14, 2021

Sony Thailand Image Sensor Production Center to Run on 100% Renewable Energy

Sony Device Technology Center in Thailand is taking on a challenge of becoming the first Sony Group manufacturing site in Southeast Asia to operate its facilities using 100% renewable energy. The Center is mainly responsible for the assembly of image sensors.

Sony Device Technology has a roof area of about 50,000 square meters and plans to cover this surface almost entirely with solar panels by October of this year. The solar panels will be installed on an area approximately 2.54 times the ground area of the Tokyo Dome, which would make it the largest area covered by solar panels at any facility managed by the Sony Group. Some of the panels have already started generating electricity, and by the end of the fiscal year all of the large panels will be supplying the plant. Although these panels alone will not cover all of the electricity required by the facility, by combining the solar panels with the purchase of Renewable Energy Certificates, Sony Device Technology expects to become the first Sony Group manufacturing site in Southeast Asia to achieve 100% renewable energy operation by the end of this fiscal year.

APD with Photon Trapping

University of California – Davis and W&WSens Devices publish a paper "Avalanche Photodetectors with Photon Trapping Structures for Biomedical Imaging Applications" by Cesar Bartolo-Perez, Soroush Ghandiparsi, Ahmed S. Mayet, Hilal Cansizoglu, Yang Gao, Wayesh Qarony, Ahasan Ahamed, Shih-Yuan Wang, Simon R. Cherry, M. Saif Islam, and Gerard Arino-Estrada.

"In this work, we evaluate the gain, detection efficiency, and timing performance of avalanche photodiodes (APD) with photon trapping nanostructures for photons with 450 and 850 nm wavelengths. At 850 nm wavelength, our photon trapping avalanche photodiodes showed 30 times higher gain, an increase from 16% to >60% enhanced absorption efficiency, and a 50% reduction in the full width at half maximum (FWHM) pulse response time close to the breakdown voltage. At 450 nm wavelength, the external quantum efficiency increased from 54% to 82%, while the gain was enhanced more than 20-fold. Therefore, silicon APDs with photon trapping structures exhibited a dramatic increase in absorption compared to control devices. Results suggest very thin devices with fast timing properties and high absorption between the near-ultraviolet and the near infrared region can be manufactured for high-speed applications in biomedical imaging. This study paves the way towards obtaining single photon detectors with photon trapping structures with gains above 10^6 for the entire visible range."

Thursday, May 13, 2021

Microsoft Expands its iToF Alliances with Orbbec, LG Innotek, Tempo, SpeedCargo, AgentFactory

Microsoft's Azure Depth Platform keeps expanding its iToF ecosystem and announces more customers and adopters:
Orbbec: Orbbec is partnering with Microsoft to develop a new series of high-performance 3D cameras that can run advanced depth vision algorithms using onboard computing to convert raw data into precise depth images. The camera will connect to Azure cloud, and leverage device management, data streaming and AI analytics capabilities. Designed for advanced human/machine interface, robotics, 3D scanning and surveillance use as well as gaming and other consumer applications, the first camera is expected to debut in early 2022. (PRNewswire)
LG Innotek: Daniel Bar, head of business incubation for the Silicon & Sensors group at Microsoft, said, "LG Innotek brings world class manufacturing expertise in complex optoelectronic systems. We are excited to welcome LG Innotek to our ecosystem and accelerate time to market for 3D cameras. This is a key step towards providing easy access for computer vision developers to create 3D vision applications."
Tempo: the first home fitness system that uses 3D sensors and AI to analyze your motion and provide real-time rep counting.
SpeedCargo: with Microsoft’s Azure Depth Platform program, SpeedCargo aims to disrupt the air cargo and logistics industry with advanced 3D vision technologies.
AgentFactory: leverages 3D sensing technologies by Microsoft in its AI solutions to advance automation, digitalization and optimization of logistics operations.


Wednesday, May 12, 2021

ResearchInChina Reports about LiDAR Adoption in Chinese Cars

ResearchInChina publishes "Global and China Automotive LiDAR Industry Report, 2021." A few quotes:

"At the 2021 Shanghai Auto Show, Huawei shocked all automakers and suppliers. Huawei has directly and indirectly supported more than 10,000 engineers in the R&D of intelligent automobiles. Except production, Huawei covers almost all of aspects required for digital transformation of automobiles: automotive perception and decision-making, network communications, electric drive, batteries, electric control, cloud-road networks outside vehicles, R&D and marketing.

Compared with previous R&D investments involving around 1,000 people (such as Baidu's), Huawei's 10,000-person R&D team has greatly accelerated the pace of upgrading intelligent networking in China's auto industry. While other countries around the world are still worrying about the pandemic, China will enter the era of leading the development of global automotive intelligent networking from 2021.

Take LiDAR installation as an example: the new models with LiDAR mainly come from domestic automakers."


"As L4 technical solutions are gradually applied to L2-L3 models, LiDAR has been installed widely. LiDAR is currently available on production cars, and is mainly used to enhance ADAS functions and make new cars more appealing.

So far, domestic OEMs prefer to adopt hybrid solid-state LiDAR (including rotating mirror, prism, MEMS) solutions, mainly because:

First, it is easier to reduce the costs of hybrid solid-state LiDAR than mechanical LiDAR. Compared with pure solid-state (OPA, Flash) LiDAR, hybrid solid-state LiDAR technology is relatively mature and easier to commercialize.

Second, the Rotating Mirror Solution (represented by Valeo) is the first technical solution that meets National Automotive Standards and the performance requirements of automakers, and can be supplied in batches with controllable costs."

Note: The first two pictures are taken from SystemPlus.fr reports without permission:

Depth from Chromatic Aberrations

OSA Optics Express publishes KAIST paper "Compact and fast depth sensor based on a liquid lens using chromatic aberration to improve accuracy" by Gyu Suk Jung and Yong Hyub Won.

"Depth from defocus (DFD) obtains depth information using two defocused images, making it possible to obtain a depth map with high resolution equal to that of the RGB image. However, it is difficult to change the focus mechanically in real-time applications, and the depth range is narrow because it is inversely proportional to the depth accuracy. This paper presents a compact DFD system based on a liquid lens that uses chromatic aberration for real-time application and depth accuracy improvement. The electrical focus changing of a liquid lens greatly shortens the image-capturing time, making it suitable for real-time applications as well as helping with compact lens design. Depth accuracy can be improved by dividing the depth range into three channels using chromatic aberration. This work demonstrated the improvement of depth accuracy through theory and simulation and verified it through DFD system design and depth measurement experiments of real 3D objects. Our depth measurement system showed a root mean square error (RMSE) of 0.7 mm to 4.98 mm compared to 2.275 mm to 12.3 mm in the conventional method, for the depth measurement range of 30 cm to 70 cm. Only three lenses are required in the total optical system. The response time of changing focus by the liquid lens is 10 ms, so two defocused images for DFD can be acquired within a single frame period of real-time operations."

IC Insights: CMOS Sensor Sales to Grow 2x from 2020 to 2025

IC Insights: The high-flying market for CMOS image sensors hit a speed bump in 2020 with the global outbreak of the Covid-19 virus crisis significantly cutting sales growth in this large optoelectronics product category to 3% last year compared to an annual average of nearly 16% since 2010.

When the Covid-19 pandemic accelerated in the first half of 2020, businesses, schools, travel, and most public activities were shut down worldwide, causing a nosedive in CMOS image sensor applications—including smartphones, automobiles, and a wide range of embedded cameras, which are increasingly used in commercial and industrial systems. 2020 ended up being Sony’s worst year for image sensor growth since it began emphasizing CMOS technology over CCDs in 2006.

With the global economy expected to regain momentum in 2021 and more digital cameras being designed into systems—including new 5G smartphones and machine-vision applications—sales of CMOS image sensors are projected to increase by a compound annual growth rate (CAGR) of 12.0%, reaching $33.6 billion in 2025. Total shipments of CMOS image sensors are forecast to grow by a CAGR of 14.9% to 13.5 billion units in 2025 compared to 6.7 billion in 2020.

Automotive systems are said to be the fastest-growing application for CMOS image sensors in the next five years, with sales rising by a CAGR of 33.8% to reach $5.1 billion in 2025. After that, the highest sales growth rates in the five-year forecast are: medical and scientific systems (a CAGR of 26.4% to $1.8 billion); security (a CAGR of 23.2% to $3.2 billion); and industrial, including robots and the Internet of Things (a CAGR of 21.8% to $3.5 billion). CMOS image sensor sales for cellphones—the largest end-use application—are forecast to grow by a CAGR of 6.3% to $15.7 billion in 2025, or about 47% of the market total versus 61% in 2020 ($11.6 billion).
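
As a quick sanity check on the headline, the forecast figures quoted above are self-consistent; a trivial Python sketch using only numbers from the text:

    # CAGR sanity check for the IC Insights forecast figures quoted above.
    units_2020 = 6.7e9                      # units shipped in 2020
    units_2025 = units_2020 * 1.149 ** 5    # 14.9% unit CAGR
    print(f"Implied 2025 shipments: {units_2025 / 1e9:.1f} B units")   # ~13.4 B, i.e. ~2x 2020

    sales_2025 = 33.6e9                     # forecast 2025 revenue, USD
    sales_2020 = sales_2025 / 1.12 ** 5     # 12.0% revenue CAGR
    print(f"Implied 2020 revenue: ${sales_2020 / 1e9:.1f} B")          # ~$19.1 B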

Tuesday, May 11, 2021

Monday, May 10, 2021

Omnivision to Announce 0.61um Pixel Sensor

SparrowNews quotes Chinese sources saying that Omnivision is about to announce the OV60A sensor with 0.61um pixels.

The 1/2.8-inch 60MP OV60A is the world’s first 0.61um image sensor for mobile phone front and rear cameras. The four-in-one CFA allows near-pixel merging to deliver 15MP images with 4x the sensitivity, providing the equivalent performance of a 1.22 micron pixel for preview and native 4K video, with the additional pixels needed for EIS.
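
For context, the quoted "4x the sensitivity" and 1.22 µm equivalent pixel follow directly from 2x2 same-color binning; a trivial sketch of the arithmetic:

    # 2x2 (four-in-one) binning arithmetic for the quoted OV60A figures.
    pixel_um = 0.61
    full_res_mp = 60

    binned_pixel_um = pixel_um * 2                         # merged cell pitch
    binned_res_mp = full_res_mp / 4                        # 60 MP -> 15 MP
    sensitivity_gain = (binned_pixel_um / pixel_um) ** 2   # 4x collecting area

    print(binned_pixel_um, binned_res_mp, sensitivity_gain)   # 1.22 15.0 4.0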

The sensor also supports a low-power mode for “always-on” sensing, which saves battery life when used in conjunction with the phone’s artificial intelligence features. The “always-on” low-power modes include ambient light sensing for wake-up and a low-power streaming mode. The sensor additionally supports dual I/O voltage rails (1.8V and 1.2V) and a C-PHY interface.


Here is a timeline for the renewed pixel size race:

Sunday, May 09, 2021

Artilux Dramatically Reduces Ge-on-Si PD Dark Current

An Artilux whitepaper presents the company's latest progress in dark current reduction:

"In this white paper, we proudly announce Artilux Halcyon GeSi Technology, which reduces dark current and DCR by more than 3 orders of magnitude compared to what was commonly known in past literature. Moreover, this breakthrough can be adopted in a wide variety of photodetectors with customized pixel arrays. With such unprecedented performance and attribute, we expect Artilux Halcyon GeSi Technology will soon be applied to multiple growing market segments ranging from NIR and SWIR image sensors, hyperspectral image sensor, 3D and 4D LiDAR sensors and beyond by working with our partners. These markets are estimated to have strong growth with double-digit CAGR (compound annual growth rate) between 2021 to 2025.

To provide a fair comparison to past literature, we fabricated a series of normal incidence photodetectors at various sizes and measured their dark currents. The resulting data with the use of Artilux Halcyon GeSi Technology are shown in Fig. 1.

To evaluate the noise performance of these photodetectors when being used in linear mode or in Geiger-mode photodetection, it’s standard to define the so-called bulk dark current density (unit: mA/cm2) and surface dark current density (unit: μA/cm) and extract them from the data shown in Fig. 1. In past literature, these two numbers were reported roughly in the order of 10 mA/cm2 and 10 μA/cm, respectively. With the use of Artilux Halcyon GeSi Technology, these two numbers can be drastically reduced to roughly a few μA/cm2 and a few nA/cm, respectively, which translates into more than 3 orders of magnitude improvement!
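
To make the improvement concrete, the total dark current of a detector can be decomposed into a bulk term scaling with area and a surface term scaling with perimeter. A rough Python sketch using the order-of-magnitude densities quoted above; the 100 µm detector diameter is an assumed example, not an Artilux figure:

    import math

    # I_dark = J_bulk * Area + J_surf * Perimeter, for a circular detector.
    def dark_current_amps(diameter_um, j_bulk_a_cm2, j_surf_a_cm):
        area_cm2 = math.pi * (diameter_um * 1e-4 / 2) ** 2
        perimeter_cm = math.pi * diameter_um * 1e-4
        return j_bulk_a_cm2 * area_cm2 + j_surf_a_cm * perimeter_cm

    old = dark_current_amps(100, j_bulk_a_cm2=10e-3, j_surf_a_cm=10e-6)  # past literature
    new = dark_current_amps(100, j_bulk_a_cm2=2e-6, j_surf_a_cm=2e-9)    # Halcyon order of magnitude
    print(f"old ~{old*1e9:.0f} nA, new ~{new*1e12:.0f} pA, ratio ~{old/new:.0f}x")   # ~5000x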

Halcyon GeSi Technology works in conjunction with Artilux proprietary scaling design. For a SWIR image sensor with less than 5 μm pixel pitch at the low bias voltage typical for this application, the expected performance in the pilot run is on the order of a few to tens of fA dark current (uncooled). For a direct ToF (time of flight) 3D sensor with slightly larger pixel pitch and around 15V breakdown voltage, the expected performance in the pilot run is on the order of tens to hundreds of kHz DCR (uncooled). We will continue to improve this performance in future Artilux products."

Apparently, the data on the graph shows the previous generation Ge-on-Si dark current:

Saturday, May 08, 2021

A Low Dark Current 160 dB DR Logarithmic Pixel

MDPI paper "A Low Dark Current 160 dB Logarithmic Pixel with Low Voltage Photodiode Biasing" is written by 2 authors with 4 affiliations: Alessandro Michel Brunetti and Bhaskar Choubey from University of Oxford, Universität Siegen, Fraunhofer Institute of Microelectronics Circuits and Systems, and Absensing.

"A typical logarithmic pixels suffer from poor performance under low light conditions due to a leakage current, usually referred to as the dark current. In this paper, we propose a logarithmic pixel design capable of reducing the dark current through low-voltage photodiode biasing, without introducing any process modifications. The proposed pixel combines a high dynamic range with a significant improvement in the dark response compared to a standard logarithmic pixel. The reported experimental results show this architecture to achieve an almost 35 dB improvement at the expense of three additional transistors, thereby achieving an unprecedented dynamic range higher than 160 dB."

Friday, May 07, 2021

Goodix Engineers Become Finalists for the European Inventor Award 2021

ChinaDaily: The European Patent Office announces the finalists for the European Inventor Award 2021. Goodix engineers Bo Pi and Yi He have been nominated for the world's first fingerprint sensor able to check both fingerprint patterns and the presence of blood flow, described in patent EP3072083.

Based on their combined expertise - Pi as a physicist and technologist with extensive electrical sensing knowledge, and He as a former optoelectronics professor with experience in fiber optics and optical devices - the pair made two key discoveries that would later form the basis of their invention. First, that infrared light sensors - typically used by doctors for medical diagnoses - could be used to measure a finger pulse. Second, that a finger pressed against a sensor forces blood out of the capillaries. These findings led to the development of a new kind of optical sensor capable of capturing these changes while simultaneously tracing a map of the user's fingerprint. The combination of these multiple technologies makes the world's first integrated Live Finger Detection (LFD) sensor developed by Pi and He almost impossible to deceive, setting a new benchmark for smartphone security.

Today, Pi is Chief Technology Officer at Goodix while He is R&D Director.


Chronoptics on iToF Camera Design Challenges

Chronoptics CTO Refael Whyte publishes a nice article "Indirect Time-of-Flight Depth Camera Systems Design" about different trade-offs and challenges in ToF cameras. A few quotes:

"The table below compares two image sensors the [Melexis] MLX75027 and [Espros] EPC635, both of which have publicly available datasheets.


The MLX75027 has 32 times more pixels than the EPC635, but that comes at a higher price. The application of the depth data dictates the image sensor resolution required.

The pixel size, demodulation contrast and quantum efficiency are all metrics relating to the efficiency of capture of reflected photons. The bigger the pixel active area, the bigger the surface area over which incoming photons can be collected. The pixel’s active area is the fill factor multiplied by its size. Both the MLX75027 and EPC635 are back side illuminated (BSI), meaning 100% fill factor. The quantum efficiency is the ratio of electrons generated over the number of arriving photons. The higher the quantum efficiency, the more photons are captured. The demodulation contrast is a measure of the number of captured photons that are used in the depth measurement.

Illumination sources should be designed to IEC 60825-1:2014, the specification for eye safety. The other aspect of eye safety design is having no single point of failure that makes the illumination source non-eye-safe. For example, if the diffuser cracks and exposes the laser elements, is it still eye safe? If not, the crack needs to be detected and the laser turned off, or two barriers used in case one fails. Indium tin oxide (ITO) can be used as a coating: as it is electrically conductive and optically transparent, its impedance will change if the surface is damaged. Or a photodiode in the laser can be used to detect changes in the back reflection indicating damage. The same considerations around power supplies shorting and other failure modes need to be considered."
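
The three efficiency metrics discussed in the quote multiply together when estimating how many photons actually contribute to a depth measurement; a rough sketch, where every parameter value is an assumed placeholder rather than an MLX75027 or EPC635 datasheet number:

    # Useful signal electrons per depth measurement: collection area
    # (pitch^2 x fill factor) x quantum efficiency x demodulation contrast.
    def useful_signal_electrons(photon_flux_per_um2, pixel_pitch_um,
                                fill_factor, qe, demod_contrast):
        collected_photons = photon_flux_per_um2 * pixel_pitch_um ** 2 * fill_factor
        return collected_photons * qe * demod_contrast

    # Example with assumed values: a BSI pixel (fill factor ~1.0) under an
    # arbitrary return flux of 50 photons per um^2 per integration.
    sig = useful_signal_electrons(50, pixel_pitch_um=6.5,
                                  fill_factor=1.0, qe=0.4, demod_contrast=0.8)
    print(f"~{sig:.0f} electrons contribute to the depth estimate")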

Assorted Videos: ams, Synopsys, ON Semi

Ams presents the use cases for its miniature NanEyeC camera module:

Synopsys presents its "Holistic Design Approach to LiDAR:"

ON Semi publishes a webinar about its low-power Event Triggered Imaging Using the RSL10 Smart Shot Camera:

Thursday, May 06, 2021

Gpixel and Tower Announce VGA iToF Sensor

GlobeNewswire: Gpixel and Tower announce Gpixel’s iToF sensor, GTOF0503, utilizing Tower’s pixel on its 65nm pixel-level stacked BSI CIS technology, fabricated in its Uozu, Japan facility. The GTOF0503 sensor features a 5um 3-tap iToF pixel in a 640 x 480 array, aimed at vision-guided robotics, bin picking, automated guided vehicles, automotive and factory automation applications.

“We are very proud to announce the release of our new iToF sensor, entering the 3D imaging market, made possible by our collaboration with Tower’s team. Tower’s vast expertise in the development of iToF image sensor technology provided an outstanding platform for the design of this cutting-edge product,” said Wim Wuyts, Chief Commercial Officer, Gpixel. “This collaboration produced a unique sensor product that is perfectly suited to serve a wide variety of fast-growing applications and sets a roadmap for future successful developments.”

A demodulation contrast of > 80% is achieved with modulation frequencies of up to 165 MHz at either 60 fps in Single Modulation Frequency (SMF) or 30 fps in Dual Modulation Frequency (DMF) depth mode.
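
A quick way to see why the dual modulation frequency mode matters: the unambiguous range of an iToF measurement is set by the modulation frequency, and combining two frequencies extends it to the range of their greatest common divisor. A minimal sketch; the second frequency below is an assumed example, not a GTOF0503 specification:

    import math

    C = 299_792_458.0    # speed of light, m/s

    def unambiguous_range_m(f_mod_hz):
        # Maximum range before the iToF phase measurement wraps around.
        return C / (2 * f_mod_hz)

    f1 = 165e6           # maximum modulation frequency quoted above
    f2 = 20e6            # assumed second frequency for illustration
    print(f"single 165 MHz: {unambiguous_range_m(f1):.2f} m")              # ~0.91 m
    f_eff = math.gcd(int(f1), int(f2))                                     # 5 MHz
    print(f"dual-frequency effective: {unambiguous_range_m(f_eff):.1f} m") # ~30 m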

“Tower is excited to take an important role in this extraordinary project, collaborating with Gpixel’s talented team of experts in the field of sensor development and bringing to market this new, cutting-edge iToF sensor,” said Avi Strum, SVP and GM of the Sensors & Displays Business Unit, Tower Semiconductor. “Gpixel is a valuable and long-term partner, and we are confident that this partnership will continue to bring to market additional intriguing solutions.”

The GTOF0503 is available as a bare die and in an 11 x 11 mm ceramic package. Samples (bare die) and evaluation kits are available as well.

AIStorm's AI-in-Imager Uses Tower's Hi-K VIA Capacitor Memory

GlobeNewswire, BusinessWire: AIStorm and Tower announce that AIStorm’s new AI-in-Imager products will use AIStorm’s electron multiplication architecture and Tower’s Hi-K VIA capacitor memory, instead of digital calculations, to perform AI computation at the pixel level. This saves the silicon real estate, multiple-die packaging costs, and power required by competitive digital systems, and eliminates the need for input digitization. The Hi-K via capacitors reside in the metal layers and thus allow the AI to be built directly into the pixel matrix without any compromise on pixel density or size.

“This new imager technology opens up a whole new avenue of ‘always on’ functionality. Instead of periodically taking a picture and interfacing with an external AI processor through complex digitization, transport and memory schemes, AIStorm’s pixel matrix is itself the processor & memory. No other technology can do that,” said Avi Strum, SVP of Sensors and Displays BU at Tower Semiconductor.

AIStorm has built mobile models, under the MantisNet & Cheetah families, that use the direct pixel coupling of the AI matrix to offer sub-100uW “always on” operation with best-in-class latencies, and post wakeup processing of up to 200 TOPs/W.


Himax Reports 70% YoY CMOS Sensor Sales Growth

GlobeNewswire: Himax reports that its image sensor sales grew by 70% YoY in Q1 2021. However, it appears that this spectacular growth does not continue into Q2:

"The CIS revenue is expected to be flattish sequentially in the second quarter. The Company’s shipment has been badly capped by the foundry capacity despite surging customer demands for the CMOS image sensors for web camera and notebook. Nevertheless, a decent growth is expected in second half of 2021 thanks to a major engagement from a major existing customer.

Himax's industry-first 2-in-1 CMOS image sensor, supporting video conferencing and ultralow-power AI facial recognition, has been designed into some of the most stylish, slim-bezel notebook models from certain major notebook names. Small-volume production started in the fourth quarter of last year. Meaningful ramp-up volume is expected in the coming quarters.

Regarding ultralow power always-on CMOS image sensor that targets always-on AI applications, the Company is getting growing feedback and design adoptions from customers globally for various markets, such as car recorders, surveillance, smart electric meters, drones, smart home appliances, and consumer electronics. More progress will be reported in due course."

Samsung, UCSD, and University of Southern Mississippi Develop SWIR to Visible Image Converter

Phys.org, Newswise, UCSD: Advanced Functional Materials paper "Organic Upconversion Imager with Dual Electronic and Optical Readouts for Shortwave Infrared Light Detection" by Ning Li, Naresh Eedugurala, Dong-Seok Leem, Jason D. Azoulay, and Tse Nga Ng from Samsung Advanced Institute of Technology, UCSD, and University of Southern Mississippi presents a flat SWIR-to-visible converting device:

"...an organic upconversion imager that is efficient in both optical and electronic readouts, extending the capability of human and machine vision to 1400 nm, is designed and demonstrated. The imager structure incorporates interfacial layers to suppress non‐radiative recombination and provide enhanced optical upconversion efficiency and electronic detectivity. The photoresponse is comparable to state‐of‐the‐art organic infrared photodiodes exhibiting a high external quantum efficiency of ≤35% at a low bias of ≤3 V and 3 dB bandwidth of 10 kHz. The large active area of 2 cm2 enables demonstrations such as object inspection, imaging through smog, and concurrent recording of blood vessel location and blood flow pulses. These examples showcase the potential of the authors’ dual‐readout imager to directly upconvert infrared light for human visual perception and simultaneously yield electronic signals for automated monitoring applications."