Wednesday, December 11, 2019

D-ToF LiDAR Model

The paper "Modeling and Analysis of a Direct Time-of-Flight Sensor Architecture for LiDAR Applications" by Preethi Padmanabhan, Chao Zhang, and Edoardo Charbon of EPFL and TU Delft is part of the MDPI Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"Direct time-of-flight (DTOF) is a prominent depth sensing method in light detection and ranging (LiDAR) applications. Single-photon avalanche diode (SPAD) arrays integrated in DTOF sensors have demonstrated excellent ranging and 3D imaging capabilities, making them promising candidates for LiDARs. However, high background noise due to solar exposure limits their performance and degrades the signal-to-background noise ratio (SBR). Noise-filtering techniques based on coincidence detection and time-gating have been implemented to mitigate this challenge, but 3D imaging of a wide dynamic range scene is an ongoing issue. In this paper, we propose a coincidence-based DTOF sensor architecture to address the aforementioned challenges. The architecture is analyzed using a probabilistic model and simulation. A flash LiDAR setup is simulated with typical operating conditions of a wide-angle field-of-view (FOV = 40°) in a 50 klux ambient light assumption. Single-point ranging simulations are obtained for distances up to 150 m using the DTOF model. An activity-dependent coincidence is proposed as a way to improve imaging of wide dynamic range targets. An example scene with targets ranging between 8–60% reflectivity is used to simulate the proposed method. The model predicts that a single threshold cannot yield an accurate reconstruction and a higher (lower) reflective target requires a higher (lower) coincidence threshold. Further, a pixel-clustering scheme is introduced, capable of providing multiple simultaneous timing information as a means to enhance throughput and reduce timing uncertainty. Example scenes are reconstructed to distinguish up to 4 distinct target peaks simulated with a resolution of 500 ps. Alternatively, a time-gating mode is simulated wherein the DTOF sensor performs target-selective ranging.
Simulation results show reconstruction of a 10% reflective target at 20 m in the presence of a retro-reflective equivalent with a 60% reflectivity at 5 m within the same FOV."
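The coincidence filtering the abstract refers to can be illustrated with a toy simulation. This is not the authors' probabilistic model; all parameters below (SPAD count, detection probabilities, bin width) are purely illustrative. The idea is that ambient photons arrive at random times while laser-return photons pile up in one time bin, so requiring several SPADs to fire in the same bin suppresses the background:

```python
import random

def simulate_histogram(n_cycles=2000, bins=100, signal_bin=40,
                       p_signal=0.3, p_bg=0.02,
                       n_spads=8, coincidence=3, seed=1):
    """Toy coincidence-filtered timing histogram (illustrative numbers).

    Per laser cycle, each of n_spads SPADs may fire on ambient light
    (prob p_bg, uniform over all time bins) and/or on the return pulse
    (prob p_signal, always landing in signal_bin).  An event is recorded
    only if at least `coincidence` SPADs fire in the same bin.
    """
    random.seed(seed)
    hist = [0] * bins
    for _ in range(n_cycles):
        counts = [0] * bins
        for _ in range(n_spads):
            if random.random() < p_bg:       # ambient photon, random bin
                counts[random.randrange(bins)] += 1
            if random.random() < p_signal:   # laser return photon
                counts[signal_bin] += 1
        for b, c in enumerate(counts):
            if c >= coincidence:             # coincidence filter
                hist[b] += 1
    return hist

hist = simulate_histogram()
peak = max(range(len(hist)), key=hist.__getitem__)
print("detected peak bin:", peak)  # the signal bin (40) dominates
```

Raising the coincidence threshold suppresses background further but also rejects weak returns, which is the trade-off behind the paper's activity-dependent threshold for low- vs. high-reflectivity targets.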

Intel Unveils Indoor MEMS LiDAR

Intel announces the RealSense LiDAR Camera L515, able to generate 23M depth points per second with mm accuracy. The L515 is focused on indoor applications that require depth data at high resolution and high accuracy. It uses a proprietary MEMS mirror scanner, enabling better laser power efficiency than other ToF technologies. The new camera has an internal vision processor, motion blur artifact reduction, and short photon-to-depth latency.

The Intel RealSense LiDAR Camera L515 is priced at $349 and is available for pre-order now.

The main features of the L515 indoor LiDAR:

  • Laser wavelength: 860nm
  • Technology: Laser scanning
  • Depth Field of View (FOV): 70° × 55° (±2°)
  • Maximum Distance: 9m
  • Minimum Depth Distance: 0.25m
  • Depth Output Resolution & Frame Rate: Up to 1024 × 768 depth pixels, 30 fps
  • Ambient Temperature: 0-30 °C
  • Power consumption: less than 3.5W
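As in any direct ToF system, the L515's depth values ultimately come from the photon round-trip time. A minimal sketch of that relation (the camera's internal processing is of course far more involved):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_to_depth(round_trip_s: float) -> float:
    """Depth from a direct-ToF round-trip time: light travels out and back."""
    return C * round_trip_s / 2.0

def depth_to_tof(depth_m: float) -> float:
    """Round-trip time corresponding to a given depth."""
    return 2.0 * depth_m / C

# The L515's 9 m maximum range corresponds to a round trip of roughly 60 ns:
print(f"{depth_to_tof(9.0) * 1e9:.1f} ns")
```

The 60 ns figure shows why mm accuracy is demanding: 1 mm of depth corresponds to under 7 ps of timing resolution.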


Isorg to Demo Full-Screen Fingerprint Sensor for Smartphones

ALA News: Isorg will demonstrate its full-screen Fingerprint-on-Display (FoD) module for improved multi-finger smartphone authentication at CES 2020. It supports up to four fingers simultaneously touching a smartphone display.

Currently available solutions are restricted to single finger identification within a surface area of less than 10mm x 10mm. In contrast, Isorg’s FoD module supports one- to four-finger authentication across the entire dimensions of the 6-inch smartphone display (or even larger). In addition, the module is very thin, less than 0.3mm thick, so integration into smartphones is made easy for OEMs.

“Isorg is excited to demonstrate what could be the future in multi-fingerprint-on-display security to strengthen authentication on smartphones and wearable devices,” said Jean-Yves Gomez, CEO at Isorg. “Our Fingerprint-on-Display module provides OEMs with a complete solution. In addition to the image sensor, it includes other hardware: optimized thin-film optical filters developed in-house and driving electronics, as well as software from our industrial partners covering the interface with the smartphone OS and the matching algorithm. Isorg has achieved a significant milestone in designing a scalable FoD solution that provides excellent performance results; it is compatible with foldable displays and easier to implement than existing technologies.”

Smartphone OEMs will be able to sample Isorg’s Fingerprint-on-Display module in spring 2020.

Assorted News: CIS Fabs Capacity, Espros, Artilux

China Money Network: "As reported previously, the CIS market is mainly divided into mobile phone and security applications. Mobile phone sensors are basically manufactured on 12-inch wafers using a 55nm process, while security chips are manufactured on 8-inch wafers using a 0.11um process. Among domestic wafer foundries, SMIC, Huahong Grace, and XMC are among the big players. Recently, the newly established 12-inch fab of Guangzhou-based Cansemi Technology has also won the favor of large local CIS customers, and the company is bringing the related products into production.

According to a Semiinsight reporter citing supply chain sources, with fab capacity this tight, the wafer delivery time for CIS chips has been extended to four months, and the time required for packaging has increased by two to three weeks.

In addition, the popularity of under-screen optical fingerprint solutions, which use the same process as CIS, has exacerbated this phenomenon. "Because the die size of the under-screen optical sensor is relatively large, the number of dies cut per wafer is limited. The increasing demand makes the supply of CIS even more stretched." "For today's CIS manufacturers, whoever has the fab capacity is the boss," a supply chain insider told the Semiinsight reporter in an interview.
"

Espros announces ToF Developers Conference to be held in San Francisco on January 28–30, 2020:

"Over the past four conferences, we have trained more than 130 engineers to successfully design TOF camera systems. Due to the high demand, we have decided to continue our TOF Developer Conference.

There is, at least to our knowledge, no engineering school which addresses TOF and LiDAR as a discipline in its own right. We at ESPROS decided to fill the gap with a training program called the TOF Developer Conference. The objective is to provide a solid theoretical background, a guideline to working implementations based on examples, and practical work with TOF systems. Thus, the TOF Developer Conference shall become the enabler for electronics engineers (BS and MS in EE) to design working TOF systems. It is ideal for engineers who are, or will be, involved in the design of TOF systems. We hope that our initiative helps to close the gap between the promise of TOF sensors and massively deployed TOF applications.
"


PRNewswire: Artilux unveils the world's first GeSi wide-spectrum ToF sensor at CES 2020. The demo, being shown live for the first time, will include an RGB-D camera for logistics applications and robot vision, and a 3D camera system that can operate at a longer wavelength. The sensor is projected to enter mass production in Q1 2020 and targets applications such as mobile devices, automotive LiDAR, and machine vision.

In contrast to existing 3D sensors, which typically operate at 850nm or 940nm, the GeSi sensor can cover the range from 850nm to 1550nm. By utilizing this capability, the new Explore Series sensor substantially reduces the potential risk of eye damage. According to the most recent findings, the power of the laser can safely be at least 10 times greater at 1200-1400nm than at 940nm, which improves performance without compromising on safety for long range and highly accurate 3D imaging; it also means that the safe minimum distance of the laser from the eye can be further reduced to sub-centimeter, following the international standards IEC 60825-1:2007 and IEC 60825-1:2014.

The use of longer NIR wavelengths also minimizes interference from sunlight and enables better performance in outdoor environments. All the breakthroughs are brought about by a new GeSi technology platform developed by Artilux in cooperation with TSMC, enabling the first CMOS-based ToF solution to work with light wavelengths up to 1.55µm. A paper that addresses the sensor design based on a GeSi platform has recently been accepted by ISSCC 2020. Artilux has also updated its Arxiv.org paper from last year with more recent data.

Tuesday, December 10, 2019

6 Types of Random Telegraph Noise

TSMC, the French Atomic Energy Commission (CEA), and Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-SUPAERO), Toulouse, publish a joint MDPI paper "Random Telegraph Noises from the Source Follower, the Photodiode Dark Current, and the Gate-Induced Sense Node Leakage in CMOS Image Sensors" by Calvin Yi-Ping Chao, Shang-Fu Yeh, Meng-Hsu Wu, Kuo-Yu Chou, Honyih Tu, Chih-Lin Lee, Chin Yin, Philippe Paillet, and Vincent Goiffon. The paper is part of the MDPI Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"In this paper we present a systematic approach to sort out different types of random telegraph noises (RTN) in CMOS image sensors (CIS) by examining their dependencies on the transfer gate off-voltage, the reset gate off-voltage, the photodiode integration time, and the sense node charge retention time. Besides the well-known source follower RTN, we have identified the RTN caused by varying photodiode dark current, transfer-gate and reset-gate induced sense node leakage. These four types of RTN and the dark signal shot noises dominate the noise distribution tails of CIS and non-CIS chips under test, either with or without X-ray irradiation. The effect of correlated multiple sampling (CMS) on noise reduction is studied and a theoretical model is developed to account for the measurement results."


"Continued improvement of RTN is essential for enhancing CIS performance when the pixel scales down to 0.7 um pitch and beyond. Understanding the RTN behavior and classification of the RTN pixels into different types are the necessary first step in order to reduce RTN through pixel design and minimizing process-induced damage (PID). In this paper, we identified the SF-RTN, the DC-RTN, the TG GIDL-RTN, and the RST GIDL-RTN in active pixels according to their dependence on the PD integration time, the SN charge retention time, the V_DG across the TG device, and the V_SG across the RST device, in CIS and non-CIS chips, with and without X-ray irradiation.

We further studied the effect of CMS as a useful technique for RTN reduction through circuit design. A theoretical model was presented to account for the time-dependence of the effectiveness of CMS, which explained the measured data reasonably well. The process nodes used to manufacture the pixel-array and ASIC layers in stacked CIS are expected to gradually follow Moore's Law down the scaling path. Extending the study of RTN to high-K metal gate and FinFET technologies is an important goal for our future investigation.
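The basic mechanism of CMS noise reduction can be illustrated with a toy readout simulation. This sketch covers only white (thermal) read noise, where averaging M reset and M signal samples reduces the read noise roughly as 1/sqrt(M); the paper's model additionally treats the time constants of RTN, which a white-noise example like this does not capture. All numbers below are illustrative:

```python
import random
import statistics

def cms_readout(noise_samples, m):
    """Correlated multiple sampling: average m reset samples and m signal
    samples, then take the difference.  The true signal level is 0 here,
    so the returned value is purely the residual noise."""
    reset = sum(noise_samples[:m]) / m
    signal = sum(noise_samples[m:2 * m]) / m
    return signal - reset

def measure(noise_gen, m, trials=4000, seed=0):
    """Standard deviation of the CMS output over many readouts."""
    rng = random.Random(seed)
    vals = [cms_readout([noise_gen(rng) for _ in range(2 * m)], m)
            for _ in range(trials)]
    return statistics.pstdev(vals)

white = lambda rng: rng.gauss(0.0, 1.0)  # unit-variance white read noise

s1 = measure(white, m=1)   # plain CDS (one reset / one signal sample)
s8 = measure(white, m=8)   # CMS with 8-fold averaging
print(s1, s8)              # s1/s8 approaches sqrt(8) for white noise
```

For RTN, by contrast, the benefit depends on how the RTN time constants compare to the total sampling window, which is why the paper needs a time-dependent model of CMS effectiveness.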
"

NHK Organic 8K Image Sensor

SMPTE publishes NHK presentation "8K Camera Recorder using Organic-photoconductive CMOS Image Sensor & High-quality Codec" by Shingo Sato:

Sony News: TSMC, Third Point, Automotive Sensors

Digitimes reports that TSMC received CIS orders from Sony "and will fabricate the chips using 40nm process technology at Fab 14A in Tainan, southern Taiwan." TSMC has placed equipment orders for additional 40nm process capacity at the fab to fulfill Sony's CIS orders. The new equipment is to be installed in Q2 2020, with pilot runs slated for August next year.

Taiwan TechNews adds: "With Sony's current production capacity insufficient, Sony has for the first time released orders to TSMC for foundry production, which not only adds orders for 5G-related products to TSMC, but also boosts revenue momentum for its high-end image sensor supply chain.

...although Sony and TSMC had a cooperative relationship in the past, it was limited to the manufacturing of logic products; Sony did not place orders with TSMC for high-end image sensors. This time, due to insufficient production capacity, this first release of orders has TSMC actively preparing. The batch of orders is expected to be built in TSMC's Fab 14A on a 40nm process. After TSMC expands the production line, mass production is expected in 2021, reaching a scale of 20,000 wafers per month. In the future, it is not even ruled out that the cooperation will extend to processes at 28nm and below. TSMC did not comment on this.
"

SeekingAlpha publishes Third Point response on Sony refusal to spin-off its CIS business:

"Most investors expected that following a lengthy review, Sony would share some meaningful plans to close the yawning gap between its share price and intrinsic value.... While we did not expect that all our requests, such as the separation of the image sensor business, would be addressed immediately, we did expect that the Company would make some recommendations to address the structural impediments to long‐term value creation for Sony's shareholders.

Instead, Sony revealed that the review's conclusion was to maintain the status quo with no concrete proposals to improve the business. As students and practitioners of Japanese business principles like kaizen, it is difficult for us to imagine that a company of Sony's size and complexity could not find a single concrete action to improve its business and valuation.

We are committed to a continued constructive dialogue with the Company and to creating long‐term value at Sony for all stakeholders. Discussions are ongoing, guided by our view that Sony remains one of the most undervalued large capitalization stocks in the world.
"


Sony publishes an interview "Will Sony's automotive CMOS image sensor be a key to autonomous driving?" with its automotive image sensor designers Yuichi Motohashi, Satoko Iida, and Naoya Sato. A few interesting quotes:

"...automotive cameras are difficult to compare and evaluate. Although the performance is good, there is no established method to evaluate them, and we can't emphasize our advantages. So, we always consider how we can create a yardstick to prove our superiority.

The image sensor development cycle is two to three years, but it takes longer than other applications for those image sensors to be actually integrated into cars in the market. In fact, the negotiations we're having right now are for cars that will hit the market in five years.

While we emphasize the "low illumination characteristics," the core competence Sony has cultivated over many years, we have developed Sony's original pixel architecture based on the "dynamic range expansion technology with single exposure," which is strongly demanded for automotive image sensors. I think this technology is unbeatable.

...process technologies have become commoditized today, and it has become difficult to differentiate them. It is necessary to make differentiation through pixel architecture and show superior characteristics.
"

Monday, December 09, 2019

Sony Announces 2x2 On-Chip Lens For Mobile Sensors

SonyAlphaRumors: Sony presents 2x2 On-Chip Lens (OCL) technology for high-speed focus, high-resolution, high-sensitivity, and HDR:


"In conventional technologies, the variance in sensitivity per pixel caused by the structure (described below), which places an on-chip lens that spans four pixels, was a major issue. However, we have successfully developed a high-performance image sensor with high image quality through optimization of the device structure and the development of a new signal processing technology."

The main features of the Sony lens structure:
  • Phase differences can be detected across all pixels
  • Improved phase difference detection performance (focus performance)
  • Focus performance at low light intensity
  • Focus performance that does not depend on the object shape or pattern
  • Real-time HDR output

Sony Quietly Acquires Insightness

As mentioned in the comments, the Zurich-based event-based sensor startup Insightness is now a part of Sony Semiconductor Solutions Group:


A few slides about Insightness:

Xiaomi Under-Display Selfie Camera Patent Application

CnTechPost noticed Xiaomi patent application US20190369422 "Display Structure and Electronic Equipment" by Zhihui Zeng, Anyu Liu, Lei Tang, Zhongsheng Jiang, Shaoxing Hu, and Chengfang Sun.

"...there is provided a display structure, which includes: a light adjusting component, where an operating state of the light adjusting component includes a light transmitting state and a polarization state, and the light adjusting component includes a first region and a second region which are independently controllable; and a display screen including a plurality of independently controllable pixels. The light adjusting component is located at a light emitting side of the display screen, and when the first region is in the light transmitting state, the pixels that are in the display screen and correspond to the first region are disabled to allow light emitted from the first region to penetrate through the display screen."

On the figures below,
  • the reference numeral 1 indicates a display structure;
  • the reference numeral 11 indicates a display screen;
  • the reference numeral 12 indicates a light adjusting component;
  • the reference numeral 2 indicates a lens.