Friday, May 25, 2018

Pulse-Based ToF Sensing

MDPI Special Issue Depth Sensors and 3D Vision publishes the University of Siegen, Germany, paper "Pulse Based Time-of-Flight Range Sensing" by Hamed Sarbolandi, Markus Plack, and Andreas Kolb.

"Pulse-based Time-of-Flight (PB-ToF) cameras are an attractive alternative range imaging approach, compared to the widely commercialized Amplitude Modulated Continuous-Wave Time-of-Flight (AMCW-ToF) approach. This paper presents an in-depth evaluation of a PB-ToF camera prototype based on the Hamamatsu area sensor S11963-01CR. We evaluate different ToF-related effects, i.e., temperature drift, systematic error, depth inhomogeneity, multi-path effects, and motion artefacts. Furthermore, we evaluate the systematic error of the system in more detail, and introduce novel concepts to improve the quality of range measurements by modifying the mode of operation of the PB-ToF camera. Finally, we describe the means of measuring the gate response of the PB-ToF sensor and using this information for PB-ToF sensor simulation."

Olympus Multi-Storied Photodiode Sensor

MDPI Special Issue on the 2017 International Image Sensor Workshop (IISW) publishes the Olympus paper "Multiband Imaging CMOS Image Sensor with Multi-Storied Photodiode Structure" by Yoshiaki Takemoto, Mitsuhiro Tsukimura, Hideki Kato, Shunsuke Suzuki, Jun Aoki, Toru Kondo, Haruhisa Saito, Yuichi Gomi, Seisuke Matsuda, and Yoshitaka Tadaki.

"We developed a multiband imaging CMOS image sensor (CIS) with a multi-storied photodiode structure, which comprises two photodiode (PD) arrays that capture two different images, visible red, green, and blue (RGB) and near infrared (NIR) images at the same time. The sensor enables us to capture a wide variety of multiband images which is not limited to conventional visible RGB images taken with a Bayer filter or to invisible NIR images. Its wiring layers between two PD arrays can have an optically optimized effect by modifying its material and thickness on the bottom PD array. The incident light angle on the bottom PD depends on the thickness and structure of the wiring and bonding layer, and the structure can act as an optical filter. Its wide-range sensitivity and optimized optical filtering structure enable us to create the images of specific bands of light waves in addition to visible RGB images without designated pixels for IR among same pixel arrays without additional optical components. Our sensor will push the envelope of capturing a wide variety of multiband images."

Concept of multi-storied photodiode CMOS sensor based on 3D stacked technology. There are two layers of PD arrays, one in the top and the other in the bottom semiconductor. The top PD array converts a part of incident light into signals and works as an optical filter for the bottom PD array. The bottom PD array converts light that penetrates through the top substrate into signals, which means the top substrate acts mainly as a visible light sensor and the bottom one is an invisible IR light sensor.
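
To illustrate why the stacking works, here is a toy Beer-Lambert estimate of how much light a thin top silicon photodiode absorbs versus what it transmits to the bottom photodiode. The absorption coefficients and the 3 um top-layer thickness are rough, assumed values, not Olympus device data:

```python
# Toy Beer-Lambert split of incident light between a thin top silicon PD and
# the PD underneath it (illustrative numbers only, not the Olympus device).
import math

# Rough absorption coefficients of silicon, per micrometer (order of magnitude)
ALPHA_PER_UM = {"blue_450nm": 2.5, "green_550nm": 0.7,
                "red_650nm": 0.3, "nir_850nm": 0.05}

def split(alpha_per_um, top_thickness_um=3.0):
    """Return (fraction absorbed in top PD, fraction passed to bottom PD)."""
    absorbed_top = 1.0 - math.exp(-alpha_per_um * top_thickness_um)
    return absorbed_top, 1.0 - absorbed_top

for band, alpha in ALPHA_PER_UM.items():
    top, bottom = split(alpha)
    print(f"{band}: top PD {top:.0%}, transmitted to bottom PD {bottom:.0%}")
```

With these numbers, blue and green are almost fully absorbed in the top layer, while most of the NIR reaches the bottom array, which is the behavior the paper exploits.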

Black Phosphorus NIR Photodetectors

MDPI Sensors publishes a paper "Multilayer Black Phosphorus Near-Infrared Photodetectors" by Chaojian Hou, Lijun Yang, Bo Li, Qihan Zhang, Yuefeng Li, Qiuyang Yue, Yang Wang, Zhan Yang, and Lixin Dong from Harbin Institute of Technology (China), Michigan State University (USA), and Soochow University (China).

"Black phosphorus (BP), owing to its distinguished properties, has become one of the most competitive candidates for photodetectors. However, there has been little attention paid on photo-response performance of multilayer BP nanoflakes with large layer thickness. In fact, multilayer BP nanoflakes with large layer thickness have greater potential from the fabrication viewpoint as well as due to the physical properties than single or few layer ones. In this report, the thickness-dependence of the intrinsic property of BP photodetectors in the dark was initially investigated. Then the photo-response performance (including responsivity, photo-gain, photo-switching time, noise equivalent power, and specific detectivity) of BP photodetectors with relative thicker thickness was explored under a near-infrared laser beam (╬╗IR = 830 nm). Our experimental results reveal the impact of BP’s thickness on the current intensity of the channel and show degenerated p-type BP is beneficial for larger current intensity. More importantly, the photo-response of our thicker BP photodetectors exhibited a larger responsivity up to 2.42 A/W than the few-layer ones and a fast response photo-switching speed (response time is ~2.5 ms) comparable to thinner BP nanoflakes was obtained, indicating BP nanoflakes with larger layer thickness are also promising for application for ultra-fast and ultra-high near-infrared photodetectors."

Unfortunately, no spectral response or QE measurements are published. The EQE graphs at 830nm show a large internal gain of the photosensitive FET structures:
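
As a rough sanity check (my own back-of-the-envelope conversion, not a figure from the paper), the reported 2.42 A/W responsivity at 830 nm already corresponds to an EQE well above unity, which is only possible with internal gain:

```python
# Convert responsivity (A/W) to external quantum efficiency at a given
# wavelength: EQE = R * h * c / (q * lambda). Numbers below are my own check.
H = 6.626e-34   # Planck constant, J*s
C = 3.0e8       # speed of light, m/s
Q = 1.602e-19   # elementary charge, C

def eqe_from_responsivity(resp_a_per_w, wavelength_m):
    # (electrons out per second) / (photons in per second)
    return resp_a_per_w * H * C / (Q * wavelength_m)

print(eqe_from_responsivity(2.42, 830e-9))  # ~3.6, i.e. ~360%: EQE > 1 => gain
```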

Thursday, May 24, 2018

AEye Introduces Dynamic Vixels

PRNewswire: AEye introduces a new sensor data type called Dynamic Vixels. In simple terms, Dynamic Vixels combine pixels from digital 2D cameras with voxels from AEye's Agile 3D LiDAR sensor into a single super-resolution sensor data type.

"There is an ongoing argument about whether camera-based vision systems or LiDAR-based sensor systems are better," said Luis Dussan, Founder and CEO of AEye. "Our answer is that both are required – they complement each other and provide a more complete sensor array for artificial perception systems. We know from experience that when you fuse a camera and LiDAR mechanically at the sensor, the integration delivers data faster, more efficiently and more accurately than trying to register and align pixels and voxels in post-processing. The difference is significantly better performance."

"There are three best practices we have adopted at AEye," said Blair LaCorte, Chief of Staff. "First: never miss anything; second: not all objects are equal; and third: speed matters. Dynamic Vixels enables iDAR to acquire a target faster, assess a target more accurately and completely, and track a target more efficiently – at ranges of greater than 230m with 10% reflectivity."

Qualcomm Snapdragon 710 Supports 6 Cameras, ToF Sensing, More

The Qualcomm 10nm Snapdragon 710 processor features a number of advanced imaging capabilities:
  • Qualcomm Spectra 250 ISP
  • 2nd Generation Spectra architecture
  • 14-bit image signal processing
  • Up to 32MP single camera
  • Up to 20MP dual camera
  • Can connect up to 6 different cameras (many configurations possible)
  • Multi-Frame Noise Reduction (MFNR) with accelerated image stabilization
  • Hybrid Autofocus with support for dual phase detection (2PD) sensors
  • Ultra HD video capture (4K at 30 fps) with Motion Compensated Temporal Filtering (MCTF)
  • Takes 4K Ultra HD video at up to 40% lower power
  • 3D structured light and time of flight active depth sensing

Mobileye Autonomous Car Fails in Demo

EETimes' Junko Yoshida publishes an explanation of the Mobileye self-driving car demo in which the car passes through a junction on a red light:

"The public AV demo in Jerusalem inadvertently allowed a local TV station’s video camera to capture Mobileye’s car running a red light. (Fast-forward the video to 4:28 for said scene.)

According to Mobileye, the incident was not a software bug in the car. Instead, it was triggered by electromagnetic interference (EMI) between a wireless camera used by the TV crew and the traffic light’s wireless transponder. Mobileye had equipped the traffic light with a wireless transponder — for extra safety — on the route that the AV was scheduled to drive in the demo. As a result, crossed signals from the two wireless sources befuddled the car. The AV actually slowed down at the sight of a red light, but then zipped on through."

On a similar theme, the NTSB publishes a preliminary analysis of the Uber self-driving car crash that killed a woman in Arizona in March 2018:

According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

SystemPlus on iPhone X Color Sensor

SystemPlus reverse engineering shows a difference between iPhone X color sensors and other AMS spectral sensors:

Wednesday, May 23, 2018

Nice Animations

Lucid Vision Labs publishes nice animations explaining how Sony's readout with dual analog/digital CDS works:

Sony Exmor rolling shutter sensor
Same pipeline but with global shutter (Pregius)
1st stage - analog domain CDS
2nd stage - digital domain CDS

There are a few more animations on the company's site, as well as a pictorial image sensor tutorial.
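
As a rough sketch of the idea behind the animations (a deliberately simplified model, not Sony's actual circuit), the first CDS stage cancels the per-pixel reset offset in the analog domain, and the second stage cancels the column ADC offset in the digital domain:

```python
# Deliberately simplified two-stage CDS model (not Sony's actual circuit).

def adc(voltage, lsb=1e-3, column_offset=0.004):
    """Idealized column ADC with a fixed per-column offset error."""
    return round((voltage + column_offset) / lsb)

def two_stage_cds(reset_level, signal_level):
    # Stage 1 (analog CDS): subtract the sampled reset level from the signal
    # level before conversion, cancelling the pixel reset (kTC) offset.
    analog_diff = reset_level - signal_level
    # Stage 2 (digital CDS): convert a zero reference and the difference,
    # then subtract the codes, cancelling the column ADC offset.
    return adc(analog_diff) - adc(0.0)

print(two_stage_cds(reset_level=1.100, signal_level=0.900))  # ~200 codes
```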

Tuesday, May 22, 2018

NHK Presents 8K Selenium Sensor

The NHK Open House, to be held on May 24-27, exhibits an 8K avalanche-multiplying crystalline selenium image sensor:

"Electric charge generated by incident light are increased by avalanche multiplication phenomenon inside the photoelectric conversion film. The film can be overlaid on a CMOS circuit with a low breakdown voltage because avalanche multiplication occurs at low voltage in crystalline selenium, which can absorb a sufficient amount of light even when thin."

A paper on the crystalline selenium-based image sensor was published in 2015.

Pixelligent Raises $7.6M for Nanoparticle Microlens

BusinessWire: Baltimore, MD-based Pixelligent's capped zirconium oxide (ZrO2) nanoparticles, a high refractive index inorganic material with a sub-10 nm diameter and a functionalized surface, are said to have the potential to contribute to the sensitivity of CMOS image sensors. The company announces $7.6M in new funding to help further drive product commercialization and accelerate global customer adoption.

Although Pixelligent lenses for image sensor applications were announced a couple of years ago, there is no such product on the market yet, to the best of my knowledge. In 2013, the company President & CEO Craig Bandes said: "During the past 12 months we have seen a tremendous increase in demand for our nanocrystal dispersions spanning the CMOS Image Sensor, ITO, LED, OLED and Flat Panel Display markets. This demand is coming from customers around the globe with the fastest growth being realized in Asia. In the first quarter of 2013, we began shipping our first commercial orders and currently have more than 30 customers at various stages of product qualification."

Sony Image Sensor Business Strategy

Sony IR Day 2018, held on May 22 (today), includes quite a detailed presentation on the company's semiconductor business targets and strategy. From the official Sony PR:

"In the area of CMOS image sensors that capture the real world in which we all live, and are vital to KANDO content creation, aim to maintain Sony’s global number one position in imaging applications, and become the global leader in sensing.

Through the key themes of KANDO - to move people emotionally - and "getting closer to people," Sony will aim to sustainably generate societal value and high profitability across its three primary business areas of electronics, entertainment, and financial services. It will pursue this strategy based on the following basic principles.

CMOS image sensors are key component devices in growth industries such as the Internet of Things, artificial intelligence, autonomous vehicles, and more. Sony's competitive strength in this area is based on its wealth of technological expertise in analog semiconductors, cultivated over many years from the charge-coupled device (CCD) era. Sony aims to maintain its global number one position in imaging and in the longer term become the number one in sensing applications. To this end, Sony will extend its development of sensing applications beyond the area of smartphones, into new domains such as automotive use.

...based on its desire to contribute to safety in the self-driving car era, Sony will work to further develop its imaging and sensing technologies."

Monday, May 21, 2018

Hamamatsu Sensors in Automotive Applications

Hamamatsu publishes a nice article "Photonics for advanced car technologies" showing many applications for its light sensors:

Samsung Presentation

The Samsung System LSI Investor Presentation dated April 30, 2018 shows the company's success in the image sensor business:

  • 1/3 of global smartphones use ISOCELL image sensors
  • 4.6 out of 10 Chinese smartphones use ISOCELL sensors
  • 28nm image sensor process

Sunday, May 20, 2018

Anafocus Keynote at EI 2018

The Electronic Imaging Symposium publishes the keynote "Sub-Electron Low-Noise CMOS Image Sensors: Large Format, Fast, 0.5e-rms CIS with Oversampled 2‐Stage ADCs" by J. A. Segovia, F. Medeiro, A. Gonzales, A. Vellegas, and A. Rodriguez-Vazquez of Teledyne-Anafocus and Universidad de Sevilla.
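
The title hints at the main lever: oversampling the pixel and averaging multiple conversions. A toy simulation (my own model and numbers, not the architecture described in the keynote) shows how averaging N uncorrelated reads scales read noise down by roughly sqrt(N), consistent with reaching the sub-electron range from a ~1.5 e-rms single read:

```python
# Toy model: averaging N uncorrelated reads reduces read noise by ~sqrt(N).
# Assumed single-read noise of 1.5 e- rms; not the keynote's ADC architecture.
import random

def read_noise_rms(samples_per_pixel, single_read_noise_e=1.5, trials=20000):
    errors = []
    for _ in range(trials):
        reads = [random.gauss(0.0, single_read_noise_e)
                 for _ in range(samples_per_pixel)]
        errors.append(sum(reads) / samples_per_pixel)
    return (sum(e * e for e in errors) / trials) ** 0.5

print(read_noise_rms(1))   # ~1.5 e- rms
print(read_noise_rms(9))   # ~0.5 e- rms (1.5 / sqrt(9))
```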

Saturday, May 19, 2018

Omnivision Keynote at EI 2018

Electronic Imaging publishes the Omnivision keynote presentation "Automotive Image Sensors" by Boyd Fowler and Johannes Solhusvik. The presentation covers many areas, from HDR to LiDARs: