Monday, May 28, 2018

Energias Market Research on CIS Market

Energias Market Research expects the image sensor market to grow significantly, from $14.1b in 2017 to $25.6b in 2024 at a CAGR of 10.3%, although high manufacturing costs may hamper this growth.

"3D image sensor is a fastest growing market due to growing applications in computer vision and machine vision. On the basis of spectrum, non-visible spectrum is anticipated to grow with a highest CAGR owing to increasing usage in medical, automotive, consumer electronics and industrial applications. Based on application, consumer electronics segment is expected to account for highest market share in image sensor market."

Sunday, May 27, 2018

Pixart Q1 2018 Report

Pixart's Q1 2018 report shows that, despite its diversification efforts, the company remains mainly an optical mouse sensor provider:

Saturday, May 26, 2018

ESA Pays $47M to e2v to Supply 114 CCDs for Plato Mission

The European Space Agency (ESA) has awarded Teledyne e2v the second phase of a €42M ($47M) contract to produce visible light CCDs for the PLATO (Planetary Transits and Oscillations of stars) mission. PLATO is a planet-hunting spacecraft that will seek out and study Earth-like exoplanets around Sun-like stars.

Teledyne e2v completed the first manufacturing phase of the contract, including the production of CCD wafers and the procurement and production of other key items. After a successful review of the first phase, Teledyne e2v has been authorised to start work on phase two of this prestigious contract. This includes manufacturing the wafers and the assembly, test and delivery of 114 CCDs. Together, they will form the biggest optical array ever to be launched into space (currently planned for 2026).

PLATO will be made up of 26 telescopes mounted on a single satellite platform. Each telescope will contain four 20MP Teledyne e2v CCDs in both full-frame and frame-transfer variants, for a full satellite total of 2.12 Gpixels. This is over twice the equivalent number for GAIA, the largest camera currently in space. As with GAIA, all of the PLATO CCD image sensors will be designed and produced in Chelmsford, UK.
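The 2.12 Gpixel figure follows directly from the camera count. In the sketch below, the 4510 x 4510 CCD format is an assumption consistent with the quoted ~20MP devices, not a number from the press release:

  # Rough pixel-count check for the PLATO focal plane
  telescopes = 26
  ccds_per_telescope = 4
  pixels_per_ccd = 4510 * 4510           # ~20.3 MP per CCD (assumed format)

  total = telescopes * ccds_per_telescope * pixels_per_ccd
  print(f"{total / 1e9:.2f} Gpixels")    # ~2.12 Gpixels, matching the quoted total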

Friday, May 25, 2018

Pulse-Based ToF Sensing

The MDPI Special Issue on Depth Sensors and 3D Vision publishes the University of Siegen (Germany) paper "Pulse Based Time-of-Flight Range Sensing" by Hamed Sarbolandi, Markus Plack, and Andreas Kolb.

"Pulse-based Time-of-Flight (PB-ToF) cameras are an attractive alternative range imaging approach, compared to the widely commercialized Amplitude Modulated Continuous-Wave Time-of-Flight (AMCW-ToF) approach. This paper presents an in-depth evaluation of a PB-ToF camera prototype based on the Hamamatsu area sensor S11963-01CR. We evaluate different ToF-related effects, i.e., temperature drift, systematic error, depth inhomogeneity, multi-path effects, and motion artefacts. Furthermore, we evaluate the systematic error of the system in more detail, and introduce novel concepts to improve the quality of range measurements by modifying the mode of operation of the PB-ToF camera. Finally, we describe the means of measuring the gate response of the PB-ToF sensor and using this information for PB-ToF sensor simulation."

Olympus Multi-Storied Photodiode Sensor

The MDPI Special Issue on the 2017 International Image Sensor Workshop (IISW) publishes the Olympus paper "Multiband Imaging CMOS Image Sensor with Multi-Storied Photodiode Structure" by Yoshiaki Takemoto, Mitsuhiro Tsukimura, Hideki Kato, Shunsuke Suzuki, Jun Aoki, Toru Kondo, Haruhisa Saito, Yuichi Gomi, Seisuke Matsuda, and Yoshitaka Tadaki.

"We developed a multiband imaging CMOS image sensor (CIS) with a multi-storied photodiode structure, which comprises two photodiode (PD) arrays that capture two different images, visible red, green, and blue (RGB) and near infrared (NIR) images at the same time. The sensor enables us to capture a wide variety of multiband images which is not limited to conventional visible RGB images taken with a Bayer filter or to invisible NIR images. Its wiring layers between two PD arrays can have an optically optimized effect by modifying its material and thickness on the bottom PD array. The incident light angle on the bottom PD depends on the thickness and structure of the wiring and bonding layer, and the structure can act as an optical filter. Its wide-range sensitivity and optimized optical filtering structure enable us to create the images of specific bands of light waves in addition to visible RGB images without designated pixels for IR among same pixel arrays without additional optical components. Our sensor will push the envelope of capturing a wide variety of multiband images."

Concept of multi-storied photodiode CMOS sensor based on 3D stacked technology. There are two layers of PD arrays, one in the top and the other in the bottom semiconductor. The top PD array converts a part of the incident light into signals and works as an optical filter for the bottom PD array. The bottom PD array converts light that penetrates through the top substrate into signals, which means the top substrate acts mainly as a visible light sensor and the bottom one mainly as an IR light sensor.
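A rough Beer-Lambert sketch illustrates why the top substrate naturally filters visible light while NIR reaches the bottom PD array. The silicon penetration depths and the top-layer thickness below are illustrative assumptions, not Olympus' actual stack parameters:

  import math

  # Approximate room-temperature penetration depths in silicon (assumed values)
  PENETRATION_UM = {"blue 450nm": 0.4, "green 550nm": 1.5, "red 650nm": 3.3, "NIR 850nm": 18.0}
  top_pd_thickness_um = 3.0   # illustrative top-PD silicon thickness

  for band, depth in PENETRATION_UM.items():
      absorbed_top = 1.0 - math.exp(-top_pd_thickness_um / depth)
      print(f"{band}: {absorbed_top:.0%} absorbed in top PD, "
            f"{1 - absorbed_top:.0%} reaches the bottom PD")

With these numbers the top layer absorbs nearly all blue, most green, and much of the red, while the majority of 850nm light passes through to the bottom array.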

Black Phosphorus NIR Photodetectors

MDPI Sensors publishes a paper "Multilayer Black Phosphorus Near-Infrared Photodetectors" by Chaojian Hou, Lijun Yang, Bo Li, Qihan Zhang, Yuefeng Li, Qiuyang Yue, Yang Wang, Zhan Yang, and Lixin Dong from Harbin Institute of Technology (China), Michigan State University (USA), and Soochow University (China).

"Black phosphorus (BP), owing to its distinguished properties, has become one of the most competitive candidates for photodetectors. However, there has been little attention paid on photo-response performance of multilayer BP nanoflakes with large layer thickness. In fact, multilayer BP nanoflakes with large layer thickness have greater potential from the fabrication viewpoint as well as due to the physical properties than single or few layer ones. In this report, the thickness-dependence of the intrinsic property of BP photodetectors in the dark was initially investigated. Then the photo-response performance (including responsivity, photo-gain, photo-switching time, noise equivalent power, and specific detectivity) of BP photodetectors with relative thicker thickness was explored under a near-infrared laser beam (╬╗IR = 830 nm). Our experimental results reveal the impact of BP’s thickness on the current intensity of the channel and show degenerated p-type BP is beneficial for larger current intensity. More importantly, the photo-response of our thicker BP photodetectors exhibited a larger responsivity up to 2.42 A/W than the few-layer ones and a fast response photo-switching speed (response time is ~2.5 ms) comparable to thinner BP nanoflakes was obtained, indicating BP nanoflakes with larger layer thickness are also promising for application for ultra-fast and ultra-high near-infrared photodetectors."

Unfortunately, no spectral response or QE measurements are published. The EQE graphs at 830nm show a large internal gain of the photosensitive FET structures:
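A quick responsivity-to-EQE conversion is consistent with that: the reported 2.42 A/W at 830nm corresponds to an external quantum efficiency of roughly 360%, which is only possible with internal (photoconductive) gain. A minimal sketch:

  # Convert responsivity to EQE: EQE = R * h*c / (q * lambda) = R * 1239.84 / lambda[nm]
  def eqe_from_responsivity(r_a_per_w, wavelength_nm):
      return r_a_per_w * 1239.84 / wavelength_nm

  print(eqe_from_responsivity(2.42, 830))   # ~3.6, i.e. >100% EQE, implying gain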

Thursday, May 24, 2018

AEye Introduces Dynamic Vixels

PRNewswire: AEye introduces a new sensor data type called Dynamic Vixels. In simple terms, Dynamic Vixels combine pixels from digital 2D cameras with voxels from AEye's Agile 3D LiDAR sensor into a single super-resolution sensor data type.
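AEye does not disclose the actual data layout, but conceptually a fused pixel-plus-voxel record might look something like the purely hypothetical sketch below (field names and types are illustrative, not AEye's format):

  from dataclasses import dataclass

  # Hypothetical fused camera-pixel + LiDAR-voxel sample
  @dataclass
  class FusedSample:
      x: float          # LiDAR voxel position, metres (sensor frame)
      y: float
      z: float
      intensity: float  # LiDAR return intensity
      r: int            # co-registered camera pixel colour
      g: int
      b: int
      t: float          # acquisition timestamp, seconds

  sample = FusedSample(x=12.4, y=-1.7, z=0.9, intensity=0.32, r=128, g=96, b=80, t=0.0417)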

"There is an ongoing argument about whether camera-based vision systems or LiDAR-based sensor systems are better," said Luis Dussan, Founder and CEO of AEye. "Our answer is that both are required – they complement each other and provide a more complete sensor array for artificial perception systems. We know from experience that when you fuse a camera and LiDAR mechanically at the sensor, the integration delivers data faster, more efficiently and more accurately than trying to register and align pixels and voxels in post-processing. The difference is significantly better performance."

"There are three best practices we have adopted at AEye," said Blair LaCorte, Chief of Staff. "First: never miss anything; second: not all objects are equal; and third: speed matters. Dynamic Vixels enables iDAR to acquire a target faster, assess a target more accurately and completely, and track a target more efficiently – at ranges of greater than 230m with 10% reflectivity."

Qualcomm Snapdragon 710 Supports 6 Cameras, ToF Sensing, More

The Qualcomm 10nm Snapdragon 710 processor features a number of advanced imaging capabilities:
  • Qualcomm Spectra 250 ISP
  • 2nd Generation Spectra architecture
  • 14-bit image signal processing
  • Up to 32MP single camera
  • Up to 20MP dual camera
  • Can connect up to 6 different cameras (many configurations possible)
  • Multi-Frame Noise Reduction (MFNR) with accelerated image stabilization
  • Hybrid Autofocus with support for dual phase detection (2PD) sensors
  • Ultra HD video capture (4K at 30 fps) with Motion Compensated Temporal Filtering (MCTF)
  • Takes 4K Ultra HD video at up to 40% lower power
  • 3D structured light and time of flight active depth sensing

Mobileye Autonomous Car Fails in Demo

EETimes' Junko Yoshida publishes an explanation of the Mobileye self-driving car demo in which the car passes through a junction on a red light:

"The public AV demo in Jerusalem inadvertently allowed a local TV station’s video camera to capture Mobileye’s car running a red light. (Fast-forward the video to 4:28 for said scene.)

According to Mobileye, the incident was not a software bug in the car. Instead, it was triggered by electromagnetic interference (EMI) between a wireless camera used by the TV crew and the traffic light’s wireless transponder. Mobileye had equipped the traffic light with a wireless transponder — for extra safety — on the route that the AV was scheduled to drive in the demo. As a result, crossed signals from the two wireless sources befuddled the car. The AV actually slowed down at the sight of a red light, but then zipped on through."

On a similar theme, NTSB publishes a preliminary analysis of the Uber self-driving car crash that killed a woman in Arizona in March 2018:

According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
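For scale, a back-of-envelope check shows that at 43 mph the car covers about 25m in the 1.3 seconds between the braking determination and impact, roughly the distance needed just to come to a stop. The 0.7g emergency deceleration below is an assumption for hard braking on dry pavement, not an NTSB figure:

  # Back-of-envelope on the NTSB timeline
  v = 43 * 0.44704              # 43 mph in m/s, ~19.2 m/s
  t_warning = 1.3               # seconds before impact when braking was deemed necessary
  a = 0.7 * 9.81                # assumed emergency deceleration, m/s^2

  distance_remaining = v * t_warning      # ~25 m to the point of impact at that moment
  stopping_distance = v ** 2 / (2 * a)    # ~27 m needed to stop from 43 mph
  print(f"{distance_remaining:.0f} m remaining vs {stopping_distance:.0f} m to stop")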

SystemPlus on iPhone X Color Sensor

SystemPlus reverse engineering shows a difference between iPhone X color sensors and other AMS spectral sensors: