Woodside Capital Partners releases the November 2018 update of its "Autonomous Vehicles Technology Report." A few interesting slides take a somewhat pessimistic view of automotive LiDAR's short-term prospects:
Friday, November 30, 2018
Yole Interviews XenomatiX CEO
Yole Développement analyst Alexis Debray publishes an interview with Filip Geuens, CEO of LiDAR maker XenomatiX. Some interesting quotes:
"We see 2 main categories in the market: global illumination LiDARs (also called flash LiDAR) and scanning-beam LiDAR. The scanning can come from an optical phase array or from a rotating mirror, oscillating mirror or other mechanical device.
Our XenoLidar does not fit in either of these 2 categories. XenoLidar uses multi-beam illumination. Just like global illumination, we measure the scene in one shot and with high resolution, but in a much more efficient way, as we only need a fraction of the energy a flash system needs. In practice, this means we can cover a much larger range for the same energy.
In the end it is a balancing exercise. We believe we have the best mix of what is critical for automotive in terms of cost, reliability, resolution, efficiency and size (in order of importance).
Today, an important bottleneck is the lack of decision-making. Many people get confused by the diversity of make-believe solutions and by initiatives that failed to deliver on their promises. That is slowing down adoption. Too many parties are sitting on the fence, waiting for a leader to pick a solution.
We deal with this by putting evidence on the table. Being able to back up performance statements with functional products is our response. However, early adopters are still needed to help the technology mature further, moving from the technology level to the application level."
"We see 2 main categories in the market: global illumination LiDARs (also called flash LiDAR) and scanning-beam LiDAR. The scanning can come from an optical phase array or from a rotating mirror, oscillating mirror or other mechanical device.
Our XenoLidar does not fit in any of these 2 categories. XenoLidar uses multi-beam. Just like global illumination, we measure the scene in one shot and with high resolution, but in a much more efficient way as we only need a fraction of the energy a flash system needs. This actually translates into the fact that we can cover a much larger range for the same energy.
In the end it is a balancing exercise. We believe we have the best mix of what is critical for automotive in terms of cost, reliability, resolution, efficiency and size (in order of importance).
Today, an important bottleneck is the lack of decision taking. Many people get confused by the diversity of make-believe solutions and by initiatives that failed to deliver on their promises. That is slowing down adoption. Too many parties are sitting on the fence and waiting for a leader to pick a solution.
We deal with this by putting evidence on the table. Being able to back-up performance statements with functional products is our response. However, early adopters are still needed to help the technology to mature further, moving from technology level to application level."
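Geuens' energy argument can be quantified with the basic LiDAR link budget: the return signal falls off as 1/R², so for a fixed detection threshold the maximum range grows with the square root of the optical power concentrated on a scene point. A minimal sketch under that assumption (all numbers hypothetical, not XenomatiX specifications):

```python
import math

def relative_max_range(power_fraction: float) -> float:
    """Relative maximum range when a given fraction of the total
    optical power lands on one scene point, assuming the return
    signal scales as P / R^2 and the detection threshold is fixed."""
    return math.sqrt(power_fraction)

# Hypothetical comparison: a flash LiDAR spreads its energy over the
# whole field of view (say 10,000 resolvable points at once), while a
# multi-beam system concentrates the same energy into 100 points per shot.
flash = relative_max_range(1 / 10_000)
multibeam = relative_max_range(1 / 100)
print(multibeam / flash)  # -> 10.0: ~10x range for the same emitted energy
```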
Fraunhofer Vision SoC vs Event-Based Sensors
A Fraunhofer presentation at the Vision Show, held in Stuttgart, Germany, on Nov 6-8, 2018, offers a different approach to data minimization in machine vision applications. To simplify its use, the Fraunhofer embedded vision sensor even offers a Python-like scripting interface:
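The scripting API itself is not reproduced in the presentation, so purely as an illustration of on-sensor data minimization, here is a hypothetical Python-style script in the same spirit: instead of streaming full frames, the device emits only the tiles that changed. All names and thresholds below are invented.

```python
import numpy as np

def changed_tiles(prev: np.ndarray, curr: np.ndarray,
                  tile: int = 32, threshold: float = 8.0):
    """Emit only image tiles whose mean absolute change exceeds a
    threshold -- the kind of on-sensor data reduction a scriptable
    vision SoC could run before anything leaves the chip."""
    h, w = curr.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            delta = np.abs(curr[y:y+tile, x:x+tile].astype(np.int16)
                           - prev[y:y+tile, x:x+tile].astype(np.int16))
            if delta.mean() > threshold:
                yield (x, y, curr[y:y+tile, x:x+tile])

# Usage: only the moving region crosses the sensor interface.
prev = np.zeros((480, 640), dtype=np.uint8)
curr = prev.copy()
curr[100:150, 200:260] = 200                       # a bright object appears
print(sum(1 for _ in changed_tiles(prev, curr)))   # a few tiles, not all 300
```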
Prophesee too presented its Event-Driven Sensor at the Vision Show:
Thanks to TL for the pointers!
Thursday, November 29, 2018
ST Automotive HDR GS Sensor Presentation
ST presentation "Automotive In-cabin Sensing Solutions" by Nicolas Roux details the company's GS HDR sensor technology, including the dual ADCs and dual pixel memory:
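The slides are not reproduced here, but a common reason for dual ADCs and dual in-pixel storage is to read the same global-shutter frame twice (e.g., long and short exposure, or two gains) and merge the readouts into one HDR image. A minimal sketch of such a merge, as a generic illustration rather than ST's actual pipeline:

```python
import numpy as np

def merge_hdr(long_exp: np.ndarray, short_exp: np.ndarray,
              ratio: float = 16.0, sat: int = 4000) -> np.ndarray:
    """Combine two readouts of the same frame: use the long exposure
    where it is not saturated, otherwise the short exposure scaled by
    the exposure ratio. Output is linear, with ~ratio x extended range."""
    long_exp = long_exp.astype(np.float32)
    short_exp = short_exp.astype(np.float32)
    return np.where(long_exp < sat, long_exp, short_exp * ratio)

# Usage with fake 12-bit data: the pixel that clips in the long
# exposure is recovered from the short one.
long_exp = np.array([[100, 4095], [1500, 4095]], dtype=np.uint16)
short_exp = np.array([[6, 300], [94, 255]], dtype=np.uint16)
print(merge_hdr(long_exp, short_exp))
```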
Call for Nominations for 2019 Walter Kosonocky Award
International Image Sensors Society calls for nominations for the 2019 Walter Kosonocky Award for significant advancement in solid-state image sensors.
The Walter Kosonocky Award is presented biennially for THE BEST PAPER presented in any venue during the prior two years representing significant advancement in solid-state image sensors. The award commemorates the many important contributions made by the late Dr. Walter Kosonocky to the field of solid-state image sensors.
Founded in 1997 by his colleagues in industry, government and academia, the award is also funded by proceeds from the International Image Sensor Workshop. The award is selected from nominated papers by the Walter Kosonocky Award Committee, announced and presented at the International Image Sensor Workshop (IISW), and sponsored by the International Image Sensor Society (IISS).
The nominations for the 2019 award should be sent to Rihito Kuroda, Chair of the IISS Award Committee, with a pdf file of the nominated paper (that you judge is the best paper published/presented in calendar years 2017 and 2018) as well as a brief description (less than 100 words) of your reason for nominating the paper. Nomination of a paper from your own company/institute is also welcome.
The deadline for receiving nominations is February 18th, 2019.
Wednesday, November 28, 2018
ZTE Nubia X Smartphone Reverses Multi-Camera Trend
Tom's Guide: the ZTE Nubia X, with displays on both the front and the back, eliminates the need for a front camera, as the main rear camera is used for selfies too:
GSM Arena reports that Vivo, one of the largest smartphone manufacturers, is about to roll out a similar model with no separate selfie camera, the NEX 2:
Plasma Dicing Benefits
Panasonic Industrial presentation shows the advantages of plasma dicing for image sensors:
Veeco Ultratech promotes its IR alignment system:
Yole on Consumer Biometrics
Yole Developpement report "Consumer Biometrics: Market and Technologies Trends 2018" forecasts:
"As anticipated by Yole Développement (Yole) in mid-2016, biometry’s “second wave” began with the introduction of the iPhone X in September 2017, when Apple set the standard for technological advancement (and use-cases) for 3D sensing in consumer. Apple conceived a complex assembly of camera modules and VCSEL light sources using structured light principles, along with an innovative NIR global shutter image sensor from STMicroelectronics to perform secure 3D facial recognition. This second wave, led by biometry with 3D sensing, is ongoing and will increase market value toward $17B by 2022.
But biometry is not only a matter of fingerprint or face detection; it also covers iris and voice recognition. Regarding the overall breakdown of biometric recognition, Yole estimates that the proportion of each type of detection will be quite unbalanced in the future, with 60% of biometric modules in volume coming from face recognition modules, while fingerprint (40%) will see its value decrease over time due to competition and alternative implementations leading to cost reduction."
"As anticipated by Yole Développement (Yole) in mid-2016, biometry’s “second wave” began with the introduction of the iPhone X in September 2017, when Apple set the standard for technological advancement (and use-cases) for 3D sensing in consumer. Apple conceived a complex assembly of camera modules and VCSEL light sources using structured light principles, along with an innovative NIR global shutter image sensor from STMicroelectronics to perform secure 3D facial recognition. This second wave, led by biometry with 3D sensing, is ongoing and will increase market value toward $17B by 2022.
But biometry is not only a matter of fingerprint or face detection but also iris and voice recognition, regarding the overall breakdown of biometry recognition, Yole estimates that the proportion of each type of detection will be quite unbalanced in the future, with 60% of biometric module in volume coming from face recognition module, while fingerprint (40%) will see a decrease over time of its value due to competition and alternative implementation leading to cost reduction."
Tuesday, November 27, 2018
RGB-IR CFA Optimizations
Tokyo Institute of Technology and Olympus publish a paper "Single-Sensor RGB-NIR Imaging: High-Quality System Design and Prototype Implementation" by Yusuke Monno, Hayato Teranaka, Kazunori Yoshizaki, Masayuki Tanaka, and Masatoshi Okutomi.
"In recent years, many applications using a set of RGB and near-infrared (NIR) images, also called an RGB-NIR image, have been proposed. However, RGB-NIR imaging, i.e., simultaneous acquisition of RGB and NIR images, is still a laborious task because existing acquisition systems typically require two sensors or shots. In contrast, single-sensor RGB-NIR imaging using an RGB-NIR sensor, which is composed of a mosaic of RGB and NIR pixels, provides a practical and low-cost way of one-shot RGB-NIR image acquisition. In this paper, we investigate high-quality system designs for single-sensor RGBNIR imaging. We first present a system evaluation framework using a new hyperspectral image dataset we constructed. Different from existing work, our framework takes both the RGB-NIR sensor characteristics and the RGB-NIR imaging pipeline into account. Based on the evaluation framework, we then design each imaging factor that affects the RGB-NIR imaging quality and propose the best-performed system design. We finally present the configuration of our developed prototype RGB-NIR camera, which was implemented based on the best system design, and demonstrate several potential applications using the prototype."
"In recent years, many applications using a set of RGB and near-infrared (NIR) images, also called an RGB-NIR image, have been proposed. However, RGB-NIR imaging, i.e., simultaneous acquisition of RGB and NIR images, is still a laborious task because existing acquisition systems typically require two sensors or shots. In contrast, single-sensor RGB-NIR imaging using an RGB-NIR sensor, which is composed of a mosaic of RGB and NIR pixels, provides a practical and low-cost way of one-shot RGB-NIR image acquisition. In this paper, we investigate high-quality system designs for single-sensor RGBNIR imaging. We first present a system evaluation framework using a new hyperspectral image dataset we constructed. Different from existing work, our framework takes both the RGB-NIR sensor characteristics and the RGB-NIR imaging pipeline into account. Based on the evaluation framework, we then design each imaging factor that affects the RGB-NIR imaging quality and propose the best-performed system design. We finally present the configuration of our developed prototype RGB-NIR camera, which was implemented based on the best system design, and demonstrate several potential applications using the prototype."
Espros ToF Face ID Module
Espros November 2018 Newsletter shows the company's ToF module for face recognition in smartphones:
"The USPs of the epc660 chip - very high NIR sensitivity (>80% @ 850nm) as well as the capability of suppressing strong ambient light in the charge domain - make it in a favorite choice for miniaturized mobile applications. High sensitivity means saving battery power and allows eye-safe operation due to fact that the active illumination can be designed to be less powerful. Ambient light acceptance is a key factor and a challenge for devices although they are used outdoor in a full sunlight environment.
The slim bare-die chip-scale package with an overall thickness of 0.23mm with solder balls (CSP) allows to design modules for the thinnest mobile applications. The package allows to scale down the whole the complete module not just in size but also in cost."
"The USPs of the epc660 chip - very high NIR sensitivity (>80% @ 850nm) as well as the capability of suppressing strong ambient light in the charge domain - make it in a favorite choice for miniaturized mobile applications. High sensitivity means saving battery power and allows eye-safe operation due to fact that the active illumination can be designed to be less powerful. Ambient light acceptance is a key factor and a challenge for devices although they are used outdoor in a full sunlight environment.
The slim bare-die chip-scale package with an overall thickness of 0.23mm with solder balls (CSP) allows to design modules for the thinnest mobile applications. The package allows to scale down the whole the complete module not just in size but also in cost."
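The epc660 is an indirect ToF imager, and the newsletter does not detail its math, but the textbook 4-phase calculation shows why constant ambient light can cancel in the charge domain: it appears equally in all four samples and drops out of the differences. A sketch of the standard formula (not Espros' actual implementation):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(a0, a1, a2, a3, f_mod=20e6):
    """Textbook 4-phase indirect ToF: four correlation samples at
    0/90/180/270 degrees give the phase of the returned light.
    A constant ambient offset is identical in all four samples and
    cancels in (a1 - a3) and (a0 - a2)."""
    phase = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# Usage: simulate a target at ~1.5 m with 20 MHz modulation
# (unambiguous range c / (2 * f_mod) ~ 7.5 m) plus a large constant
# ambient offset of 500 counts, which drops out as expected.
true_phase = 2 * math.pi * 1.5 / 7.5
samples = [500 + 100 * math.cos(true_phase - k * math.pi / 2) for k in range(4)]
print(itof_distance(*samples))  # ~1.5
```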
Monday, November 26, 2018
Four Generations of Camera Module Testers
Pamtek presents 4 generations of its testing systems for camera modules:
Sunday, November 25, 2018
SiOnyx Camera Review
DPReview publishes a review of SiOnyx Aurora night vision camera with Black Silicon sensor. The conclusion is:
"Does the SiOnyx Aurora let me see things in the dark that I can't see with the unaided eye? Absolutely: the infrared sensitivity makes a big difference and, hence, my stress on the night vision capability of this device. The fact that you can also capture what you see is a plus. For me it was capturing Northern Lights, but I'm also looking forward to capturing surface lava flows in Hawaii, bioluminescence in Puerto Rico, as well as other phenomena around the world."
"Does the SiOnyx Aurora let me see things in the dark that I can't see with the unaided eye? Absolutely: the infrared sensitivity makes a big difference and, hence, my stress on the night vision capability of this device. The fact that you can also capture what you see is a plus. For me it was capturing Northern Lights, but I'm also looking forward to capturing surface lava flows in Hawaii, bioluminescence in Puerto Rico, as well as other phenomena around the world."
Teledyne IR Sensors for Space Missions
Teledyne presentation on IR detectors for space missions by Paul Jerram and James Beletic shows the company's project examples:
Thursday, November 22, 2018
3D Stacked SPAD Array in 45nm Process
IEEE Journal of Selected Topics in Quantum Electronics publishes an open access paper "High-Performance Back-Illuminated Three-Dimensional Stacked Single-Photon Avalanche Diode Implemented in 45-nm CMOS Technology" by Myung-Jae Lee, Augusto Ronchini Ximenes, Preethi Padmanabhan, Tzu-Jui Wang, Kuo-Chin Huang, Yuichiro Yamashita, Dun-Nian Yaung, and Edoardo Charbon from EPFL, Delft University of Technology, and TSMC.
"We present a high-performance back-illuminated three-dimensional stacked single-photon avalanche diode (SPAD), which is implemented in 45-nm CMOS technology for the first time. The SPAD is based on a P + /Deep N-well junction with a circular shape, for which N-well is intentionally excluded to achieve a wide depletion region, thus enabling lower tunneling noise and better timing jitter as well as a higher photon detection efficiency and a wider spectrum. In order to prevent premature edge breakdown, a P-type guard ring is formed at the edge of the junction, and it is optimized to achieve a wider photon-sensitive area. In addition, metal-1 is used as a light reflector to improve the detection efficiency further in backside illumination. With the optimized 3-D stacked 45-nm CMOS technology for back-illuminated image sensors, the proposed SPAD achieves a dark count rate of 55.4 cps/μm 2 and a photon detection probability of 31.8% at 600 nm and over 5% in the 420-920 nm wavelength range. The jitter is 107.7 ps full width at half-maximum with negligible exponential diffusion tail at 2.5 V excess bias voltage at room temperature. To the best of our knowledge, these are the best results ever reported for any back-illuminated 3-D stacked SPAD technologies."
"We present a high-performance back-illuminated three-dimensional stacked single-photon avalanche diode (SPAD), which is implemented in 45-nm CMOS technology for the first time. The SPAD is based on a P + /Deep N-well junction with a circular shape, for which N-well is intentionally excluded to achieve a wide depletion region, thus enabling lower tunneling noise and better timing jitter as well as a higher photon detection efficiency and a wider spectrum. In order to prevent premature edge breakdown, a P-type guard ring is formed at the edge of the junction, and it is optimized to achieve a wider photon-sensitive area. In addition, metal-1 is used as a light reflector to improve the detection efficiency further in backside illumination. With the optimized 3-D stacked 45-nm CMOS technology for back-illuminated image sensors, the proposed SPAD achieves a dark count rate of 55.4 cps/μm 2 and a photon detection probability of 31.8% at 600 nm and over 5% in the 420-920 nm wavelength range. The jitter is 107.7 ps full width at half-maximum with negligible exponential diffusion tail at 2.5 V excess bias voltage at room temperature. To the best of our knowledge, these are the best results ever reported for any back-illuminated 3-D stacked SPAD technologies."
SOI ToF Sensor for LiDAR
MDPI publishes a paper "A Back-Illuminated Time-of-Flight Image Sensor with SOI-Based Fully Depleted Detector Technology for LiDAR Application" by Sanggwon Lee, Keita Yasutomi, Ho Hai Nam, Masato Morita, and Shoji Kawahito from Shizuoka University.
"A back-illuminated time-of-flight (ToF) image sensor based on a 0.2 µm silicon-on-insulator (SOI) CMOS detector technology using fully-depleted substrate is developed for the light detection and ranging (LiDAR) applications. A fully-depleted 200 µm-thick bulk silicon is used for the higher quantum efficiency (QE) in a near-infrared (NIR) region. The developed SOI pixel structure has a 4-tapped charge modulator with a draining function to achieve a higher range resolution and to cancel background light signal. A distance is measured up to 27 m with a range resolution of 12 cm at the outdoor and average light power density is 150 mW/m2@30 m."
"A back-illuminated time-of-flight (ToF) image sensor based on a 0.2 µm silicon-on-insulator (SOI) CMOS detector technology using fully-depleted substrate is developed for the light detection and ranging (LiDAR) applications. A fully-depleted 200 µm-thick bulk silicon is used for the higher quantum efficiency (QE) in a near-infrared (NIR) region. The developed SOI pixel structure has a 4-tapped charge modulator with a draining function to achieve a higher range resolution and to cancel background light signal. A distance is measured up to 27 m with a range resolution of 12 cm at the outdoor and average light power density is 150 mW/m2@30 m."
Isaiah Research Forecasts Triple Camera Adoption in Smartphones
IFNewsflash: Isaiah Research raises its previous forecast of dual- and triple-camera adoption in the 2019 smartphone market: