Sunday, July 21, 2019

ToF News: Broadcom, Renesas, Opnous

The ToF market is becoming rather crowded, with many companies entering it in anticipation of fast growth.

Broadcom AFBR-S50MV85G is an APD pixel-based distance and motion measurement ToF sensor. It supports up to 3,000 frames per second with up to 16 illuminated pixels. The sensor is aimed at industrial applications and gesture sensing and is said to have best-in-class ambient light suppression of up to 200k Lux, so use in outdoor environments should not be a problem.

  • Integrated 850 nm laser light source
  • 7 to 16 illuminated pixels
  • FoV of up to 12.4° × 6.2°
  • Very fast measurement rates of up to 3 kHz
  • Variable distance range up to 10 m
  • Operation in up to 200k Lux ambient light
  • Works well on all surface conditions
  • Laser Class 1 eye-safe ready
  • Accuracy better than 1%
  • Drop-in compatible within the AFBR-S50 sensor platform
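Whatever the modulation scheme, ToF ranging ultimately maps the round-trip travel time of light to distance. A minimal sketch of that relation (generic, not Broadcom-specific):

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_to_distance(t_seconds: float) -> float:
    """Convert a measured round-trip time of light to target distance."""
    return C * t_seconds / 2.0

# At the sensor's maximum 10 m range, the round trip takes only ~66.7 ns,
# which is why a 3 kHz measurement rate (333 us per frame) is feasible:
t = 2 * 10.0 / C
print(round_trip_to_distance(t))  # 10.0 m
```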

Renesas ISL29501 (Intersil) is a ToF processor with an external emitter and detector. The sensor operates on the i-ToF in-phase/out-of-phase principle:
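The i-ToF principle recovers distance from the phase shift of modulated light. A minimal sketch using the classic 4-tap demodulation (a generic illustration, not necessarily the ISL29501's exact pipeline):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(a0, a90, a180, a270, f_mod):
    """Distance from four phase-shifted correlation samples of the
    returned modulated light (classic 4-tap i-ToF demodulation)."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# Simulate a target 3 m away with a 10 MHz modulation frequency
# (unambiguous range c / (2 f) = ~15 m):
f_mod = 10e6
d_true = 3.0
phi = 4 * math.pi * f_mod * d_true / C
samples = [math.cos(phi - math.radians(k)) for k in (0, 90, 180, 270)]
print(itof_distance(*samples, f_mod))  # ≈ 3.0 m
```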

Shanghai, China-based Opnous offers a number of ToF sensors with different resolutions:

Saturday, July 20, 2019

Recent Image Sensor Videos: Omnivision, Prophesee, Intel

Omnivision publishes its CEO Boyd Fowler's explanation of HALE technology:

Prophesee CMO Guillaume Butin presents another use case of its event-driven sensors, vibration monitoring:

Other Prophesee videos explain differences between event-driven and frame-based sensors:

Intel explains how its coded light 3D camera works:

Friday, July 19, 2019

LiDAR News: Voyant Photonics, Aeye

Techcrunch: NYC-based LiDAR startup Voyant Photonics raises $4.3M investment from Contour Venture Partners, LDV Capital and DARPA. The founding team of the startup came from Lipson Nanophotonics Group at Columbia University.

"In the past, attempts in chip-based photonics to send out a coherent laser-like beam from a surface of lightguides (elements used to steer light around or emit it) have been limited by a low field of view and power because the light tends to interfere with itself at close quarters.

Voyant’s version of these “optical phased arrays” sidesteps that problem by carefully altering the phase of the light traveling through the chip."

“This is an enabling technology because it’s so small,” says Voyant CEO and co-founder Steven Miller. “We’re talking cubic centimeter volumes.

It’s a misconception that small lidars need to be low-performance. The silicon photonic architecture we use lets us build a very sensitive receiver on-chip that would be difficult to assemble in traditional optics. So we’re able to fit a high-performance lidar into that tiny package without any additional or exotic components. We think we can achieve specs comparable to lidars out there, but just make them that much smaller.”
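The phase-steering idea behind optical phased arrays can be illustrated with the textbook grating relation: a linear phase gradient across the emitters tilts the far-field beam (illustrative pitch and wavelength below, not Voyant's actual design parameters):

```python
import math

def opa_steering_angle(phase_step_rad, pitch_m, wavelength_m):
    """Far-field beam angle (degrees) of a 1-D optical phased array
    with a linear phase gradient: sin(theta) = dphi * lambda / (2 pi d)."""
    return math.degrees(math.asin(
        phase_step_rad * wavelength_m / (2 * math.pi * pitch_m)))

# e.g. 1550 nm light, 2 um emitter pitch, pi/4 phase step per element:
print(opa_steering_angle(math.pi / 4, 2e-6, 1550e-9))  # ≈ 5.56°
```

Sweeping the phase step electronically sweeps the beam, with no moving parts, which is why the whole scanner can fit in cubic-centimeter volumes.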

BusinessWire: AEye publishes a whitepaper "AEye Redefines the Three “R’s” of LiDAR – Rate, Resolution, and Range." Basically, it proposes to bend the performance metrics in such a way that AEye's LiDAR looks better:

Extended Metric #1: From Frame Rate to Object Revisit Rate

It is universally accepted that a single interrogation point, or shot, does not deliver enough confidence to verify a hazard. Therefore, passive LiDAR systems need multiple interrogations/detects on the same object or position over multiple frames to validate an object. New, intelligent LiDAR systems, such as AEye’s iDAR™, can revisit an object within the same frame. These agile systems can accelerate the revisit rate by allowing for intelligent shot scheduling within a frame, with the ability to interrogate an object or position multiple times within a conventional frame.

In addition, existing LiDAR systems are limited by the physics of fixed laser pulse energy, fixed dwell time, and fixed scan patterns. Next generation systems such as iDAR, are software definable by perception, path and motion planning modules so that they can dynamically adjust their data collection approach to best fit their needs. Therefore, Object Revisit Rate, or the time between two shots at the same point or set of points, is a more important and relevant metric than Frame Rate alone.
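The difference between the two metrics is easy to quantify. A toy model (hypothetical numbers, not AEye's published figures):

```python
def revisit_interval_ms(frame_rate_hz, shots_per_frame_on_object=1):
    """Time between successive interrogations of the same object.
    A conventional scanner revisits an object once per frame; an agile
    scanner that schedules extra shots within the frame shortens the
    interval proportionally."""
    frame_time_ms = 1000.0 / frame_rate_hz
    return frame_time_ms / shots_per_frame_on_object

print(revisit_interval_ms(10))     # conventional 10 Hz scan: 100.0 ms
print(revisit_interval_ms(10, 4))  # 4 in-frame revisits: 25.0 ms
```

Both scanners report the same 10 Hz frame rate, yet the agile one re-interrogates a hazard four times faster, which is the whitepaper's argument for Object Revisit Rate as the more relevant metric.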

Extended Metric #2: From Angular Resolution to Instantaneous (Angular) Resolution

The assumption behind the use of resolution as a conventional LiDAR metric is that the entire Field of View will be scanned with a constant pattern and uniform power. However, AEye’s iDAR technology, based on advanced robotic vision paradigms like those utilized in missile defense systems, was developed to break this assumption. Agile LiDAR systems enable a dynamic change in both temporal and spatial sampling density within a region of interest, creating instantaneous resolution. These regions of interest can be fixed at design time, triggered by specific conditions, or dynamically generated at run-time.

“Laser power is a valuable commodity. LiDAR systems need to be able to focus their defined laser power on objects that matter,” said Allan Steinhardt, Chief Scientist at AEye. “Therefore, it is beneficial to measure how much more resolution can be applied on demand to key objects in addition to merely measuring static angular resolution over a fixed pattern. If you are not intelligently scanning, you are either over sampling, or under sampling the majority of a scene, wasting precious power with no gain in information value.”

Extended Metric #3: From Detection Range to Classification Range

The traditional metric of detection range may work for simple applications, but for autonomy the more critical performance measurement is classification range. While it has been generally assumed that LiDAR manufacturers need not know or care about how the domain controller classifies or how long it takes, this can ultimately add latency and leave the vehicle vulnerable to dangerous situations. The more a sensor can provide classification attributes, the faster the perception system can confirm and classify. Measuring classification range, in addition to detection range, will provide better assessment of an automotive LiDAR’s capabilities, since it eliminates the unknowns in the perception stack, pinpointing salient information faster.

Unlike first generation LiDAR sensors, AEye’s iDAR is an integrated, responsive perception system that mimics the way the human visual cortex focuses on and evaluates potential driving hazards. Using a distributed architecture and edge processing, iDAR dynamically tracks objects of interest, while always critically assessing general surroundings. Its software-configurable hardware enables vehicle control system software to selectively customize data collection in real-time, while edge processing reduces control loop latency. By combining software-definability, artificial intelligence, and feedback loops, with smart, agile sensors, iDAR is able to capture more intelligent information with less data, faster, for optimal performance and safety.

Medium: Researchers from Baidu Research, the University of Michigan, and the University of Illinois at Urbana-Champaign demo a way to hide objects from detection by LiDAR:

Imec and Holst Centre Transparent Fingerprint Sensor

Charbax publishes a video interview with Hylke Akkerman (Holst Centre) and Pawel Malinowski (Imec) on the transparent fingerprint sensor that won the 2019 I-Zone Best Prototype Award SID Display Week:

Thursday, July 18, 2019

Basler Announces ToF Camera with Sony Sensor

Basler unveils Blaze ToF camera based on Sony DepthSense IMX556PLR sensor technology:

Under-Display News

IFNews: Credit Suisse report on smartphone display market talks about under-display selfie camera in Oppo phones:

"Oppo also became the first smartphone brand to unveil an engineering sample with under-display selfie camera last week, by putting the front facing camera under the AMOLED display, although we believe its peers such as Xiaomi, Lenovo, Apple, Huawei, etc., are also working on similar solution. This technology allows a real full screen design as there is no hole or notch on the display, and the screen can act as a screen when the front camera is not in use. Nevertheless, the display image quality in the area surrounding the camera seems to be worse than the rest of the display as it requires special treatment and processing. Moreover, the native image quality (resolution, contrast, brightness, etc.) taken by the under-display selfie camera is also not comparable with current front facing camera. Our checks suggest the brands (not just Oppo) are currently working with software/AI companies for post-processing."

The report also talks about the efforts to reduce under-display fingerprint sensor thickness:

"All of the flagship Android smartphones showcased at the MWC Shanghai are equipped with under-display fingerprint sensing, mostly adopting optical sensor with only Samsung using ultrasonic sensor, and none of them is using Face ID-like biometric sensing. We believe under-display fingerprint is becoming the mainstream for Android's high-end smartphones and could further proliferate into mid-end as the overall cost comes down. We estimate overall under-display fingerprint shipment of ~200 mn units in 2019E (60 mn units for ultrasonic and 140 mn units for optical), up from ~30 mn units in 2018, and could further increase to 300 mn units in 2020E, excluding the potential adoption by iPhone.

For the optical under-display fingerprint, our checks suggest the industry is working on (1) thinner stacking for 5G; (2) half-screen sensing for OLED panel; (3) single-point sensing for LCD panel; and (4) full-screen in-cell solution for LCD panel. As mentioned earlier, 5G smartphone will consume more battery power and it will be necessary to reduce the thickness of the under-display fingerprint module for more room to house a bigger battery.

Currently, optical under-display fingerprint sensor module has a thickness of nearly 4 mm, as its structure requires certain distance between the CMOS sensor and the AMOLED display to have the best optical imaging performance. Given the overall thickness of the handset nowadays is around 7.5-9.0 mm, smartphone makers are required to sacrifice the battery capacity to make extra room for the optical under-display fingerprint sensor. The new structure for 5G smartphone that Goodix and Egis are working on will be adopting MEMS Pinhole structure, replacing the current 2P/3P optical lens structure, given the MEMS Pinhole design could achieve total thickness of 0.5-0.8 mm, versus ~4 mm for 2P/3P optical lens. Our checks suggest the supply chain is preparing sampling/qualification of the new structure in 2H19 for mass production in 2020."

Wednesday, July 17, 2019

TechInsights Overviews Smartphone CIS Advances: Pixel Scaling and Scaling Enablers

TechInsights' image sensor analyst Ray Fontaine continues his excellent series of reviews based on his paper for the International Image Sensors Workshop (IISW) 2019. Part 2 talks about pixel scaling:

"At TechInsights, as technology analysts we are often asked to predict: what’s next? So, what about scaling down below 0.8 µm? Of course, 0.7 µm and smaller pixels are being developed mostly in secrecy, including for non-obvious use cases. For now, we will stick with our trend analysis and suggest that if a 0.7 µm generation is going to happen, it may be ready for the back end of 2020 or in 2021."

"The absence of major callouts in 2016 and onward do not correlate to inactivity. The innovation we have been documenting in leading edge parts of recent years could be described as incremental, although it is a subjective assessment. In summary, it is our belief that development of DTI and associated passivation schemes was the main contributor to delayed pixel introduction of 1.12 µm down to 0.9 µm pixels."

See Device Startup Proposes "Quantum PATPD Pixel"

Buena Park, CA-based startup See Device Inc. proposes:

"Photon Assisted Tunneling Photodetector (PAT-PD) Technology, is new photodetector technology redefining what's possible with standard silicon CMOS image sensor without compromise to performance and efficiency. An innovative pixel array system formed by new structures and design mechanisms of silicon, SeeDevice's proprietary image sensor uses Quantum Tunneling resulting in high sensitivity, quantum efficiency, low SNR, and wide spectral response."

"The PAT-PD sensor is designed incorporating principles of quantum mechanics and nanotechnology to produce groundbreaking improvements in dynamic range, sensitivity, and low light capabilities without compromising size and efficiency. Standard sensors require conceding either the cost efficiency of CMOS and the better specifications of CCD sensors. This compromise is eliminated by the groundbreaking technology used in the SeeDevice image sensors and photodetectors.

PAT-PD completely redefines the physical principles used for sensors by using photon-activated current flow. SeeDevice owns 50 patents worldwide which enable us to produce industry-disrupting specifications by using photons as a trigger mechanism to enable current flow. The technology has a wide spectrum of applications and can be easily integrated since the entire device is built on a CMOS process.

PAT-PD enables device development with no compromise on technical specification. One device can have high resolution, high frame rate, high sensitivity, and a wide dynamic range without modifications."

I'm told that the registered agent of See Device Inc., Hoon Kim, has the same name as the CTO of the infamous Planet82 company. Does anybody know if this is the same person?

Thanks to RA for the link!

Tuesday, July 16, 2019

Goodix Sues Egis over Under-Display Fingerprint Patents Infringement

Digitimes: China-based Goodix sues Taiwan's Egis Technology for infringing its patents on under-display optical fingerprint sensors. Goodix sensors are used in many smartphones manufactured in China, while Egis sensors are used mostly in Samsung smartphones. The lawsuit was filed in the Beijing IP court. Goodix demands CNY50.5M (US$7.35M) in compensation from Egis.

"With five years of arduous, tireless and indigenous innovation, a dedicated R&D team of 400+ overcame great difficulties to bring to the world the innovative optical IN-DISPLAY FINGERPRINT SENSOR, which has been leading a technological trend in the global mobile industry since its debut in early 2017. As of today, the innovative technology has been adopted by 52 smartphone models offered by mainstream brands, benefiting hundreds of millions of worldwide consumers, and is recognized as the most popular biometric solution in the bezel-less era.

The precious achievement is a result of enormous investment and persistence – Goodix invests at least 10% of its revenue into research and development each year. In 2018, the number has reached 22.5%, with a compound growth rate of 80% in the past five years. As of June, 2019, Goodix had submitted over 3,300 patent filings and accumulated over 480 issued patents, among which, over 760 filings and 50 issued patents are parts of the optical IN-DISPLAY FINGERPRINT SENSOR technology.

The success of Goodix’s optical IN-DISPLAY FINGERPRINT SENSOR embodies the hard work of all employees of Goodix; yet the team’s painstaking effort was stolen by a competitor. IP theft is a disrespectful act towards enterprises that are dedicated to innovations. It is also a vandalism of the market order. Out of the responsibilities and accountabilities to the employees, customers, consumers, as well as the entire industry, Goodix will defend its legitimate rights and interests by the justice of law.

Together with industry partners and peers, Goodix Technology is looking forward to establishing a healthy and sustainable industry environment that respects innovations and intellectual property rights."

Last year, Goodix was involved in a couple of lawsuits on capacitive fingerprint sensors. Goodix sued its Chinese competitor Silead, while Goodix itself was sued by Sweden-based FCP. This year, the optical fingerprint sensors are becoming a field of legal battles.

ADI Presents ToF Development Kit

Analog Devices presents its VGA ToF camera kit developed in cooperation with Arrow. The kit uses a Panasonic CCD as the ToF sensor:

SPAD LiDAR from Chinese Academy of Science

Acta Photonics Sinica publishes the paper "A 16×1 Pixels 180nm CMOS SPAD-based TOF Image Sensor for LiDAR Applications" by CAO Jing, ZHANG Zhao, QI Nan, LIU Li-yuan, and WU Nan-jian.

"The sensor integrates 16 structure-optimized single photon avalanche diode pixels and a dual-counter-based 13-bit time-to-digital converter. Each pixel unit has a novel active quench and recharge circuit. The dark noise of single photon avalanche diode is reduced by optimizing the guard ring of the device. The active quench and recharge circuit with a feedback loop is proposed to reduce the dead time. A dual-counter-based time-to-digital converter is designed to prevent counting errors caused by the metastability of the counter in the time-to-digital converter. The sensor is fabricated in 180 nm CMOS standard technology. The measurement results show the median dark count rate of the single photon avalanche diode is 8 kHz at 1 V excess voltage, the highest photon detection efficiency is 18% at 550 nm light wavelength. The novel active quench circuit effectively reduces the dead time down to 8 ns. The time-to-digital converter with 416 ps resolution makes the system achieve the centimeter-accuracy detection. A 320×160 depth image is captured at a distance of 0.5 m. The maximum depth measurement nonlinear error is 1.9% and the worst-case precision is 3.8%."
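The reported TDC figures can be sanity-checked with the basic direct-ToF relation (a quick sketch; the helper names are mine, not from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def tdc_depth_bin_m(tdc_resolution_s):
    """Depth covered by one TDC bin in a direct-ToF system
    (round-trip time, hence the factor of 2)."""
    return C * tdc_resolution_s / 2.0

def tdc_range_m(tdc_resolution_s, bits):
    """Maximum unambiguous range of an n-bit TDC at that resolution."""
    return tdc_depth_bin_m(tdc_resolution_s) * (2 ** bits)

print(tdc_depth_bin_m(416e-12))  # ≈ 0.062 m per 416 ps bin
print(tdc_range_m(416e-12, 13))  # ≈ 511 m full scale for the 13-bit TDC
```

One 416 ps bin thus corresponds to about 6 cm of depth; averaging over many photon returns is what typically brings the precision down to the centimeter level the paper claims.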

Monday, July 15, 2019

TrinamiX Presents 3D FaceID Module

LinkedIn: TrinamiX unveils its compact FaceID solution for smartphones: "Protecting your data is nowadays more important than ever. #trinamiX3Dimaging allows you to protect your confidential infos by unlocking your mobile device only by facial recognition.

The system does not only provide 2D and 3D information, but also a material classification which adds another authentication layer: skin recognition is introduced as a further protective barrier. The #3DImager thus enhances the #safety of mobile devices.

See in the picture below the 3D Imaging system for mobile applications.

TrinamiX also demos its fiber-based distance measuring system for industrial applications:

Samsung Event-Driven Sensors

Hyunsurk Eric Ryu from Samsung presents the company's progress with event-driven sensors:

Melexis ToF Sensor Detailed Datasheet

Melexis publishes quite a detailed datasheet of its MLX75024 QVGA ToF sensor based on the Sony-SoftKinetic pixel. Such a detailed spec is quite a rarity in the world of ToF imaging: