Monday, March 30, 2020

UX Factory Image Sensor with Integrated AI

Park Jun-young, CEO of Korean startup UX Factory, is quoted saying that the company has developed an image sensor with an integrated AI engine:

"This 'cognitive' sensor, which will be located next to the main image sensor, is designed to be used only for face recognition, object recognition, and QR code recognition. It combines AI chip technology from UX Factory with technology from a domestic image sensor design company. This ultra-low-power chip reduces the power an electronic device uses to recognize an object to a hundredth of a conventional sensor's.

“If the existing image sensor's object-recognition operating power was 1 W (watt), the product developed this time can reduce the power to as little as 10 mW (milliwatts)."
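As a back-of-envelope check, the quoted figures (1 W down to 10 mW) do correspond to the claimed hundredfold reduction:

```python
# Sanity check of the claimed power reduction, using only the figures
# quoted above: 1 W for a conventional sensor vs. 10 mW for the new chip.
conventional_w = 1.0   # conventional sensor object-recognition power, watts
new_chip_w = 10e-3     # UX Factory chip, 10 mW expressed in watts

reduction_factor = conventional_w / new_chip_w
print(f"Power reduced by a factor of {reduction_factor:.0f}x")  # 100x, i.e. "a hundredth"
```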

The new image sensor appears to be a continuation of the K-Eye cooperation project with KAIST presented at ISSCC in 2017.

"Park's goal is to produce a sample chip in the second half of this year, then mass-produce the chip in the first half of next year and apply it to home appliances."

Sunday, March 29, 2020

Huawei Smartphone Claimed to Measure Human Body Temperature with Si-based Cameras

cnTechPost quotes an online interview with Huawei Consumer Business CEO Richard Yu:

"When asked if the P40 series has some functional design in terms of hygiene, Yu mentioned that the P40 Pro+ can detect human body temperature very accurately through a rear camera with a unique algorithm.

Yu added that Huawei's global sales team initially stated that such a feature was not needed, but the Chinese team insisted. Of course, Yu pointed out that Huawei cares about user privacy, and related functions require consumer authorization to turn on.

According to Yu's outlook, with AI training and sensor cooperation, Huawei products will have a bigger stage in the future. He also previewed an app that can detect data such as breathing rate, pressure value, heart rate, etc., which is currently ready and will be the first to be launched for Chinese users."

Saturday, March 28, 2020

TechInsights Finds Sony ToF Sensor Inside iPad Pro LiDAR, iFixit Tests LiDAR Operation

TechInsights tweets the first info from its Apple iPad Pro 2020 teardown, saying that the LiDAR sensor is made by Sony.

Update: TechInsights fixed a typo in the original tweet. The spatial resolution is 0.03 MP, 10x lower than initially reported.

"TechInsights has begun the teardown process of #Apple iPad Pro (Model A2068). Our early findings indicate a 4.18 mm x 4.30 mm (18.0 mm²) #Sony ToF sensor with 0.03 MP resolution & 10 µm pitch pixels within the #LiDAR system. Our analysis continues with in-depth reports to follow."
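The quoted figures can be cross-checked with simple arithmetic; the 200 × 150 array split below is only an illustrative assumption, since TechInsights gives only the total resolution:

```python
# Sanity check of the TechInsights figures: 0.03 MP at 10 µm pixel pitch
# inside a 4.18 mm x 4.30 mm die. The 200 x 150 split is a hypothetical
# example of a ~0.03 MP array, not a measured value.
pitch_um = 10.0
cols, rows = 200, 150                  # hypothetical array split
pixels = cols * rows                   # 30,000 pixels = 0.03 MP

array_w_mm = cols * pitch_um / 1000    # 2.0 mm
array_h_mm = rows * pitch_um / 1000    # 1.5 mm
array_area_mm2 = array_w_mm * array_h_mm
die_area_mm2 = 4.18 * 4.30             # ~18.0 mm^2, per TechInsights

print(f"{pixels / 1e6:.2f} MP array occupies {array_area_mm2:.1f} mm^2 "
      f"of the {die_area_mm2:.1f} mm^2 die")
```

The pixel array itself would then fill only a small fraction of the die, leaving room for the readout and processing circuitry.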

iFixit publishes a teardown video showing that the LiDAR IR illumination pattern is less dense than the FaceID one:

Current-Assisted SPAD

Vrije Universiteit Brussel, Belgium, publishes a paper "Current-Assisted Single Photon Avalanche Diode (CASPAD) Fabricated in 350 nm Conventional CMOS" by Gobinath Jegannathan, Hans Ingelberts, and Maarten Kuijk.

"A current-assisted single-photon avalanche diode (CASPAD) is presented with a large and deep absorption volume combined with a small p-n junction in its middle to perform avalanche trigger detection. The absorption volume has a drift field that serves as a guiding mechanism to the photo-generated minority carriers by directing them toward the avalanche breakdown region of the p-n junction. This drift field is created by a majority current distribution in the thick (highly-resistive) epi-layer that is present because of an applied voltage bias between the p-anode of the avalanching region and the perimeter of the detector. A first CASPAD device fabricated in 350-nm CMOS shows functional operation for NIR (785-nm) photons; absorbed in a volume of 40 × 40 × 14 μm3. The CASPAD is characterized for its photon-detection probability (PDP), timing jitter, dark-count rate (DCR), and after pulsing."

Friday, March 27, 2020

Sony Statement on Coronavirus Impact

Sony releases "Statement Regarding the Impact of the Spread of the Novel Coronavirus:"

"At this time, there has been no material impact on the production of CMOS image sensors, including any impact on the procurement of materials. However, Sony's primary customers in this segment are smartphone makers who rely on supply chains in China, and although recovery in these supply chains has led to sales gradually returning to normal levels, there is a risk that going forward sales could be impacted by a slowdown in the smartphone market."

Cambridge Mechatronics 3D Sensing Technology

Cambridge Mechatronics uses the Apple iPad Pro LiDAR announcement as an opportunity to emphasize the advantages of its 3D sensing technology:

"Systems using Indirect Time of Flight (iToF) technology have shipped in Android smartphones for some time, but their practical working range is only around two metres. This has limited their use to camera enhancements such as portrait photo background blurring. Apple advise their Direct Time of Flight (dToF) technology has a useful range of five metres.

To unlock the broadest range of AR user experiences, accurately measuring depth of ten metres or more is necessary. All technologies in use today compromise system resolution and performance when increasing range. However, CML has developed technology combining optical components, actuators and software to increase working range to ten metres and more without any compromise to measurement resolution or performance. This gives a best of both worlds solution targeted at smartphones, tablets and other mobile devices.

CML’s 3D sensing enhancement technology is available to licence now. We are working with our global partners, including major device brands and their supply chains, to bring the most engaging and immersive next generation AR experiences to consumers.

Update: A PCT Patent Application WO2020030916 "Improved 3D Sensing" by David Richards and Joshua Carr describes the company's approach:

"...there is provided an apparatus for use in generating a three-dimensional representation of a scene, the apparatus comprising: a time-of-flight (ToF) imaging camera system comprising a multipixel sensor and a light source and arranged to emit illumination having a spatially-nonuniform intensity over the field of view of the sensor; and an actuation mechanism for moving the illumination across at least part of the field of view of the sensor, thereby enabling generation of the representation. This may be achieved without moving the sensor.

The non-uniform illumination may be any form of illumination, including a beam of light, a pattern of light, a striped pattern of light, or a dot pattern of light."
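A minimal sketch of the patent's idea: the sensor's field of view stays fixed while an actuator shifts a non-uniform illumination pattern across it over successive frames. All dimensions and the stripe pattern below are illustrative assumptions, not taken from the patent:

```python
# Sketch of WO2020030916's scanned illumination: a striped pattern is shifted
# by an actuator so that, over a few frames, every column of the sensor's
# field of view receives direct illumination, without moving the sensor.
FOV_COLS = 12
STRIPE_PERIOD = 4      # one illuminated column every 4 columns (assumption)
STRIPE_WIDTH = 1

def illuminated_columns(shift):
    """Columns lit by the stripe pattern after the actuator shifts it."""
    return {c for c in range(FOV_COLS)
            if (c - shift) % STRIPE_PERIOD < STRIPE_WIDTH}

# Shifting the pattern by one column per frame covers the whole field of
# view in STRIPE_PERIOD frames.
covered = set()
for shift in range(STRIPE_PERIOD):
    covered |= illuminated_columns(shift)

print(sorted(covered))  # every column 0..11 has been illuminated once
```

Concentrating the emitter power into stripes (or dots) is what extends the range: each illuminated spot receives more optical power than under uniform flood illumination, at the cost of needing several actuated frames per depth map.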

Thursday, March 26, 2020

ST Announces 3rd Generation Global Shutter Stacked Sensors

GlobeNewswire: STMicro aims at computer-vision applications with new high-speed global-shutter image sensors. The new stacked sensors feature class-leading pixel size, high sensitivity, and low crosstalk.

The VD55G0 with 640 x 600 pixels and the VD56G3 with 1.5MP measure 2.6mm x 2.5mm and 3.6mm x 4.3mm, respectively, and are said to be the smallest on the market relative to their resolution. Embedded optical-flow processing in the VD56G3 calculates movement vectors without the need for host computer processing. Samples are shipping now to lead customers.

“These new global shutter image sensors are based on our third generation of advanced pixel technology and deliver significant improvements in performance, size, and system integration,” said Eric Aussedat, Imaging Sub-Group General Manager and EVP of the Analog, MEMS and Sensors Group, STMicro. “They are enabling another step forward in computer-vision applications, empowering designers to create tomorrow’s smart, autonomous industrial and consumer devices.”

Senseeker Announces 8 µm and 12 µm Pitch Dual-Band IR DROICs

Senseeker Engineering announces the Oxygen RD0092, the world's first 8 µm pitch dual-band digital readout IC (DROIC). The Oxygen RD0092 supports a 1280 x 720 frame size at over 500 fps and dual-polarity inputs to provide compatibility with all industry-standard direct-injection detector materials. The solution was designed to optimize infrared imaging system performance through state-of-the-art integrated features and multiple operating modes that offer flexibility for a wide range of high-performance application requirements.

"The RD0092 is our first off-the-shelf readout product and we wanted to make sure that it strikes the right balance between being feature-rich and easy to operate," said Thomas Poonnen, Director of Engineering. "You can change operating modes or window sizes on the fly and toggle detector polarity or checkerboard integration pattern between frames, all of which can be accomplished by flipping just a few bits."

Senseeker Engineering announces the Magnesium MIL RP0092, an advanced 12 µm pitch high dynamic range dual-band digital pixel readout IC (DPROIC). Product sales are restricted to customers who have approval from the U.S. Government. The Magnesium MIL RP0092 supports a 1280 x 720 frame size at up to 120 fps, with dual-polarity inputs to provide compatibility with all industry-standard direct-injection compatible detector materials.
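For scale, the quoted frame sizes and rates imply the following raw pixel rates (bit depths are not quoted, so this is pixels per second rather than a data rate in bits):

```python
# Back-of-envelope pixel-rate comparison of the two Senseeker readouts,
# using only the frame format and frame rates quoted above.
frame_pixels = 1280 * 720               # both parts support a 1280 x 720 format

rd0092_pix_per_s = frame_pixels * 500   # Oxygen RD0092 at over 500 fps
rp0092_pix_per_s = frame_pixels * 120   # Magnesium MIL RP0092 at up to 120 fps

print(f"RD0092: {rd0092_pix_per_s / 1e6:.0f} Mpixel/s, "
      f"RP0092: {rp0092_pix_per_s / 1e6:.0f} Mpixel/s")
```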

Thanks to MJ for the pointer!

Wednesday, March 25, 2020

Yole on Coronavirus Impact on CIS Market

Yole Developpement's Q4 2019 quarterly monitor "CIS: Q4 2019 went way above forecast but this was before COVID-19" states:


  • Q4 2019 is 17% above the revenue forecast and reaches US$5,746 million: 11.3% of the upside is due to volume and 9.1% to ASP.
  • The coronavirus outbreak will mostly impact the mobile and consumer CIS markets, with a drop in the global smartphone market forecast expected in Q1 and Q2 2020.
  • 2019 YoY revenue growth is higher than expected and reaches 25%, with Q2Q growth of 38% in Q4 2019.
  • CIS YoY growth should slow down to 7% in 2020 but this number will be aggravated by the outbreak of COVID-19.
  • Long term growth should go below 10% within 5 years.

Y2019 NUMBERS: The best ever year for the CIS industry

This time reality exceeded Yole Développement (Yole) forecast quite significantly. Yole had predicted revenue of US$17.2b for 2019, and this prediction ended 11% below the confirmed numbers for the year. The extensive growth of CIS has brought this semiconductor specialty to revenues of US$19.3b in 2019, reaching 4.6% of total semiconductor sales.

Reflecting on the year’s dynamics, Q1 and Q2 2019 had been underwhelming, both running 6% below expectation in a context of smartphone market saturation and trade-war rhetoric. Q3 and Q4 totally reversed the gloomy trend of H1 2019, and the release of exciting smartphones with numerous cameras propelled the industry to overcapacity, bringing US$5.7 billion per quarter to the ecosystem…
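A quick check of the quoted numbers confirms the stated forecast gap:

```python
# Verify the "11% below" claim from the US$17.2b forecast and the
# US$19.3b of confirmed 2019 CIS revenue quoted above.
forecast_b = 17.2
actual_b = 19.3

gap = (actual_b - forecast_b) / actual_b
print(f"Forecast came in {gap:.0%} below the confirmed figure")  # 11%
```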

Q1 & Q2 2020: Short term forecast will be impacted by COVID-19

“What we cannot predict at Yole is the possibility of a systemic recession,” comments Pierre Cambou, Principal Analyst, Imaging at Yole. “People will still buy smartphones and smart speakers in 2021, so the risk is more a contamination coming from the financial sector than a biological threat,” he adds.

CNN for Event-Based Sensors

University of Zurich-ETH publishes a video supplement to the paper "Event-based Asynchronous Sparse Convolutional Networks" by Nico Messikommer, Daniel Gehrig, Antonio Loquercio, and Davide Scaramuzza.

"Recently, pattern recognition algorithms, such as learning-based methods, have made significant progress with event cameras by converting events into synchronous dense, image-like representations and applying traditional machine learning methods developed for standard cameras. However, these approaches discard the spatial and temporal sparsity inherent in event data at the cost of higher computational complexity and latency. In this work, we present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output, thus directly leveraging the intrinsic asynchronous and sparse nature of the event data. We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks without sacrificing accuracy.

In addition, our framework has several desirable characteristics: (i) it exploits spatio-temporal sparsity of events explicitly, (ii) it is agnostic to the event representation, network architecture, and task, and (iii) it does not require any train-time change, since it is compatible with the standard neural networks' training process.

We thoroughly validate the proposed framework on two computer vision tasks: object detection and object recognition. In these tasks, we reduce the computational complexity up to 20 times with respect to high-latency neural networks. At the same time, we outperform state-of-the-art asynchronous approaches up to 24% in prediction accuracy."
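A toy illustration of the paper's core observation: a single event can only change a convolution's output inside that event's receptive field, so an asynchronous update touches at most K × K output sites instead of reprocessing the whole frame. The sizes and kernel below are arbitrary illustrative choices, not taken from the paper:

```python
# Event-driven sparse update of a cached convolution output. When an event
# arrives at (y, x), only the K x K neighborhood of outputs that depend on
# that pixel is recomputed; everything else stays valid.
H, W, K = 8, 8, 3                       # frame size and (odd) kernel size
kernel = [[1, 0, -1], [2, 0, -2], [1, 0, -1]]  # e.g. a Sobel-like filter
frame = [[0.0] * W for _ in range(H)]   # accumulated event frame
out = [[0.0] * W for _ in range(H)]     # cached convolution output

def conv_at(y, x):
    """Dense convolution result at one output site (zero padding)."""
    s = 0.0
    for dy in range(K):
        for dx in range(K):
            yy, xx = y + dy - K // 2, x + dx - K // 2
            if 0 <= yy < H and 0 <= xx < W:
                s += kernel[dy][dx] * frame[yy][xx]
    return s

def on_event(y, x, polarity):
    """Asynchronous update: recompute only the event's receptive field."""
    frame[y][x] += polarity
    updated = 0
    for yy in range(max(0, y - K // 2), min(H, y + K // 2 + 1)):
        for xx in range(max(0, x - K // 2), min(W, x + K // 2 + 1)):
            out[yy][xx] = conv_at(yy, xx)
            updated += 1
    return updated                      # at most K*K = 9 sites, not H*W = 64

print(on_event(4, 4, +1), "output sites touched out of", H * W)
```

The same locality argument applies layer by layer, which is how the paper turns a network trained on dense event frames into an asynchronous one with identical output.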