Tuesday, June 30, 2020

Assorted News: Brookman, Smartsens, AIStorm, Cista, Prophesee, Unispectral, SiLC, Velodyne, Himax

Brookman demos four of its pToF cameras working simultaneously with no interference between them:



SmartSens reports it has garnered three awards at the 2020 China IC Design Award Ceremony and Leaders Summit, co-presented by EE Times China, EDN China, and ESMC China. SmartSens won awards in three categories: Outstanding Technical Support (IC Design Companies), Popular IC Products of the Year (Sensors/MEMS), and Silicon 100.


Other imaging companies on the EE Times Silicon 100 list of emerging startups to watch are AIStorm, Cista Systems, Prophesee, Unispectral, and SiLC.


Bloomberg reports that the blank-check company Graf Industrial Corp. is in talks to merge with Velodyne Lidar in a deal that would take Velodyne public. Graf Industrial Corp. was established in 2018 as a blank-check company with the aim of acquiring one or more businesses or assets via a merger, capital stock exchange, asset acquisition, stock purchase, or reorganization. Merging with a blank-check company has become a popular way for companies to go public as the coronavirus pandemic upends the markets.

GlobeNewswire: Himax launches the WiseEye WE-I Plus HX6537-A AI platform that supports Google’s TensorFlow Lite for Microcontrollers.

The Himax WiseEye solution is composed of the Himax HX6537-A processor and a Himax Always-on sensor. With support for TensorFlow Lite for Microcontrollers, developers can take advantage of the WE-I Plus platform and the TensorFlow Lite for Microcontrollers ecosystem to develop NN-based edge AI applications targeting the notebook, TV, home appliance, battery camera, and IP surveillance edge computing markets.

The processor remains in low-power mode until a movement or object is identified by the accelerators. Then the DSP, running NN inference on the TensorFlow Lite for Microcontrollers kernel, performs the needed CV operations and sends the metadata results over the TLS (Transport Layer Security) protocol to the main SoC and/or a cloud service for application-level operation. The average power consumption for the Google person detection example inference can be under 5mW, and the average power consumption of the Himax Always-on sensor can be less than 1mW.
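For readers who want to see what such an inference call looks like, below is a minimal sketch of a TensorFlow Lite for Microcontrollers invocation in C++, loosely following Google's public person_detection example. The model symbol, arena size, frame source, and output index are illustrative assumptions rather than part of the Himax SDK, and exact TFLM header paths and resolver class names differ between releases.

// Minimal sketch of a TensorFlow Lite for Microcontrollers inference call,
// loosely following Google's public person_detection example.
// NOTE: model symbol, arena size and output index are illustrative assumptions;
// header paths and resolver class names vary between TFLM releases.
#include <cstdint>
#include <cstring>

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_person_detect_model_data[];  // model flatbuffer baked into flash

constexpr int kArenaSize = 100 * 1024;                     // working memory for tensors; tune to the model
static uint8_t tensor_arena[kArenaSize];

// Returns the quantized "person" score, or -1 on failure.
int RunPersonDetection(const uint8_t* grayscale_frame) {
  static tflite::MicroErrorReporter error_reporter;
  const tflite::Model* model = tflite::GetModel(g_person_detect_model_data);

  static tflite::AllOpsResolver resolver;                  // a MicroMutableOpResolver with only the needed ops saves flash
  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kArenaSize, &error_reporter);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  TfLiteTensor* input = interpreter.input(0);
  std::memcpy(input->data.uint8, grayscale_frame, input->bytes);  // frame from the always-on sensor

  if (interpreter.Invoke() != kTfLiteOk) return -1;

  TfLiteTensor* output = interpreter.output(0);
  return output->data.uint8[1];                            // index of the "person" class in the example model
}

On the WE-I Plus, a call like this would only run after the hardware accelerators wake the DSP, which is what keeps the average power in the low-milliwatt range quoted above.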

“Himax WE-I Plus, coupled with Himax AoS image sensors, broadens TensorFlow Lite ecosystem offering and provides developers with possibilities of high performance and ultra low power,” said Pete Warden, Technical Lead of TensorFlow Lite for Microcontrollers at Google.

Monday, June 29, 2020

Sony Prepares Subscription Service for its AI-Integrated Sensors

Reuters, Bloomberg, Yahoo: Sony plans to sell software by subscription for its data-analyzing sensors with an integrated AI processor, such as the recently announced IMX500.

“We have a solid position in the market for image sensors, which serve as a gateway for imaging data,” said Sony’s Hideki Somemiya, who heads a new team developing sensor applications. Analysis of such data with AI “would form a market larger than the growth potential of the sensor market itself in terms of value,” Somemiya said in an interview, pointing to the recurring nature of software-dependent data processing versus a hardware-only business.

“Most of our sensor business today can be explained only by revenues from our five biggest customers, who would buy our latest sensors as we develop,” Somemiya said. “In order to be successful in the solution business, we need to step outside that product-oriented approach.”

Customer support is currently included in the one-time price of Sony sensors, but Somemiya said Sony would provide the service via a separate subscription in the future. Made-by-Sony software tools would initially focus on supporting the company’s own sensors; the coverage may later expand to retain customers even if they decide to switch to non-Sony sensors, he added.

“We often get queries from customers about how they can use our exotic products such as polarization sensors, short-wavelength infrared sensors and dynamic vision sensors,” Somemiya said. “So we offer them hands-on support and customized tools.”

Sony will seek business partnerships and acquisitions to build out its software engineering expertise and offer seamless support anywhere in the world. Somemiya said the sensor unit’s subscription offering is a long-term plan and shouldn’t be expected to become profitable anytime soon, at least not at meaningful scale.


Sunday, June 28, 2020

LFoundry Data Shows that BSI Sensors are Less Reliable than FSI

LFoundry and Sapienza University of Rome, Italy, publish an open-access paper in the IEEE Journal of the Electron Devices Society, "Performance and reliability degradation of CMOS Image Sensors in Back-Side Illuminated configuration" by Andrea Vici, Felice Russo, Nicola Lovisi, Aldo Marchioni, Antonio Casella, and Fernanda Irrera. The data shows that, for the specific failure mechanism discussed, the lifetime of BSI sensors is 150-1,000 times shorter than that of FSI sensors. Of course, many other failure sources can mask this huge difference.

"We present a systematic characterization of wafer-level reliability dedicated test structures in Back-Side-Illuminated CMOS Image Sensors. Noise and electrical measurements performed at different steps of the fabrication process flow, definitely demonstrate that the wafer flipping/bonding/thinning and VIA opening proper of the Back-Side-Illuminated configuration cause the creation of oxide donor-like border traps. Respect to conventional Front-Side-Illuminated CMOS Image Sensors, the presence of these traps causes degradation of the transistors electrical performance, altering the oxide electric field and shifting the flat-band voltage, and strongly degrades also reliability. Results from Time-Dependent Dielectric Breakdown and Negative Bias Temperature Instability measurements outline the impact of those border traps on the lifetime prediction."


"TDDB measurements were performed on n-channel Tx at 125C, applying a gate stress voltage Vstress in the range +7 to +7.6V. For each Vstress several samples were tested and the time-to-breakdown was measured adopting the three criteria defined in the JEDEC standard JESD92 [21]. For each stress condition, the fit of the Weibull distribution of the time-to-breakdown values gave the corresponding Time-to Failure (TTF). Then, the TTFs were plotted vs. Vstress in a log-log scale and the lifetime at the operating gate voltage was extrapolated with a power law (E-model [22]).

NBTI measurements were performed on p-channel Tx at 125C, applying Vstress in the range -3 to -4V. Again, several Tx were tested. Following the JEDEC standard JESD90 [23], in this case, lifetime is defined as the stress time required to have a 10% shift of the nominal VT. The VT shift has a power law dependence on the stress time and the lifetime value at the operating gate voltage could be extrapolated."


"Noise and charge pumping measurements denoted the presence of donor-like border traps in the gate oxide, which were absent in the Front-Side Illuminated configuration. The trap density follows an exponential dependence on the distance from the interface and reaches the value 2x10e17 cm-3 at 1.8 nm. Electrical measurements performed at different steps during the manufacturing process demonstrated that those border traps are created during the process loop of the Back-Side configuration, consisting of wafer upside flipping, bonding, thinning and VIA opening.

Traps warp the oxide electric field and shift the flat-band voltage with respect to the Front-Side configuration, as if a positive charge centroid of 1.6x10e-8 C/cm2 at 1.7 nm was present in Back-Side configuration, altering the drain and gate current curves.

We found that the donor-like border traps affect also the Back-Side device long term performance. Time Dependent Dielectric Breakdown and Negative Bias Temperature Instability measurements were performed to evaluate lifetime. As expected, the role of border traps in the lifetime prediction is different in the two cases, but the reliability degradation of Back-Side with respect to Front-Side-Illuminated CMOS Image Sensors is evident in any case."

Update: Here is a comment from Felice Russo:

The following comments intend to clarify the scope of the paper “Performance and reliability degradation of CMOS Image Sensors in Back-Side Illuminated configuration”.

The title reported in the Image Sensor Blog, “LFoundry Data shows that BSI Sensors are Less Reliable than FSI”, leads to a conclusion different from the intent of the authors. The purpose of the paper was to evaluate potential reliability failure mechanisms, intrinsic to a particular BSI process flow, rather than highlighting a general BSI reliability weakness. BSI sensors produced at LFoundry incorporate numerous process techniques to exceed all product reliability requirements.

It is widely accepted [Ref.1-3] that the BSI process is sensitive to charging effects, independent of the specific process flow and production line. This may cause oxide degradation, mainly related to the presence of additional distributions of donor-like traps in the oxide, located within a tunneling distance from the silicon-oxide interface (border/slow traps) and likely linked to an oxygen vacancy.

The work, published by the University, was based on wafer level characterization data, collected in 2018 using dedicated test structures fabricated with process conditions properly modified to emphasize the influence of the main BSI process steps on the trap generation.

To address these potential intrinsic failure mechanisms, several engineering solutions have been implemented to meet all reliability requirements up to automotive grade. Our earlier published work, [Ref.4], shows BSI can match FSI TDDB lifetime with the properly engineered solutions. Understandably not all solutions can be published.

Results have been used to further improve the performance of BSI products and to identify subsequent innovative solutions for the future generations of BSI sensors.

References:
[1] J. P. Gambino et al., “Device reliability for CMOS image sensors with backside through-silicon vias,” in Proceedings of the IEEE International Reliability Physics Symposium (IRPS), 2018.
[2] A. Lahav et al., “BSI complementary metal-oxide-semiconductor (CMOS) image sensors,” in High Performance Silicon Imaging, Second Edition, edited by D. Durini, 2014.
[3] S. G. Wuu et al., “A manufacturable back-side illumination technology using bulk-Si substrate for advanced CMOS image sensors,” in Proceedings of the International Image Sensor Workshop, 2009.
[4] A. Vici et al., “Through-silicon-trench in back-side-illuminated CMOS image sensors for the improvement of gate oxide long term performance,” in Proceedings of the International Electron Devices Meeting (IEDM), 2018.

Saturday, June 27, 2020

Imec Presentation on Low-Cost NIR and SWIR Imaging

SPIE publishes an Imec presentation, "Image sensors for low cost infrared imaging and 3D sensing" by Jiwon Lee, Epimetheas Georgitzikis, Edward Van Sieleghem, Yun Tzu Chang, Olga Syshchyk, Yunlong Li, Pierre Boulenc, Gauri Karve, Orges Furxhi, David Cheyns, and Pawel Malinowski (available after free SPIE account registration).

"Thanks to state-of-the-art III-V and thin-film (organics or quantum dots) material integration experience combined with imager design and manufacturing, imec is proposing a set of research activities which ambition is to innovate in the field of low cost and high resolution NIR/SWIR uncooled sensors as well as 3D sensing in NIR with Silicon-based Time-of-Flight pixels. This work will present the recent integration achievements with demonstration examples as well as development prospects in this research framework."

1/f and RTS Noise Model

The open-access IEEE Journal of the Electron Devices Society publishes a Hong Kong University of Science and Technology paper, "1/f Low Frequency Noise Model for Buried Channel MOSFET" by Shi Shen and Jie Yuan.

"The Low Frequency Noise (LFN) in MOSFETs is critical to Signal-to-Noise Ratio (SNR) demanding circuits. Buried Channel (BC) MOSFETs are commonly used as the source-follower transistors for CCDs and CMOS image sensors (CIS) for lower LFN. It is essential to understand the BC MOSFETs noise mechanism based on trap parameters with different transistor biasing conditions. In this paper, we have designed and fabricated deep BC MOSFETs in a CIS-compatible process with 5 V rating. The 1/f Y LFN is found due to non-uniform space and energy distributed oxide traps. To comprehensively explain the BC MOSFETs noise spectrum, we developed a LFN model based on the Shockley-Read-Hall (SRH) theory with WKB tunneling approximation. This is the first time that the 1/f Y LFN spectrum of BC MOSFET has been numerically analyzed and modeled. The Random Telegraph Signal (RTS) amplitudes of each oxide traps are extracted efficiently with an Impedance Field Method (IFM). Our new model counts the noise contribution from each discretized oxide trap in oxide mesh grids. Experiments verify that the new model matches well the noise power spectrum from 10 to 10k Hz with various gate biasing conditions from accumulation to weak inversion."

Friday, June 26, 2020

ST ToF Products Tour

ST publishes a nice presentation, "Going further with FlightSense," at the Sensor+Test 2020 virtual exhibition. There is also a short presentation about FlightSense applications.

Thursday, June 25, 2020

v2e and Event-Driven Camera Nonidealities

ETH Zurich publishes an arXiv.org paper "V2E: From video frames to realistic DVS event camera streams" by Tobi Delbruck, Yuhuang Hu, and Zhe He. The v2e open-source tool is available here.

"To help meet the increasing need for dynamic vision sensor (DVS) event camera data, we developed the v2e toolbox, which generates synthetic DVS event streams from intensity frame videos. Videos can be of any type, either real or synthetic. v2e optionally uses synthetic slow motion to upsample the video frame rate and then generates DVS events from these frames using a realistic pixel model that includes event threshold mismatch, finite illumination-dependent bandwidth, and several types of noise. v2e includes an algorithm that determines the DVS thresholds and bandwidth so that the synthetic event stream statistics match a given reference DVS recording. v2e is the first toolbox that can synthesize realistic low light DVS data. This paper also clarifies misleading claims about DVS characteristics in some of the computer vision literature. The v2e website is this https URL and code is hosted at this https URL."


The paper also explains some of the misconceptions about DVS sensors:

"Debunking myths of event cameras: Computer vision papers about event cameras have made rather misleading claims such as “Event cameras [have] no motion blur” and have “latency on the order of microseconds” [7]–[9], which were perhaps fueled by the titles (though not the content) of papers like [1], [10], [11]. Review papers like [5] are more accurate in their descriptions of DVS limitations, but are not very explicit about the actual behavior.

DVS cameras must obey the laws of physics like any other vision sensor: They must count photons. Under low illumination conditions, photons become scarce and therefore counting them becomes noisy and slow. v2e is aimed at realistic modeling of these conditions, which are crucial for deployment of event cameras in uncontrolled natural lighting."