Friday, July 23, 2021

ST Presentation on Pixel-Level Stacking

An ST presentation "Challenges and capabilities of 3D integration in CMOS imaging sensors" by Dominique Thomas, Jean Michailos, Krysten Rochereau, Joris Jourdon, and Sandrine Lhostis summarizes the company's achievements up to September 2019:

Thursday, July 22, 2021

A (Wrong) Attempt to Improve Imaging

University of Glasgow and University of Edinburgh publish a paper "Noise characteristics with CMOS sensor array scaling" by Claudio Accarino, Valerio F. Annese, Boon Chong Cheah, Mohammed A. Al-Rawhani, Yash D. Shah, James Beeley, Christos Giagkoulovitis, Srinjoy Mitra, and David R. S. Cumming. The paper compares the SNR of a single large sensor with that of an array of smaller sensors having the same combined area. The conclusion looks fairly strange:

"In this paper we have compared the noise performance of a sensor system made using a single large sensor, versus the noise achieved when averaging the signal from an array of small independent sensors. Whilst the SNR of a smaller physical sensor is typically less than that of a single larger sensor, the properties of uncorrelated Gaussian noise are such that the overall performance of an array of small sensors is significantly better when the signal is averaged.

This elegant result suggests that there is merit in using sensor arrays, such as those that can be implemented in CMOS, even if the application only calls for a single measurement. Given the relatively low cost of CMOS and the wide availability of CMOS sensors, it is therefore beneficial to use arrays in any application where low noise or multiple parallel sensing are a priority."
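The averaging argument in the quoted conclusion hinges on what each small sensor actually measures. A minimal numerical sketch (with assumed, illustrative signal and read-noise values) contrasts the paper's setting, where every small sensor independently reads the full signal, with the imaging case, where the incoming light is split among the small sensors:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = 100.0      # measurand in arbitrary units (assumed)
sigma = 5.0         # per-sensor additive Gaussian read noise (assumed)
n = 16              # number of small sensors replacing one large one
trials = 200_000

# One large sensor: a single measurement with a single read-noise sample.
single = signal + rng.normal(0, sigma, trials)

# Case A (the paper's setting): every small sensor sees the FULL signal,
# i.e. N independent probes of the same quantity. Averaging N uncorrelated
# readings shrinks the noise sigma by sqrt(N).
case_a = (signal + rng.normal(0, sigma, (trials, n))).mean(axis=1)

# Case B (imaging): the light is split N ways, so each small sensor sees
# signal/N plus the SAME read noise. Summing restores the signal but
# accumulates noise as sqrt(N).
case_b = (signal / n + rng.normal(0, sigma, (trials, n))).sum(axis=1)

print(f"SNR single large sensor:      {signal / single.std():.1f}")
print(f"SNR array, shared signal (A): {signal / case_a.std():.1f}")
print(f"SNR array, split signal  (B): {signal / case_b.std():.1f}")
```

Under these assumptions the array wins by a factor of sqrt(N) only in case A; when the photon flux is divided among the pixels, as in a camera, the array is sqrt(N) worse, which is presumably why the post's title calls the attempt "wrong" for imaging.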

More about Sony-TSMC Fab in Japan

NikkeiAsia, TaiwanNews: The planned TSMC fab in Kumamoto, on the island of Kyushu in western Japan, would go forward in two phases, according to Nikkei Asia. The board of TSMC is expected to decide on the investment in the current quarter. 

The plant is expected to start operation in 2023. Once both phases are complete, the new fab will produce about 40,000 wafers per month using a 28nm process. The fab is expected to be mainly used to make image sensors for Sony, TSMC's largest Japanese customer. Nikkei has been told that TSMC is open to a collaboration that would give Sony more say in operating the plant and negotiating with the Japanese government.

ElectronicsWeekly presents another view on the Sony-TSMC fab project: "Sony has a $7 billion+ revenue business in image sensors which makes the $2.5 billion cost of such a fab a reasonable proposition."

Bloomberg reports that Japan intends to revive its domestic chip design and production industry and reverse the current downward R&D trend:

Wednesday, July 21, 2021

Image Sensors at In-Person Autosens Brussels

AutoSens Brussels is to be held in person (!!!) on September 15-16. The agenda has been published and includes a lot of image sensor related stuff:
  • Sensor technology and safety features to address the challenging needs for reliable and robust sensing/viewing systems
    Yuichi Motohasi, Automotive Image Sensor Applications Engineer, Sony
    In this presentation, the key characteristics of the image sensors will be presented. Also, the state of the art of functional safety and cybersecurity requirements to achieve reliable and robust sensing/viewing systems will be discussed.
  • Beyond the Visible: SWIR Enhanced Gated Depth Imaging
    Ziv Livne, CBO, TriEye
    We will introduce a new and exciting SWIR-based sensor modality ("SEDAR") which provides HD imaging and ranging information in all conditions: how it works, its main benefits, and why it is the future. We will then show experimental evidence of SEDAR's superiority over sensors at other wavelengths, including recordings in difficult conditions such as nighttime, fog, glare, and dust, as well as depth-map field results.
  • Automotive 2.1 µm High Dynamic Range Image Sensors
    Sergey Velichko, Sr. Manager, ASD Technology and Product Strategy, ON Semiconductor
    This work describes a first-generation 8.3 Megapixel (MP) 2.1 µm dual conversion gain (DCG) pixel image sensor developed and released to the market. The sensor has high dynamic range (HDR) up to 140 dB and cinematographic image quality. Non-Bayer color filter arrays significantly improve low-light performance for front and surround Advanced Driver Assistance System (ADAS) cameras. This enables transitioning from level 2 to level 3 autonomous driving (AD) and fulfilling challenging Euro NCAP requirements.
  • High Dynamic Range Backside Illuminated Voltage Mode Global Shutter CIS for in Cabin Monitoring
    Boyd Fowler, CTO, OmniVision Technologies
    Although global shutter operation is required to minimize motion artifacts in in-cabin monitoring, it forces large changes in the CIS architecture. Most global shutter CMOS image sensors available in the market today have larger pixels and lower dynamic range than rolling shutter image sensors. This adversely impacts their size/cost and performance under different lighting conditions. In this paper we describe the architecture and operation of backside illuminated voltage mode global shutter pixels. We also describe how the dynamic range of these pixels can be extended using either multiple integration times or LOFIC techniques. In addition, we describe how backside illuminated voltage mode global shutter pixels can be scaled, enabling smaller, more cost-effective camera solutions, and present results from recent backside illuminated voltage mode global shutter CIS.
  • Chip-scale LiDAR for affordability and manufacturability
    Dongjae Shin, Principal researcher, Samsung Advanced Institute of Technology
    In this presentation, we introduce a chip-scale solid-state LiDAR technology promising the cost and manufacturability advantages inherited from silicon technology. The challenge of light source integration has been overcome by the III/V-on-silicon technology that has recently emerged in the silicon industry. With the III/V-on-silicon chip at the core, initial LiDAR module performance, performance scalability, and application status are presented for the first time. Cost-volume analysis and ecosystem implications are also discussed.
  • A novel scoring methodology and tool for assessing LiDAR performance
    Dima Sosnovsky, Principal System Architect, Huawei
    This presentation introduces a tool that summarizes the most crucial characteristics and provides common ground for comparing each solution's pros and cons, drawing a scoring envelope based on 8 major parameters of the LiDAR system that represent its performance, suitability for automotive applications, and business advantages.
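The "multiple integration times" HDR technique mentioned in the OmniVision abstract can be illustrated with a naive two-exposure merge. This is only a sketch under assumed parameters (12-bit full scale, 16x exposure ratio), not any vendor's actual pipeline:

```python
import numpy as np

def merge_hdr(long_exp, short_exp, ratio, full_scale=4095):
    """Naive two-capture HDR merge: keep the long exposure where it is
    not saturated, otherwise fall back to the rescaled short exposure."""
    long_exp = np.asarray(long_exp, dtype=float)
    short_exp = np.asarray(short_exp, dtype=float)
    saturated = long_exp >= full_scale
    return np.where(saturated, short_exp * ratio, long_exp)

# Scene radiance spans 100..64000; the long exposure clips at 4095,
# while the 16x-shorter exposure still captures the highlights.
scene = np.array([100.0, 2000.0, 30000.0, 64000.0])
long_exp = np.minimum(scene, 4095)
short_exp = np.minimum(scene / 16, 4095)
print(merge_hdr(long_exp, short_exp, ratio=16))
```

The merged output recovers the full scene range, extending dynamic range by roughly 20*log10(ratio) dB over a single capture; production sensors add careful blending around the switch point to avoid visible seams.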

Assorted Videos: Omnivision, Aeye, Qualcomm, MIPI

Omnivision continues its series of video interviews with its CTO Boyd Fowler. This part is about LED flicker mitigation:

Aeye tells a (marketing) story behind its inception:

Qualcomm publishes a panel with its customers adopting its AI camera technology:

MIPI Alliance publishes a couple of presentations about future imaging needs and the A-PHY standard (link1 and link2):

Tuesday, July 20, 2021

Luminar Acquires InGaAs Sensor Manufacturer

BusinessWire: LiDAR maker Luminar is acquiring its exclusive InGaAs chip design partner and manufacturer, OptoGration Inc., securing its supply chain as Luminar scales Iris LiDAR into series production. The acquisition secures a key part of Luminar’s supply chain and enables deeper integration with its ROIC design subsidiary Black Forest Engineering (BFE), which Luminar acquired in 2017. Luminar is combining the latest technology from OptoGration and BFE to power its new fifth-generation lidar chip in Iris as the company prepares for series production of its product and technology.

For the past five years, Luminar has been closely collaborating with OptoGration, developing, iterating, and perfecting the specialized InGaAs photodetector technology that is required for 1550nm lidar. OptoGration has capacity to produce approximately one million InGaAs chips with Luminar’s design each year at its specialized fabrication facility in Wilmington, Mass., with the opportunity to expand to up to ten million units per year.

“Acquiring OptoGration is the culmination of a deep, half-decade long technology partnership that has dramatically advanced the proprietary lidar chips that power the industry-leading performance of our newest Iris sensor,” said Jason Eichenholz, Co-founder and CTO at Luminar. “The OptoGration team is unique in their ability to deliver photodetectors with the performance and quality that achieve our increasingly demanding requirements. Chip-level innovation and integration has been key to unlocking our performance and driving the substantial cost reductions we’ve achieved.”

Luminar combines its InGaAs photodetector chips from OptoGration with silicon ASICs produced by BFE to create its lidar receiver and processing chip, which is said to be the most sensitive, highest-dynamic-range InGaAs receiver of its kind in the world.

OptoGration’s founders are joining Luminar as part of this transaction and will continue to lead the business with support from Luminar.

“Luminar is a great home for OptoGration because we share a vision for transforming automotive safety and autonomy with lidar,” said William Waters, President of OptoGration. “We also share a commitment to continuous innovation and have an incredible track record of combining our technologies to increase performance and lower cost. Together we can go even faster to scale and realize Luminar’s vision.”

The OptoGration acquisition is expected to close in the third quarter. The transaction price was not disclosed but does not represent a material impact to Luminar’s cash position or share count.

Optical Neural Processor Integrated onto Image Sensor

Metasurface-based optical CNNs are becoming a hot topic for papers and presentations, for example, here and here. Another metasurface CNN example, by Aydogan Ozcan from UCLA, is shown in the video below:

A recent paper "Metasurface-Enabled On-Chip Multiplexed Diffractive Neural Networks in the Visible" by Xuhao Luo, Yueqiang Hu, Xin Li, Xiangnian Ou, Jiajie Lai, Na Liu, and Huigao Duan from Hunan University (China), University of Stuttgart (Germany), and Max Planck Institute for Solid State Research (Germany) presents a fairly complete system integrated on an image sensor:

"Replacing electrons with photons is a compelling route towards light-speed, highly parallel, and low-power artificial intelligence computing. Recently, all-optical diffractive deep neural networks have been demonstrated. However, the existing architectures often comprise bulky components and, most critically, they cannot mimic the human brain for multitasking. Here, we demonstrate a multi-skilled diffractive neural network based on a metasurface device, which can perform on-chip multi-channel sensing and multitasking at the speed of light in the visible. The metasurface is integrated with a complementary metal oxide semiconductor imaging sensor. A polarization multiplexing scheme of the subwavelength nanostructures is applied to construct a multi-channel classifier framework for simultaneous recognition of digital and fashion items. The areal density of the artificial neurons can reach up to 6.25×10⁶/mm² multiplied by the number of channels. Our platform provides an integrated solution with all-optical on-chip sensing and computing for applications in machine vision, autonomous driving, and precision medicine."

Monday, July 19, 2021

Event-Based Camera Tutorial

Tobi Delbruck delivers an excellent tutorial on event-based cameras prepared for the 2020 Telluride Neuromorphic workshop and ESSCIRC. The pdf file with slides is available here.

Graphene and Other 2D Materials Sensors Review

Nature publishes a review paper "Silicon/2D-material photodetectors: from near-infrared to mid-infrared" by Chaoyue Liu, Jingshu Guo, Laiwen Yu, Jiang Li, Ming Zhang, Huan Li, Yaocheng Shi & Daoxin Dai from Zhejiang University, China.

"Two-dimensional materials (2DMs) have been used widely in constructing photodetectors (PDs) because of their advantages in flexible integration and ultrabroad operation wavelength range. Specifically, 2DM PDs on silicon have attracted much attention because silicon microelectronics and silicon photonics have been developed successfully for many applications. 2DM PDs meet the imperious demand of silicon photonics for low-cost, high-performance, and broadband photodetection. In this work, a review is given of the recent progress of Si/2DM PDs working in the wavelength band from near-infrared to mid-infrared, which are attractive for many applications. The operation mechanisms and the device configurations are summarized in the first part. The waveguide-integrated PDs and the surface-illuminated PDs are then reviewed in detail, respectively. The discussion and outlook for 2DM PDs on silicon are finally given."

Sunday, July 18, 2021

Assorted Videos: ST, Leti, Omnivision, Innoviz, P2020, University of Wisconsin-Madison

ST presents one more use case for its ToF proximity sensors:

CEA-Leti publishes a video about its perovskite-based X-Ray imagers:

Omnivision publishes its CTO Boyd Fowler's interview on automotive in-cabin monitoring: "Automotive in-cabin monitoring is on the rise – not just for drivers, but for passengers as well. Why? Hear from our FutureInSight chief technology officer, Boyd Fowler, in his one-on-one with AutoSens researcher Francis Nedvidek."

Innoviz CEO Omer Keilaf presents his company's LiDAR technology:

American Traffic Safety Services Association (ATSSA) publishes an IEEE P2020 presentation on LED flicker by Brian Deegan from Valeo, team leader of the IEEE P2020 Automotive Image Quality Working Group, LED Flicker Subgroup, and Robin Jenkin, Principal Image Quality Engineer at NVIDIA.

University of Wisconsin-Madison publishes a 1-hour-long presentation by Mohit Gupta on single-photon imaging: