Monday, January 25, 2021

Next Generation EDOF

OSA Optics Express publishes a paper "Depth-of-field engineering in coded aperture imaging" by Mani Ratnam Rai and Joseph Rosen from Ben-Gurion University of the Negev, Israel.

"Extending the depth-of-field (DOF) of an optical imaging system without affecting the other imaging properties has been an important topic of research for a long time. In this work, we propose a new general technique of engineering the DOF of an imaging system beyond just a simple extension of the DOF. Engineering the DOF means in this study that the inherent DOF can be extended to one, or to several, separated different intervals of DOF, with controlled start and end points. Practically, because of the DOF engineering, entire objects in certain separated different input subvolumes are imaged with the same sharpness as if these objects are all in focus. Furthermore, the images from different subvolumes can be laterally shifted, each subvolume in a different shift, relative to their positions in the object space. By doing so, mutual hiding of images can be avoided. The proposed technique is introduced into a system of coded aperture imaging. In other words, the light from the object space is modulated by a coded aperture and recorded into the computer in which the desired image is reconstructed from the recorded pattern. The DOF engineering is done by designing the coded aperture composed of three diffractive elements. One element is a quadratic phase function dictating the start point of the in-focus axial interval and the second element is a quartic phase function which dictates the end point of this interval. Quasi-random coded phase mask is the third element, which enables the digital reconstruction. Multiplexing several sets of diffractive elements, each with a different set of phase coefficients, can yield various axial reconstruction curves. The entire diffractive elements are displayed on a spatial light modulator such that real-time DOF engineering is enabled according to the user needs in the course of the observation.
Experimental verifications of the proposed system with several examples of DOF engineering are presented, where the entire imaging of the observed scene is done by single camera shot."
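The three-element coded aperture described in the abstract can be sketched numerically. In the illustrative Python snippet below, the grid size and the coefficients `a2` and `a4` are made-up placeholders (not values from the paper); it only shows the structure of summing a quadratic term, a quartic term, and a quasi-random coded phase mask, wrapped to [0, 2π) as for display on an SLM:

```python
import numpy as np

def coded_aperture_phase(n=512, a2=1e-3, a4=1e-9, seed=0):
    """Toy coded-aperture phase: quadratic + quartic + quasi-random terms.

    a2 and a4 are illustrative coefficients (not from the paper) that, in
    the paper's scheme, would set the start and end points of the in-focus
    axial interval.
    """
    y, x = np.mgrid[-n//2:n//2, -n//2:n//2].astype(float)
    r2 = x**2 + y**2
    rng = np.random.default_rng(seed)
    quad = a2 * r2                          # quadratic phase -> interval start
    quart = a4 * r2**2                      # quartic phase   -> interval end
    cpm = rng.uniform(0, 2*np.pi, (n, n))   # quasi-random coded phase mask
    return np.mod(quad + quart + cpm, 2*np.pi)  # wrap for SLM display

mask = coded_aperture_phase()
print(mask.shape)
```

In the paper's scheme, multiplexing several such element sets with different coefficients would yield several separated in-focus intervals; here a single set is generated for clarity.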

LiDAR News: Levandowski, Aeva, DENSO, Ouster, Outsight, Argo, Valeo, Hyundai, Velodyne

World IP Review: The outgoing Trump administration has granted a full pardon to Anthony Levandowski, the former LiDAR head at Waymo, who was sentenced to 18 months in prison for stealing trade secrets.

In a memo released on January 20, 2021, the administration says Levandowski “paid a significant price for his actions and plans to devote his talents to advance the public good.”

It also cited a quote from the sentencing judge in the case, who described Levandowski as a “brilliant, groundbreaking engineer that our country needs.”

BusinessWire: Ouster and Outsight partner on the first integrated solution in the lidar industry with embedded pre-processing software. This plug-and-play system is designed to deliver real-time, processed 3D data and to be integrated into any application within minutes. The solution combines Ouster’s high-resolution digital lidar sensors with Outsight’s perception software, which detects, classifies, and tracks objects without relying on machine learning.

Reuters: DENSO partners with Aeva to develop next-generation sensing and perception systems. Together, the companies will advance FMCW LiDAR and bring it to the mass vehicle market.

MSN, GroundTruthAutonomy: Argo.ai presents its new platform featuring 6 LiDARs and 11 cameras. Some of the versions even have a multi-story LiDAR pyramid on the roof:


ETNews reports that Hyundai is contemplating using Valeo SCALA LiDAR in its first autonomous vehicle scheduled to release in 2022. The reason for choosing Valeo is quite interesting:

"This decision is likely based on the fact that Velodyne has yet to reach the level needed to mass-produce LiDAR sensors, even though it is working on the development with Hyundai Mobis, which invested $54.3 million (60 billion KRW) in Velodyne.

Velodyne received a $50 million investment (3% stake) from Hyundai Mobis back in 2019. Although it stands at the top of the global market for LiDAR sensors, supplying automotive LiDAR sensors for research and development purposes is its only experience with automotive LiDAR. It is reported that it has yet to meet Hyundai Motor Group’s requirements due to its lack of experience with mass production of automotive LiDAR sensors. Although it was planning to supply LiDAR sensors for a level 3 autonomous driving system, its plan is now facing a setback.

Velodyne is currently working with Hyundai Mobis at Hyundai Mobis’s Technical Center of Korea in Mabuk and is focusing on securing the ability to mass-produce automotive LiDAR sensors while having them satisfy the reliability that future cars require. The key is for Velodyne to minimize any difference in quality between products during mass production.

Valeo is the only company in the world that has succeeded in mass-producing automotive LiDAR sensors. It supplied “SCALA Gen. 1” to Audi for Audi’s full-size sedan “A8”. SCALA Gen. 1 is a 4-channel LiDAR sensor and it has a detection range of about 150 meters."

Sunday, January 24, 2021

International Image Sensor Society on LinkedIn

International Image Sensor Society (IISS) has opened a LinkedIn page. Feel free to follow it to stay updated on the latest events and announcements:

12-ps Resolution Vernier Time-to-Digital Converter for SPAD Sensor

MDPI paper "A 13-Bit, 12-ps Resolution Vernier Time-to-Digital Converter Based on Dual Delay-Rings for SPAD Image Sensor" by Zunkai Huang, Jinglin Huang, Li Tian, Ning Wang, Yongxin Zhu, Hui Wang, and Songlin Feng from Shanghai Advanced Research Institute, Chinese Academy of Sciences, presents a fairly complex TDC design.

"In this paper, we propose a novel high-performance TDC for a SPAD image sensor. In our design, we first present a pulse-width self-restricted (PWSR) delay element that is capable of providing a steady delay to improve the time precision. Meanwhile, we employ the proposed PWSR delay element to construct a pair of 16-stages vernier delay-rings to effectively enlarge the dynamic range. Moreover, we propose a compact and fast arbiter using a fully symmetric topology to enhance the robustness of the TDC. To validate the performance of the proposed TDC, a prototype 13-bit TDC has been fabricated in the standard 0.18-µm complementary metal–oxide–semiconductor (CMOS) process. The core area is about 200 µm × 180 µm and the total power consumption is nearly 1.6 mW. The proposed TDC achieves a dynamic range of 92.1 ns and a time precision of 11.25 ps. The measured worst integral nonlinearity (INL) and differential nonlinearity (DNL) are respectively 0.65 least-significant-bit (LSB) and 0.38 LSB, and both of them are less than 1 LSB. The experimental results indicate that the proposed TDC is suitable for SPAD-based 3D imaging applications."
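The dual delay-ring Vernier principle quoted above is easy to sketch behaviorally. In the toy Python model below, the per-stage delays (112 ps and 100.75 ps) are invented so that their difference matches the paper's ~11.25 ps LSB; nothing else about the circuit is taken from the paper:

```python
def vernier_tdc(interval_ps, t_slow=112.0, t_fast=100.75, n_bits=13):
    """Behavioral sketch of a Vernier TDC (illustrative delays, not the
    paper's circuit values).

    The start edge enters a slow delay ring (t_slow per stage) and the
    stop edge a fast one (t_fast per stage); the fast edge gains
    (t_slow - t_fast) per step, so the step count at which it catches up
    digitizes the interval with an LSB of t_slow - t_fast.
    """
    lsb = t_slow - t_fast          # 11.25 ps resolution, as in the paper
    code = 0
    residue = interval_ps
    while residue > 0 and code < 2**n_bits - 1:
        residue -= lsb             # the fast ring closes the gap each step
        code += 1
    return code, lsb

code, lsb = vernier_tdc(1000.0)    # digitize a 1 ns interval
print(code, lsb)                   # code is about 1000 / 11.25, i.e. 89
```

With a 13-bit code, the full scale of this toy model is 2^13 × 11.25 ps ≈ 92.2 ns, consistent with the 92.1 ns dynamic range reported in the abstract.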

WDR Sensor with Binary Image Feature

IET Electronics Letters publishes a paper "CMOS image sensor for wide dynamic range feature extraction in machine vision" by Hyeon‐June Kim from Kangwon National University, Korea.

"The proposed pixel structure has two operating modes, the normal and WDR modes. In the normal operating mode, the proposed CIS captures a normal image with high sensitivity. In addition, as a unique function, a bi‐level image is obtained for real‐time FE even if a pixel is saturated in strong illumination conditions. Thus, compared to typical CISs for machine vision, the proposed CIS can reveal object features that are blocked by light in real time. In the WDR operating mode, the proposed CIS produces a WDR image with its corresponding bi‐level image. A prototype CIS was fabricated using a standard 0.35‐μm 2P4M CMOS process with a 320 × 240 format (QVGA) with 10‐μm pitch pixels. At 60 fps, the measured power consumption was 5.98 mW at 3.3 V for pixel readout and 2.8 V for readout circuitry. The dynamic range of 73.1 dB was achieved in the WDR operating mode."
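One way to picture why a bi-level output can survive saturation is to latch a comparator decision early in the integration, before bright pixels clip. The sketch below is only a behavioral illustration of that idea; all constants are invented, and the paper's actual pixel circuit differs:

```python
import numpy as np

def dual_readout(photocurrent, t_exp=1.0, full_well=1000.0,
                 t_latch=0.01, q_th=30.0):
    """Toy model of a dual-output pixel: a linear channel that saturates,
    plus a bi-level bit latched early in integration (all numbers are
    illustrative, not from the paper)."""
    linear = np.clip(photocurrent * t_exp, 0.0, full_well)  # clips when bright
    bilevel = photocurrent * t_latch > q_th  # charge compared early, pre-clip
    return linear, bilevel

I = np.array([100.0, 2000.0, 5000.0])  # three pixels, two very bright
lin, bit = dual_readout(I)
print(lin)  # the two bright pixels both clip to the full well
print(bit)  # but the early-latched bit still separates them
```

The linear channel returns [100, 1000, 1000], merging the two bright pixels, while the bi-level bit is [False, False, True]: features in strongly illuminated regions remain distinguishable, which is the behavior the abstract describes.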

Saturday, January 23, 2021

Smartsens Released More than 30 Tapeouts in 2020

Smartsens reports that it released more than 30 tapeouts in 2020, or one tapeout every 12 days on average. The company also won the "Unicorn Enterprise of the Year Award" at the 2021 China Semiconductor Investment Alliance Annual Conference and China IC Billboard:

CMOS Sensors Design with Synopsys Custom Compiler

While most analog design in the industry is done with Cadence EDA tools, Imasenic CTO Adria Bofill Petit presents an alternative path with Synopsys Custom Compiler:

Friday, January 22, 2021

Call for Papers for Special Issue of 2022 IEEE TED on Solid-State Image Sensors

Over the last decade, solid-state image sensors have sustained impressive technological developments as well as growth in existing markets such as camera phones, automotive cameras, security and industrial cameras and medical/scientific cameras. This has included:
  • sub-micron pixels,
  • high dynamic range sensors for automotive and machine vision,
  • time-of-flight sensors for 3D imaging,
  • 3-dimensional integration (wafer level stacking) for small and efficient imaging systems on a chip,
  • sub-electron read noise pixels and avalanche photodetectors for single-photon imaging,
  • detector structures for non-cooled infrared imaging,
  • and many others.
Solid-state image sensors are also taking off into new applications and markets (IoT, 3D imaging, medical, biometrics and others). Solid-state image sensors are now key components in a vast array of consumer and industrial products. This special issue will provide a focal point for reporting these advancements in an archival journal and serve as an educational tool for the solid-state image sensor community. Previous special issues on solid-state image sensors were published in 1968, 1976, 1985, 1991, 1997, 2003, 2009 and 2016.
Topics of interest include, but are not limited to:
  • Pixel device physics (New devices and structures, Advanced materials, Improved models and scaling, Advanced pixel circuits, Performance enhancement for QE, Dark current, Noise, Charge Multiplication Devices, etc.)
  • Image sensor design and performance (New architectures, Small pixels and Large format arrays, High dynamic range, 3D range capture, Low voltage, Low power, High frame rate readout, Scientific-grade, Single-Photon Sensitivity)
  • Image-sensor-specific peripheral circuits (ADCs and readout electronics, Color and image processing, Smart sensors and computational sensors, System on a chip)
  • Non-visible “image” sensors (Enhanced spectral response e.g., UV, NIR, High energy photon and particle detectors e.g., electrons, X-rays, Ions, Hybrid detectors, THz imagers)
  • Stacked image sensor architectures, fabrication, packaging and manufacturing (two or more tiers, back-side illuminated devices)
  • Miscellaneous topics related to image sensor technology
Submission deadline: July 30, 2021
Publication date: June 2022

GEO Semi Reports the 250 Automotive OEM Design Win Milestone

BusinessWire: GEO Semiconductor announces surpassing a major milestone for the company, 250 Automotive OEM design wins. These camera video processor (CVP) design wins represent engagements with over 30 different Tier 1 suppliers, and over a dozen of the world’s top automotive OEMs. 

“GEO released its first automotive product in 2015 and made the strategic decision to exclusively develop CVPs for automotive from that point forward. In the past 5 years we leveraged our world-class team, our focused product strategy, and our customers to propel us to the position of market leadership,” said Dave Orton, GEO Semiconductor CEO. “The world’s leading automotive companies chose GEO due to our camera, video, and computer vision expertise, and our ability to provide timely cutting-edge solutions for these complex applications.”
 

Thursday, January 21, 2021

Bucket-Brigade Device Inventor Kees Teer Passed Away at the Age of 95

ED: Former Philips Research Labs head Kees Teer has passed away at the age of 95. Kees was the inventor of the bucket-brigade device, the predecessor of the CCD.

Smartsens Claims #1 Spot in CIS Volume for Machine Vision Applications

Smartsens publishes a promotional video on global shutter advantages where it claims to be #1 in terms of machine vision image sensors shipment volume:



Update: Smartsens has updated the video with an explanation of its machine vision market positioning:

Wednesday, January 20, 2021

Samsung Aims to Take a Lead on Automotive CIS Market

PulseNews reports that Samsung's market share in automotive image sensors is currently only 2%, behind ON Semi, Omnivision, and Sony. Samsung intends to increase it and take the lead in automotive sensors.



ams’ NanEye Endoscopic Camera Reverse Engineering

SystemPlus publishes a reverse engineering of ams’ NanEye endoscopic camera:

"To achieve an exceedingly small size and minimal cost, the NanEye relegates memory and image processing functionality off-chip and uses low-voltage differential signaling to stream image data at 38 Mbps. The NanEye includes a wafer-level packaged (WLP) 1 x 1 mm2 249 x 250-pixel front-side illuminated CMOS image sensor designed by AWAIBA (acquired by ams in 2015) and WLO technology developed by Heptagon (acquired by ams in 2016). Through-silicon via technology connects the sensor to the 4-pad solder-masked ball grid array package on the backside, facilitating integration into novel imaging products. The camera can be ordered with several preset optical configurations with an F-stop range of F2.4 – 6.0 and a field of view (FOV) range of 90° – 160°. The version analyzed in this report has an F-stop of F#4.0 and FOV of 120°."
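As a back-of-the-envelope check on the 38 Mbps figure, assuming 10 bits per pixel and zero protocol overhead (both assumptions, not stated in the report), the link supports roughly 61 raw frames per second:

```python
# Back-of-the-envelope frame rate for the NanEye link, assuming 10 bits
# per pixel and zero protocol overhead (both assumptions, not from the
# report).
bitrate = 38e6          # LVDS stream, bits/s (from the report)
pixels = 249 * 250      # sensor format (from the report)
bits_per_pixel = 10     # assumed ADC depth
fps = bitrate / (pixels * bits_per_pixel)
print(round(fps, 1))    # roughly 61 raw frames/s as an upper bound
```

The actual achievable frame rate would be lower once framing and protocol overhead on the LVDS link are accounted for.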

Tuesday, January 19, 2021

Samsung CIS Capacity Expansion Chart

IFNews quotes an HSBC report showing Samsung's CIS capacity expansion chart:

SPAD Super-Resolution Sensing

Nature publishes a joint paper of Bonn University, Germany, and Glasgow University, UK, "Super-resolution time-resolved imaging using computational sensor fusion" by C. Callenberg, A. Lyons, D. den Brok, A. Fatima, A. Turpin, V. Zickus, L. Machesky, J. Whitelaw, D. Faccio, and M. B. Hullin.

"Imaging across both the full transverse spatial and temporal dimensions of a scene with high precision in all three coordinates is key to applications ranging from LIDAR to fluorescence lifetime imaging. However, compromises that sacrifice, for example, spatial resolution at the expense of temporal resolution are often required, in particular when the full 3-dimensional data cube is required in short acquisition times. We introduce a sensor fusion approach that combines data having low-spatial resolution but high temporal precision gathered with a single-photon-avalanche-diode (SPAD) array with data that has high spatial but no temporal resolution, such as that acquired with a standard CMOS camera. Our method, based on blurring the image on the SPAD array and computational sensor fusion, reconstructs time-resolved images at significantly higher spatial resolution than the SPAD input, upsampling numerical data by a factor 12×12, and demonstrating up to 4×4 upsampling of experimental data. We demonstrate the technique for both LIDAR applications and FLIM of fluorescent cancer cells. This technique paves the way to high spatial resolution SPAD imaging or, equivalently, FLIM imaging with conventional microscopes at frame rates accelerated by more than an order of magnitude."
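A heavily simplified sketch of the fusion idea: spread each low-resolution SPAD time histogram over the co-located high-resolution CMOS pixels in proportion to their intensity. This toy Python version (the function and the proportional weighting are mine; the paper's actual method blurs the image on the SPAD array and solves a computational reconstruction) just shows the data shapes involved:

```python
import numpy as np

def fuse(spad_hist, intensity, up=4):
    """Toy fusion in the spirit of the paper: redistribute each SPAD
    macro-pixel's time histogram over the co-located high-resolution CMOS
    pixels in proportion to their intensity.

    spad_hist: (H, W, T) low-res time histograms
    intensity: (H*up, W*up) high-res, time-integrated image
    returns:   (H*up, W*up, T) upsampled time-resolved estimate
    """
    H, W, T = spad_hist.shape
    out = np.zeros((H * up, W * up, T))
    for i in range(H):
        for j in range(W):
            patch = intensity[i*up:(i+1)*up, j*up:(j+1)*up]
            w = patch / max(patch.sum(), 1e-12)  # spatial weights per block
            # same temporal profile everywhere in the block, scaled by w
            out[i*up:(i+1)*up, j*up:(j+1)*up, :] = w[:, :, None] * spad_hist[i, j, :]
    return out
```

By construction, summing the output over each up×up block per time bin recovers the original SPAD histogram, so photon counts are conserved; the paper's reconstruction goes further and recovers genuinely distinct temporal profiles within each block.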

Monday, January 18, 2021

Brigates Prepares $207M IPO at Shanghai Stock Exchange

EastMoney, CapitalWhale, ElecFans: Yet another China-based image sensor company prepares an IPO at the Shanghai Stock Exchange - Brigates (Chinese name: Ruixinwei, Ruixin Micro-Tech Innovation, or Kunshan Ruixin).

"The IPO of the Science and Technology Innovation Board intends to raise 1.347 billion yuan for the R&D and industrialization projects of high-end image sensor chips and movement, as well as development and technology reserve funds.

So, what is the advantage of Ruixinwei?

The prospectus declares that the company's technologies and products in the field of high-end image chip customization and high-sensitivity camera cores have reached a domestically leading and internationally advanced level. It claims to "have a number of domestically leading and internationally advanced core technologies", to have "broken the technology monopoly of foreign giants", that "the company has become one of the few companies in the world that master ECCD technology", that it "has replaced and surpassed similar foreign products, and filled many gaps in the field of domestic image sensors", and that it is "one of a few global suppliers" and "in a dominant position."

The Shanghai Stock Exchange took note of the company's above statement and requested the company to list the basis for its described market position.

In the reply letter, Ruixin Micro stated that it has revised "replaces and surpasses similar foreign products" in the prospectus to "partially replaces similar foreign products", and has likewise changed "achieves a subversive replacement of vacuum analog signal device technology" to "realizes the renewal of vacuum analog signal device technology".

For the other statements, Ruixinwei believes they are well-founded. In particular, the company maintains that it is "one of the few companies in the world that master ECCD technology."

“The MCCD and ECCD technology independently developed by Ruixin Micro is helpful to improve the imaging quality of the image sensor.”

“At present, CMOS image sensors are the mainstream technology route, accounting for nearly 90% of the image sensor market. Ruixin Micro is taking a new technological path: it has developed a high-sensitivity camera core based on its MCCD technology and has achieved industrialization. However, ECCD process development is very difficult, and there are currently relatively few publicly available materials."

Luminar CES Presentation Compares LiDAR Approaches

Luminar publishes its presentations from CES 2021. The first one, by Matt Weed, compares LiDAR technologies:


In its investor presentation, Luminar also shows its single-pixel InGaAs sensor integrated onto a Si ROIC and costing $3:

Sunday, January 17, 2021

Modeling of Current-Assisted Photonic Demodulator for ToF Sensor

Hong Kong University publishes a video presentation "Compact Modeling of Current-Assisted Photonic Demodulator for Time-of-Flight CMOS Image Sensor" by Cristine Jin Delos Santos. The work has won Best Student Paper Award at IEEE Student Symposium on Electron Devices and Solid-State Circuits (s-EDSSC) in October 2020.

Saturday, January 16, 2021

Review of SPAD Photon-to-Digital Converters

MDPI paper "3D Photon-to-Digital Converter for Radiation Instrumentation: Motivation and Future Works" by Jean-François Pratte, Frédéric Nolet, Samuel Parent, Frédéric Vachon, Nicolas Roy, Tommy Rossignol, Keven Deslandes, Henri Dautet, Réjean Fontaine, and Serge A. Charlebois from Université de Sherbrooke, Canada, reviews the new opportunities coming from SPAD stacked chip integration.

"Analog and digital SiPMs have revolutionized the field of radiation instrumentation by replacing both avalanche photodiodes and photomultiplier tubes in many applications. However, multiple applications require greater performance than the current SiPMs are capable of, for example timing resolution for time-of-flight positron emission tomography and time-of-flight computed tomography, and mitigation of the large output capacitance of SiPM array for large-scale time projection chambers for liquid argon and liquid xenon experiments. In this contribution, the case will be made that 3D photon-to-digital converters, also known as 3D digital SiPMs, have a potentially superior performance over analog and 2D digital SiPMs. A review of 3D photon-to-digital converters is presented along with various applications where they can make a difference, such as time-of-flight medical imaging systems and low-background experiments in noble liquids. Finally, a review of the key design choices that must be made to obtain an optimized 3D photon-to-digital converter for radiation instrumentation, more specifically the single-photon avalanche diode array, the CMOS technology, the quenching circuit, the time-to-digital converter, the digital signal processing and the system level integration, are discussed in detail."