Friday, April 29, 2022

Color sensing for nano-vision-sensors

Ningxin Li et al. from Georgia State University have published a new article titled "Van der Waals Semiconductor Empowered Vertical Color Sensor" in ACS Nano.

Abstract
Biomimetic artificial vision is receiving significant attention nowadays, particularly for the development of neuromorphic electronic devices, artificial intelligence, and microrobotics. Nevertheless, color recognition, the most critical vision function, is missing from current research due to the difficulty of downscaling the prevailing color sensing devices. Conventional color sensors typically adopt a lateral color sensing channel layout and consume a large amount of physical space, whereas compact designs suffer from unsatisfactory color detection accuracy. In this work, we report a van der Waals semiconductor-empowered vertical color sensing structure with an emphasis on a compact device profile and precise color recognition capability. More attractively, we endow the color sensor hardware with the function of chromatic aberration correction, which can simplify the design of the optical lens system and, in turn, further downscale artificial vision systems. Also, the dimensions of a multiple-pixel prototype device in our study confirm the scalability and practical potential of our developed device architecture toward the above applications.





“This work is the first step toward our final destination: to develop a micro-scale camera for microrobots,” says assistant professor of Physics Sidong Lei, who led the research. “We illustrate the fundamental principle and feasibility to construct this new type of image sensor with emphasis on miniaturization.”

Press: https://scitechdaily.com/new-electric-eye-neuromorphic-artificial-vision-device-developed-using-nanotechnology/

Thursday, April 28, 2022

Samsung making a new larger ISOCELL camera sensor?

Samsung is the world’s second-biggest mobile camera sensor maker, and its sensors are used by almost every smartphone brand. Over the past couple of years, the South Korean firm has launched several large camera sensors, including the ISOCELL GN1 and the ISOCELL GN2. This year, it has developed one more large ISOCELL camera sensor.

The company has developed the ISOCELL GNV camera sensor, which will be used in a Vivo smartphone. The ISOCELL GNV is reportedly custom-made for Vivo phones and has a 1/1.3-inch optical format. It is most likely a 50MP sensor, similar to the ISOCELL GN1, ISOCELL GN2, and ISOCELL GN5. It will act as the Vivo X80 Pro+’s primary camera and features a gimbal-like OIS system.



The ISOCELL GNV is likely a slightly modified version of Samsung’s ISOCELL GN1. The Vivo smartphone has three other cameras, including a 48MP/50MP ultrawide camera (Sony IMX sensor), a 12MP telephoto camera with 2x optical zoom and OIS, and an 8MP telephoto camera with 5x optical zoom and OIS. The phone can record 8K videos using the primary camera and up to 4K 60fps videos using the rest of its cameras. On the front, it could have a 44MP selfie camera.

The phone also uses Vivo’s custom ISP (Image Signal Processor), named V1+, which has been made in close collaboration with MediaTek. The new chip brings 16% higher brightness and 12% better white balance to images captured in low-light conditions. Prominent sections of an image can see up to 350% higher brightness, with lower noise and better colors.

The rest of the phone’s specifications include a 6.78-inch 120Hz Super AMOLED LTPO display, Snapdragon 8 Gen 1 processor, 8GB/12GB RAM, 128GB/256GB storage, 4,700mAh battery, 80W fast wired charging, 50W fast wireless charging, stereo speakers, and an IP68 rating for dust and water resistance.

https://www.sammobile.com/news/samsung-isocell-gnv-camera-sensor-coming/

Wednesday, April 27, 2022

90-min Tutorial on Single Photon Detectors

Krister Shalm of the National Institute of Standards and Technology (NIST) presented a tutorial on single-photon detectors at the 2013 QCrypt Conference in August 2013. http://2013.qcrypt.net

This is from a while back but an excellent educational resource nevertheless!

The video is roughly 90 minutes long but has several gaps that can be skipped. Or play it at >1x speed!




Tuesday, April 26, 2022

Embedded Vision Summit 2022

The Edge AI and Vision Alliance, a 118-company worldwide industry partnership, is organizing the 2022 Embedded Vision Summit, May 16-19 at the Santa Clara Convention Center in Santa Clara, California.

The premier conference and tradeshow for practical, deployable computer vision and edge AI, the Summit focuses on empowering product creators to bring perceptual intelligence to products. This year’s Summit will attract more than 1,000 innovators and feature 90+ expert speakers and 60+ exhibitors across four days of presentations, exhibits and deep-dive sessions. Registration is now open.

Highlights of this year’s program include:
  • Keynote speaker Prof. Ryad Benosman of University of Pittsburgh and the CMU Robotics Institute will speak on “Event-based Neuromorphic Perception and Computation: The Future of Sensing and AI”
  • General session speakers include:
      • Zach Shelby, co-founder and CEO of Edge Impulse, speaking on “How Do We Enable Edge ML Everywhere? Data, Reliability, and Silicon Flexibility”
      • Ziad Asghar, Vice President of Product Management at Qualcomm, speaking on “Powering the Connected Intelligent Edge and the Future of On-Device AI”
  • 90+ sessions across four tracks—Fundamentals, Technical Insights, Business Insights, and Enabling Technologies
  • 60+ exhibitors including Premier Sponsors Edge Impulse and Qualcomm, Platinum Sponsors FlexLogix and Intel, and Gold Sponsors Arm, Arrow, Avnet, BDTi, City of Oulu, Cadence, Hailo, Lattice, Luxonis, Network Optics, Nota, Perceive, STMicroelectronics, Synaptics and AMD Xilinx
  • Deep Dive Sessions — offering opportunities to explore cutting-edge topics in-depth — presented by Edge Impulse, Qualcomm, Intel, and Synopsys

“We are delighted to return to being in-person for the Embedded Vision Summit after two years of online Summits,” said Jeff Bier, founder of the Edge AI and Vision Alliance. “Innovation in visual and edge AI continues at an astonishing pace, so it’s more important than ever to be able to see, in one place, the myriad of practical applications, use cases and building-block technologies. Attendees with diverse technical and business backgrounds tell us this is the one event where they get a complete picture and can rapidly sort out the hype from what’s working. A whopping 98% of attendees would recommend attending to a colleague.”
Registration is now open at https://embeddedvisionsummit.com.

The Embedded Vision Summit is operated by the Edge AI and Vision Alliance, a worldwide industry partnership bringing together technology providers and end-product companies to accelerate the adoption of edge AI and vision in products. More at https://edge-ai-vision.com.


EETimes Article

EETimes has published a "teaser" article written by the general chair of this year's summit.

Half a billion years ago something remarkable occurred: an astonishing, sudden increase in new species of organisms. Paleontologists call it the Cambrian Explosion, and many of the animals on the planet today trace their lineage back to this event.

A similar thing is happening in processors for embedded vision and artificial intelligence (AI) today, and nowhere will that be more evident than at the Embedded Vision Summit, which will be an in-person event held in Santa Clara, California, from May 16–19. The Summit focuses on practical know-how for product creators incorporating AI and vision in their products. These products demand AI processors that balance conflicting needs for high performance, low power, and cost sensitivity. The staggering number of embedded AI chips that will be on display at the Summit underscores the industry’s response to this demand. While the sheer number of processors targeting computer vision and ML is overwhelming, there are some natural groupings that make the field easier to comprehend. Here are some themes we’re seeing.

First, some processor suppliers are thinking about how to best serve applications that simultaneously apply machine learning (ML) to data from diverse sensor types — for example, audio and video. Synaptics’ Katana low-power processor, for example, fuses inputs from a variety of sensors, including vision, sound, and environmental. Xperi’s talk on smart toys for the future touches on this, as well.

Second, a subset of processor suppliers are focused on driving power and cost down to a minimum. This is interesting because it enables new applications. For example, Cadence will be presenting on additions to their Tensilica processor portfolio that enable always-on AI applications. Arm will be presenting low-power vision and ML use cases based on their Cortex-M series of processors. And Qualcomm will be covering tools for creating low-power computer vision apps on their Snapdragon family.

Third, although many processor suppliers are focused mainly or exclusively on ML, a few are addressing other kinds of algorithms typically used in conjunction with deep neural networks, such as classical computer vision and image processing.  A great example is quadric, whose new q16 processor is claimed to excel at a wide range of algorithms, including both ML and conventional computer vision.

Finally, an entirely new species seems to be coming to the fore: neuromorphic processors. Neuromorphic computing refers to approaches that mimic the way the brain processes information. For example, biological vision systems process events in the field of view, as opposed to classical computer vision approaches that typically capture and process all the pixels in a scene at a fixed frame rate that has no relation to the source of the visual information. The Summit’s keynote talk, “Event-based Neuromorphic Perception and Computation: The Future of Sensing and AI” by Prof. Ryad Benosman, will give an overview of the advantages to be gained by neuromorphic approaches. Opteran will be presenting on their neuromorphic processing approach to enable vastly improved vision and autonomy, the design of which was inspired by insect brains.

Whatever your application is, and whatever your requirements are, somewhere out there is an embedded AI or vision processor that’s the best fit for you. At the Summit, you’ll be able to learn about many of them, and speak with the innovative companies developing them. Come check them out, and be sure to check back in 10 years — when we will see how many of 2032’s AI processors trace their lineage to this modern-day Cambrian Explosion!

—Jeff Bier is the president of consulting firm BDTI, founder of the Edge AI and Vision Alliance, and the general chair of the Embedded Vision Summit.

About the Edge AI and Vision Alliance

Founded in 2011, the Edge AI and Vision Alliance is a worldwide industry partnership that brings together technology providers who are enabling innovative and practical applications for edge AI and computer vision. Its 100+ Member companies include suppliers of processors, sensors, software and services.

The mission of the Alliance is to accelerate the adoption of edge AI and vision technology by:
  • Inspiring and empowering product creators to incorporate AI and vision technology into new products and applications
  • Helping Member companies achieve success with edge AI and vision technology by:
      • Building a vibrant AI and vision ecosystem by bringing together suppliers, end-product designers, and partners
      • Delivering timely insights into AI and vision market research, technology trends, standards and application requirements
      • Assisting in understanding and overcoming the challenges of incorporating AI in their products and businesses

Monday, April 25, 2022

Perspective article on solar-blind UV photodetectors

A research group from the Indian Institute of Science has published a perspective article titled "The road ahead for ultrawide bandgap solar-blind UV photodetectors" in the Journal of Applied Physics.

Abstract:
This perspective seeks to understand and assess why ultrawide bandgap (UWBG) semiconductor-based deep-UV photodetectors have not yet found any noticeable presence in real-world applications despite riding on more than two decades of extensive materials and device research. Keeping the discussion confined to photodetectors based on epitaxial AlGaN and Ga2O3, a broad assessment of the device performance in terms of its various parameters is done vis-à-vis the dependence on the material quality. We introduce a new comprehensive figure of merit (CFOM) to benchmark photodetectors by accounting for their three most critical performance parameters, i.e., gain, noise, and bandwidth. We infer from CFOM that, purely from the point of view of device performance, AlGaN detectors do not have any serious shortcoming that is holding them back from entering the market. We try to identify the gaps that exist in the research landscape of AlGaN and Ga2O3 solar-blind photodetectors and also argue that merely improving the material/structural quality and device performance would not help this technology transition out of the academic realm. Instead of providing a review, this Perspective asks the hard question of whether UWBG solar-blind detectors will ever find real-world applications in a noticeable way and whether these devices will ever be used in space-borne platforms for deep-space imaging, for instance.



The chain of UWBG detector technology development: A general status.



State-of-the-art n-type (right axis) and p-type (left axis) conductivity values in epitaxial AlGaN as a function of the bandgap of the ternary alloy, as reported in the literature.


Scatter plot of the product of UV-to-visible rejection ratio and gain of various types of AlGaN solar-blind photodetectors, as published in the literature, benchmarked with a Hamamatsu commercial-grade solar-blind photomultiplier tube.


A possible blown-up schematic of deep-UV imaging assembly based on AlGaN photodetector, which can significantly cut down on weight, footprint, and complexities such as high voltage requirement.


A qualitative plot of the current status of solar-blind UV photodetectors vis-à-vis their approximate TRL levels for AlGaN, β-Ga2O3, α-Ga2O3, and ɛ-Ga2O3.



Friday, April 22, 2022

Videos du jour - CICC, PhotonicsNXT and EPIC

IEEE CICC 2022 best paper candidates present their work

Solid-State dToF LiDAR System Using an Eight-Channel Addressable, 20W/Ch Transmitter, and a 128x128 SPAD Receiver with SNR-Based Pixel Binning and Resolution Upscaling
Shenglong Zhuo, Lei Zhao, Tao Xia, Lei Wang, Shi Shi, Yifan Wu, Chang Liu, et al.
Fudan University, PhotonIC Technologies, Southern Univ. of S&T

A 93.7%-Efficiency 5-Ratio Switched-Photovoltaic DC-DC Converter
Sandeep Reddy Kukunuru, Yashar Naeimi, Loai Salem
University of California, Santa Barbara

A 23-37GHz Autonomous Two-Dimensional MIMO Receiver Array with Rapid Full-FoV Spatial Filtering for Unknown Interference Suppression
Boce Lin, Tzu-Yuan Huang, Amr Ahmed, Min-Yu Huang, Hua Wang
Georgia Institute of Technology


PhotonicsNXT Fall Summit keynote discusses automotive lidar

This keynote session by Pierrick Boulay of Yole Developpement at the PhotonicsNXT Fall Summit held on October 28, 2021 provides an overview of the lidar ecosystem and shows how lidar is being used within the auto industry for ranging and imaging.




EPIC Online Technology Meeting on Single Photon Sources and Detectors

The power hidden in one single photon is unprecedented. But we need to find ways to harness that power. This meeting will discuss cutting-edge technologies paving the way for versatile and efficient pure single-photon sources and detection schemes with low dark count rates, high saturation levels, and high detection efficiencies. This meeting will gather the key players in the photonic industry pushing the development of these technologies towards commercializing products that harness the intrinsic properties of photons.



Thursday, April 21, 2022

Wide field-of-view imaging with a metalens

A research group from Nanjing University has published a new paper titled "Planar wide-angle-imaging camera enabled by metalens array" in a recent issue of Optica.

Abstract:
Wide-angle imaging is an important function in photography and projection, but it also places high demands on the design of the imaging components of a camera. To eliminate the coma caused by the focusing of large-angle incident light, traditional wide-angle camera lenses are composed of complex optical components. Here, we propose a planar camera for wide-angle imaging with a silicon nitride metalens array mounted on a CMOS image sensor. By carefully designing proper phase profiles for metalenses with intentionally introduced shifted phase terms, the whole lens array is capable of capturing a scene with a large viewing angle and negligible distortion or aberrations. After a stitching process, we obtained a large viewing angle image with a range of >120 degrees using a compact planar camera. Our device demonstrates the advantages of metalenses in flexible phase design and compact integration, and the prospects for future imaging technology.
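The abstract does not spell out the phase profiles, but the idea of a "shifted phase term" can be illustrated with the standard hyperbolic metalens phase plus a linear gradient that assigns a lens its own off-axis design angle. Below is a minimal Python sketch under that assumption; the wavelength, focal length, aperture and design angle are arbitrary placeholders rather than values from the paper.

```python
import numpy as np

# Illustrative sketch only: hyperbolic focusing phase plus a linear ("shifted") term
# that steers the design angle off-axis. Not the Nanjing University design; all
# numbers below are assumptions.
wavelength = 532e-9                  # m (assumed)
focal_length = 500e-6                # m (assumed)
design_angle = np.deg2rad(57.5)      # off-axis design angle, as in the figure caption

x = np.linspace(-150e-6, 150e-6, 301)
X, Y = np.meshgrid(x, x)

k = 2 * np.pi / wavelength
phi_focus = -k * (np.sqrt(X**2 + Y**2 + focal_length**2) - focal_length)  # on-axis lens
phi_shift = k * np.sin(design_angle) * X          # linear gradient for an oblique view
phi_total = np.mod(phi_focus + phi_shift, 2 * np.pi)  # wrapped phase, to be mapped to nanopillars
print(phi_total.shape, float(phi_total.min()), float(phi_total.max()))
```

Each metalens in the array would get a different design angle, and the sub-images are then stitched as described in the abstract.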


Metalens array mounted directly on a CMOS camera




Schematic diagram of the principle and device architecture. (a) Schematics of wide-angle imaging by MIWC. Zoom-in figure shows the imaging principle with each part of the wide-angle image clearly imaged separately by each metalens. (b) Photograph of MIWC. The metalens array can be seen in the middle of the enlarged figure on the right. (c) Architecture of MIWC. The metalens array is integrated directly on the CMOS image sensor (DMM 27UJ003-ML) and fixed by an optically clear adhesive (OCA) tape (Tesa, 69402).



Experimental wide-angle imaging results by MIWC. (a) Projected “NANJING UNIVERSITY” on the curved screen covers a viewing angle of 120° and then is imaged by MIWC. (b) Imaging results and corresponding mask functions of lenses with designed angles of −57.5°, 0°, 57.5°. (d) Imaging result of a traditional metalens showing limited field of view. (e) Final imaging result of MIWC by processing with mask functions and sub-images, which shows three times larger FOV compared with the traditional lens.

Press release: https://phys.org/news/2022-04-miniature-wide-angle-camera-flat-metalenses.html

Wednesday, April 20, 2022

PhD Thesis on Analog Signal Processing for CMOS Image Sensors

The very first PhD thesis that came out of Albert Theuwissen's group at TU Delft is now freely available as a pdf. This seems like a great educational resource for people interested in image sensors.

Direct download link: https://repository.tudelft.nl/islandora/object/uuid:2fbc1f51-7784-4bcd-85ab-70fc193c5ce9/datastream/OBJ/download

Title: Analog Signal Processing for CMOS Image Sensors
Author: Martijn Snoeij
Year: 2007

Abstract: 
This thesis describes the development of low-noise power-efficient analog interface circuitry for CMOS image sensors. It focuses on improving two aspects of the interface circuitry: firstly, lowering the noise in the front-end readout circuit, and secondly the realization of more power-efficient analog-to-digital converters (ADCs) that are capable of reading out high-resolution imaging arrays. 

Chapter 2 provides an overview of the analog signal processing chain in conventional, commercially-available CMOS imagers. First of all, the different photo-sensitive elements that form the input to the analog signal chain are briefly discussed. This is followed by a discussion of the analog signal processing chain itself, which is divided into two parts. First, the analog front-end, consisting of in-pixel circuitry and column-level circuitry, is discussed. Second, the analog back-end, consisting of variable gain amplification and A/D conversion, is discussed. Finally, a brief overview of advanced readout circuit techniques is provided.

In chapter 3, the performance of the analog front-end is analyzed in detail. It is shown that its noise performance is the most important parameter of the front-end. An overview of front-end noise sources is given and their relative importance is discussed. It will be shown that 1/f noise is the limiting noise source in current CMOS imagers. A relatively unknown 1/f noise reduction technique, called switched-biasing or large signal excitation (LSE), is introduced and its applicability to CMOS imagers is explored. Measurement results on this 1/f noise reduction technique are presented. Finally, at the end of the chapter, a preliminary conclusion on CMOS imager noise performance is presented. 

The main function of the back-end analog signal chain is analog-to-digital conversion, which is described in chapter 4. First of all, the conventional approach of a single chip-level ADC is compared to a massively-parallel, column-level ADC, and the advantages of the latter will be shown. Next, the existing column-level ADC architectures will be briefly discussed, in particular the column-parallel single-slope ADC. Furthermore, a new architecture, the multiple-ramp single-slope ADC will be proposed. Finally, two circuit techniques are introduced that can improve ADC performance. Firstly, it will be shown that the presence of photon shot noise in an imager can be used to significantly decrease ADC power consumption. Secondly, a column FPN reduction technique, called Dynamic Column Switching (DCS), is introduced.
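The idea that photon shot noise can be exploited to cut ADC power follows from shot noise growing as the square root of the signal, so the quantization step may grow with signal level without adding visible noise. The sketch below only illustrates that reasoning; the numbers are assumed and the thesis' actual multiple-ramp single-slope implementation is not reproduced here.

```python
import numpy as np

# Illustration (assumed numbers): allow the LSB to track the analog noise floor.
full_well = 10000      # electrons (assumed)
read_noise = 2.0       # electrons rms (assumed)

signal = np.array([10.0, 100.0, 1000.0, 10000.0])        # electrons
total_noise = np.sqrt(signal + read_noise**2)             # shot noise + read noise, rms

# Keep quantization noise (LSB/sqrt(12)) at about half the analog noise:
allowed_lsb = 0.5 * total_noise * np.sqrt(12.0)

for s, n, lsb in zip(signal, total_noise, allowed_lsb):
    print(f"signal {s:7.0f} e-, noise {n:6.1f} e-, allowed LSB ~ {lsb:6.1f} e-")

# A uniform ADC sized for the darkest case needs ~full_well / allowed_lsb[0] levels,
# whereas a step that grows with signal needs far fewer ramp steps, and hence less power.
print("uniform levels needed:", int(full_well / allowed_lsb[0]))
```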

Chapters 5 and 6 present two realisations of imagers with column-level ADCs. In chapter 5, a CMOS imager with single-slope ADC is presented that consumes only 3.2µW per column. The circuit details of the comparator achieving this low power consumption are described, as well as the digital column circuitry. The ADC uses the dynamic column switching technique introduced in chapter 4 to reduce the perceptional effects of column FPN. Chapter 6 presents an imager with a multiple-ramp single-slope architecture, which was proposed in chapter 4. The column comparator used in this design is taken from a commercially available CMOS imager. The multiple ramps are generated on chip with a low power ladder DAC structure. The ADC uses an auto-calibration scheme to compensate for offset and delay of the ramp drivers.

Tuesday, April 19, 2022

Google AI Blog article on Lidar-Camera Fusion

A team from Google Research has a new blog article on fusing Lidar and camera data for 3D object detection. The motivating problem here seems to be the issue of misalignment between 3D LiDAR data and 2D camera data.


The blog discusses the team's forthcoming paper titled "DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection" which will be presented at the IEEE/CVF Computer Vision and Pattern Recognition (CVPR) conference in June 2022. A preprint of the paper is available here.

Some excerpts from the blog and the associated paper:

LiDAR and visual cameras are two types of complementary sensors used for 3D object detection in autonomous vehicles and robots. To develop robust 3D object detection models, most methods need to augment and transform the data from both modalities, making the accurate alignment of the features challenging.

Existing algorithms for fusing LiDAR and camera outputs generally follow two approaches: input-level fusion, where the features are fused at an early stage by decorating points in the LiDAR point cloud with the corresponding camera features, or mid-level fusion, where features are extracted from both sensors and then combined. Despite realizing the importance of effective alignment, these methods struggle to efficiently process the common scenario where features are enhanced and aggregated before fusion. This indicates that effectively fusing the signals from both sensors might not be straightforward and remains challenging.



In our CVPR 2022 paper, “DeepFusion: LiDAR-Camera Deep Fusion for Multi-Modal 3D Object Detection”, we introduce a fully end-to-end multi-modal 3D detection framework called DeepFusion that applies a simple yet effective deep-level feature fusion strategy to unify the signals from the two sensing modalities. Unlike conventional approaches that decorate raw LiDAR point clouds with manually selected camera features, our method fuses the deep camera and deep LiDAR features in an end-to-end framework. We begin by describing two novel techniques, InverseAug and LearnableAlign, that improve the quality of feature alignment and are applied to the development of DeepFusion. We then demonstrate state-of-the-art performance by DeepFusion on the Waymo Open Dataset, one of the largest datasets for automotive 3D object detection.
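The blog post stays at the block-diagram level; as a generic illustration of what "deep-level feature fusion" with learned alignment can look like, the toy NumPy sketch below has each LiDAR feature attend over camera features before concatenation. The tensor sizes and weights are made up, and this is a pattern sketch, not the authors' LearnableAlign implementation.

```python
import numpy as np

# Toy cross-attention fusion: LiDAR cells query the camera feature map (assumed shapes).
rng = np.random.default_rng(0)
d = 32                                      # feature width (assumed)
lidar_feats = rng.normal(size=(100, d))     # 100 LiDAR pillars/voxels (hypothetical)
cam_feats = rng.normal(size=(400, d))       # 400 camera feature vectors (hypothetical)

Wq, Wk, Wv = (rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(3))

q = lidar_feats @ Wq                        # queries from LiDAR features
k, v = cam_feats @ Wk, cam_feats @ Wv       # keys/values from camera features
scores = q @ k.T / np.sqrt(d)
scores = np.exp(scores - scores.max(axis=1, keepdims=True))
attn = scores / scores.sum(axis=1, keepdims=True)           # softmax over camera locations

aligned_cam = attn @ v                                       # camera context aligned per LiDAR cell
fused = np.concatenate([lidar_feats, aligned_cam], axis=1)   # deep-level fusion
print(fused.shape)  # (100, 64)
```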






We evaluate DeepFusion on the Waymo Open Dataset, one of the largest 3D detection challenges for autonomous cars, using the Average Precision with Heading (APH) metric under difficulty level 2, the default metric to rank a model’s performance on the leaderboard. Among the 70 participating teams all over the world, the DeepFusion single and ensemble models achieve state-of-the-art performance in their corresponding categories.







Monday, April 18, 2022

Quantum Dot Photodiodes for SWIR Cameras

A research team from Ghent University in Belgium has published an article titled "Colloidal III–V Quantum Dot Photodiodes for Short-Wave Infrared Photodetection".

Abstract: Short-wave infrared (SWIR) image sensors based on colloidal quantum dots (QDs) are characterized by low cost, small pixel pitch, and spectral tunability. Adoption of QD-SWIR imagers is, however, hampered by a reliance on restricted elements such as Pb and Hg. Here, QD photodiodes, the central element of a QD image sensor, made from non-restricted In(As,P) QDs that operate at wavelengths up to 1400 nm are demonstrated. Three different In(As,P) QD batches that are made using a scalable, one-size-one-batch reaction and feature a band-edge absorption at 1140, 1270, and 1400 nm are implemented. These QDs are post-processed to obtain In(As,P) nanocolloids stabilized by short-chain ligands, from which semiconducting films of n-In(As,P) are formed through spincoating. For all three sizes, sandwiching such films between p-NiO as the hole transport layer and Nb:TiO2 as the electron transport layer yields In(As,P) QD photodiodes that exhibit best internal quantum efficiencies at the QD band gap of 46±5% and are sensitive for SWIR light up to 1400 nm.



a) Normalized absorbance spectra of the three QD batches (red) measured in tetrachloroethylene (TCE) before and (blue) dimethylformamide (DMF) after phase transfer. For each set of spectra, the vertical line indicates the maximum absorbance of the band-edge transition at 1140, 1270, and 1400 nm, respectively. The spectra after ligand exchange have been offset for clarity. b) (top) Photograph of the extraction of QDs from (top phase) octane to (bottom phase) DMF and (bottom) representation of the phase transfer chemistry when using 3-mercapto-1,2-propanediol (MPD) and butylamine (n-BuNH2) as phase transfer agents, indicating several reactions that bring about the replacement of the as-synthesized ligand shell of chloride and oleylamine by deprotonated MPD and n-BuNH2. c) X-ray photoelectron spectra (red) before and (blue) after ligand exchange in different energy ranges, showing the disappearance of chloride, the appearance of sulfide and the preservation of the In:As ratio after ligand exchange.


a) Schematic of the In(As, P) QD field effect transistor, consisting of a spincoated film of ligand exchanged QDs on top of cross-fingered source and drain electrodes and separated from the gate electrode by a thermally grown oxide. b) Transfer characteristics of the field effect transistor at a source–drain voltage of 5 V.



a) (top) Energy level diagram of the In(As,P) QDPD stack used here. The diagram was constructed by combining UPS results for the 1140 In(As,P) QD film and literature data for the contact materials.[39-43] (bottom) Schematic of the QDPD stack. b–d) Dark and photocurrent densities under white-light illumination of In(As,P) QDPDs for specific absorber layers as indicated. e) Photocurrent density as a function of white light illumination power in log–log scale. The reference power is 114.7 mW cm^−2.



a–c) External quantum efficiency spectra for the different In(As,P) QDPDs as indicated, recorded at a reverse bias of −2, −3, and −4 V. The absorbance spectrum of the corresponding In(As,P) QD batch is added in each graph for comparison. d) Normalized transient photocurrent response of the different In(As,P) QDPDs following a 400 µs step illumination. Rise and fall times have been indicated by the dominant fast time constant obtained from a multi-exponential fit of the transient.

Full article (open access): https://onlinelibrary.wiley.com/doi/10.1002/advs.202200844

Friday, April 15, 2022

PreAct Technologies and ESPROS Photonics Collaborate on Next-Generation Sensing Solutions

PreAct Technologies, an Oregon-based developer of near-field flash LiDAR technology, and ESPROS Photonics, a Swiss leader in the design and production of time-of-flight chips and 3D cameras, recently announced a collaboration agreement to develop new flash LiDAR technologies for specific use cases in automotive, trucking, industrial automation and robotics. The collaboration combines the dynamic abilities of PreAct’s software-definable flash LiDAR and the versatile and ultra-ambient-light-robust time-of-flight technology from ESPROS to create next-generation near-field sensing solutions.

“Our partnership with ESPROS is a major milestone for our company in our goal to provide high performance, software-definable sensors to meet the needs of customers across various industries,” said Paul Drysch, CEO and co-founder of PreAct Technologies. “Looking to the future, vehicles across all industries will be software-defined, and our flash LiDAR solutions are built to support that infrastructure from the beginning.”

The automotive and trucking industries continue to rapidly integrate ADAS and self-driving capabilities into vehicles, and as NHTSA (the U.S. National Highway Traffic Safety Administration) has just revised the requirements for human control in fully automated vehicles, the need for ultra-precise, high performance sensors is paramount to ensuring safe autonomous driving. The sensor solutions created by PreAct and ESPROS will address top ADAS and self-driving features such as traffic sign recognition, curb detection, night vision and pedestrian detection with the highest frame rates and resolution of any sensor on the market.

In addition to providing solutions for automotive and trucking, PreAct and ESPROS will bring superior performance, functionality, and cost advantages to the expanding robotics industry. According to a new report published by Allied Market Research, the global industrial robotics market size is expected to reach $116.8 billion by 2030. PreAct and ESPROS solutions will enable a wide range of robotics and automation applications including QR code scanning, obstacle avoidance and gesture recognition.

“We have extensive plans to demonstrate the incredible capabilities of our 3D chipsets with PreAct’s hardware and software. The ESPROS-PreAct partnership ensures customers can benefit from the shortest possible time to market for advanced tools such as simulation. Our combined resources and expertise will allow us to enable groundbreaking products across every industry,” said Beat DeCoi, President and CEO of ESPROS Photonics. “By combining our best-in-class TOF chips with PreAct’s innovation and drive, we will see great results with clients benefiting from this partnership.”

About PreAct Technologies
PreAct Technologies creates the world’s fastest flash LiDAR that powers near-field sensing and object tracking solutions for automotive, trucking, robotic and industrial markets. Its patent-pending suite of sensor technologies is also the only software-definable LiDAR on the market designed specifically to support the extended life of software-defined vehicles. The company is located in Portland, Oregon. For sales inquiries, please contact sales@preact-tech.com.


About ESPROS Photonics
ESPROS Photonics AG was founded in 2006 and is a highly specialized IC (Integrated Circuit) design and production company. The company is built around a unique CMOS/CCD process developed and owned by ESPROS. Swiss precision, quality, and innovation are its core driving forces. Products are TOF and LiDAR imagers as well as custom ASICs. The company also develops and produces 3D camera modules, all based on its own 3D imagers. It is headquartered in Sargans, Switzerland. For further information, please contact info@espros.com.





Thursday, April 14, 2022

New Videos from IEEE Sensors Council



Ultra-Thin Image Sensor Chip Embedded Foil

Author: Shuo Wang{2}, Björn Albrecht{1}, Christine Harendt{1}, Jan Dirk Schulze Spüntrup{1}, Joachim Burghartz{1}

Affiliation: {1}Institut für Mikroelektronik Stuttgart, Germany; {2}Institut für Nano- und Mikroelektronische Systeme, Germany

Abstract: Hybrid Systems in Foil (HySiF) is an integration concept for high-performance and large-area flexible electronics. The technology allows for integrating ultra-thin chips and widely distributed electronic components, such as sensors, microcontrollers or antennas, in thin flexible polymer film, using CMOS-compatible equipment and processing. This paper focuses on the embedding and characterization of a bendable ultra-thin image sensor in flexible polymer foil.



Bio-Inspired Electronic Eye and Bio-Integrated Drug Delivery Device

Author: Daehyeong Kim

Affiliation: Seoul National University, Korea

Abstract: Despite recent progress, significant challenges still exist in developing a miniaturized and lightweight type of artificial vision that features a wide field-of-view (FoV), high contrast, and low noise. Meanwhile, the wireless integration of wearable devices with implantable devices can present a new opportunity in the development of unconventional biomedical electronic devices. In this talk, recent progress on the bio-inspired electronic eye and on wirelessly-integrated bioelectronics will be presented. In the first part, a fish-eye-inspired camera integrating a monocentric lens and a hemispherical silicon-nanorod photodetector array will be presented. In the second part, a bioelectronic device that consists of a soft implantable drug delivery device integrated wirelessly with a wearable electrophysiology sensing device will be presented. These novel types of devices are expected to provide new opportunities for the next generation of bio-inspired electronics and bio-integrated electronics.


Wednesday, April 13, 2022

Harvest Imaging Forum 2022 is open for registration!

Following the Harvest Imaging forums of the last seven years, an eighth one will be organized on June 23 & 24, 2022 in Delft, the Netherlands. The basic intention of the Harvest Imaging forum is to have a scientific and technical in-depth discussion on one particular topic that is of great importance and value to digital imaging. For well-known reasons, the 2022 version of the forum will be organized in a hybrid form:

  • You can attend in person and benefit fully from live interaction with the speakers and the audience,
  • There will also be a live broadcast of the forum, with interaction with the speakers made possible through a chat box,
  • Finally, the forum can also be watched online at a later date.

The 2022 Harvest Imaging forum will deal with two subjects in the field of solid-state imaging, presented by two speakers. Both speakers are world-level experts in their own fields.


"Dark current, dim points and bright spots : coming to the dark side of image sensors"

Dr. Daniel McGrath (GOODiX, USA)

Abstract:

Charge-generating defects are an intersection of physics, material properties, manufacturing processes and image science. At a time when pixels have been reduced to dimensions comparable to the wavelength of light and noise performance is approaching photon counting, processes that produce erroneous signals in the dark have come to limit image sensor performance. The reduction of dark current over the last decades has been a success story, but it has brought the industry to a point where the path for further improvement is not clear.

The aim of this forum is to provide a feet-on-the-ground exploration of the nature of dark current and of bright defects in image sensors. It will start with a discussion of the nature of both, their individual challenges, and a timeline of the developments that have brought the technology to its present state. It will discuss the challenge and opportunity provided by the extreme sensitivity of the pixel, a curse and a blessing for understanding. It will traverse the physics and material issues related to spontaneous charge generation in semiconductors. It will take time to ponder gettering, passivation and radiation effects. It will try to provide a path through the tangle of manufacturing's mysteries and challenges. The goal is to climb to the present precipice, there to consider options that can take the technology to the next advance.

Bio:

Dan McGrath has worked for 40 years specializing in the device physics of silicon-based pixels, CCD and CIS, and in the integration of image-sensor process enhancements in the manufacturing flow. He chose his first job because “studying defects in image sensors means doing physics” and has kept this passion front-and-center in his work. After obtaining his doctorate from The Johns Hopkins University, he pursued this work at Texas Instruments, Polaroid, Atmel, Eastman Kodak, Aptina and BAE Systems. He has worked with manufacturing facilities in France, Italy, Taiwan, and the USA. In 2019 he joined GOODiX Technology, a supplier to the cell phone and IoT market. He has held organizational positions in the Semiconductor Interface Specialists Conference, the International Solid-State Circuits Conference, the International Electron Devices Meeting and the International Image Sensor Workshop. He has made presentations on dark current at ESSDERC, Electronic Imaging and the International Image Sensor Workshop. His publications include the first megapixel CCD and the basis for dark current spectroscopy (DCS).


"Random Telegraph Signal and Radiation Induced Defects in CMOS Image Sensors"

Dr. Vincent Goiffon (ISAE-SUPAERO, Fr)

Abstract:

CMOS Image Sensors (CIS) are by far the main solid-state image sensor technology in 2021. Each and every year, this technology comes closer to the ideal visible imaging device with near 100% peak quantum efficiency, sub electron readout noise and ultra-low dark current (< 1 e-/s) at room temperature. In such near-perfect pixel arrays, the appearance of a single defect can seriously jeopardize the pixel function. Oxide/silicon interface and silicon bulk defects can remain after manufacturing or can be introduced by aging or after exposure to particle radiation. This latter source of performance degradation limits the use of commercial “unhardened” solid-state sensors in a wide range of key applications such as medical imaging, space exploration, nuclear power plant safety, electron microscopy, particle physics and nuclear fusion instrumentation.

The aim of this forum is to explore the influence of semiconductor defects on CIS performance through the magnifying glass of radiation damage. In a first part, a review of radiation effects on CIS will be provided alongside the main mitigation techniques (so-called radiation hardening by design, or RHBD, techniques). The trade-off between radiation hardening and performance will be discussed for chosen applications. This first part has a double objective: 1) to provide image sensor professionals with the background to anticipate and improve the radiation hardness of their sensors in radiation environments, and 2) to give a different perspective on parasitic physical mechanisms that can be observed in as-fabricated sensors, such as hot pixels and charge transfer inefficiency.

The second part will focus on Random Telegraph Signals (RTS) in image sensors, a defect-related phenomenon of growing importance in advanced technologies. The fundamental differences between the two main RTS in imagers – MOSFET channel RTS, also called RTN, and Dark Current RTS (DC-RTS) – will be presented. As in the first part, radiation damage will be used to clarify the mysterious origin of DC-RTS. The discussion will conclude with an opening towards the similarities in RTS mechanisms between CIS and other image sensor technologies (e.g. SPADs and infrared detectors) and integrated circuits (DRAM).


Bio:

Vincent Goiffon received his Ph.D. in EE from the University of Toulouse in 2008. The same year he joined the ISAE-SUPAERO Image Sensor Research group as Associate Professor and he has been a Full Professor of Electronics at the Institute since 2018.

He has contributed to advancing the understanding of radiation effects on solid-state image sensors, notably by identifying original degradation mechanisms in pinned photodiode pixels and by clarifying the role of interface and bulk defects in the mysterious dark current random telegraph signal phenomenon.

Besides his contributions to various space R&D projects, Vincent has been leading the development of radiation hardened CMOS image sensors (CIS) and cameras for nuclear fusion experiments (e.g. ITER and CEA Laser MegaJoule) and nuclear power plant safety. Vincent recently became the head of the Image Sensor Group of ISAE-SUPAERO.

Vincent Goiffon is the author of one book chapter and more than 90 scientific publications, and has received more than 10 conference awards at IEEE NSREC, RADECS and IISW.

He has been an associate editor of the IEEE Transactions on Nuclear Science since 2017 and has served the community as reviewer and session chair.


Register here: https://www.harvestimaging.com/forum_introduction_2021_new.php

Tuesday, April 12, 2022

Gigajot Announces the World's Highest Resolution Photon Counting Sensor

PASADENA, Calif., April 4, 2022 /PRNewswire/ -- Gigajot Technology, inventors and developers of Quanta Image Sensors (QIS), today announced the expansion of its groundbreaking QIS product portfolio with the GJ04122 sensor and associated QIS41 camera. With market-leading low read noise, the GJ04122 sensor is capable of photon counting and photon number resolving at room temperature. The QIS41 camera, built around the GJ04122 sensor, pairs well with standard 4/3-inch microscopy optics, bringing unparalleled resolution and low light performance to scientific and industrial imaging applications.


Gigajot GJ04122 Sensor


Gigajot QIS41 Camera


"We are excited about the discoveries that our latest QIS will enable in the life sciences community," said Gigajot's CEO, Dr. Saleh Masoodian, "Additionally, this QIS device further validates that Gigajot has the world's leading small pixel performance which will eventually be deployed to high volume consumer products that value high resolution, low light imaging performance and HDR."

The 41 Megapixel GJ04122 sensor, which was funded in part by the National Science Foundation SBIR Program, utilizes a 2.2-micron pixel and has a read noise of only 0.35 electrons, which is significantly lower than state-of-the-art pixels of similar size. The sensor is capable of photon counting and photon number resolving up to its top speed of 30 frames per second at full resolution. The high resolution and the extremely low read noise provide flexibility for binning and additional post-processing, while maintaining a read noise that is still lower than native lower resolution sensors. While pushing the limits of low light imaging, the GJ04122 sensor also offers an impressive single-exposure dynamic range of 95 dB by utilizing Gigajot's patented high dynamic range pixel.
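As a rough, back-of-the-envelope illustration of why read noise well below 0.5 e- matters for photon number resolving (my own arithmetic, not Gigajot data): if read noise is Gaussian and each read is rounded to the nearest integer electron count, the miscount probability is roughly the two-sided Gaussian tail beyond 0.5 e-.

```python
import math

# Probability that Gaussian read noise pushes a count past the 0.5 e- rounding boundary.
def miscount_probability(sigma_e):
    return math.erfc(0.5 / (sigma_e * math.sqrt(2.0)))  # two-sided tail beyond +/- 0.5 e-

for sigma in (0.15, 0.25, 0.35, 0.50, 1.00):
    print(f"read noise {sigma:4.2f} e- rms -> miscount probability ~ {miscount_probability(sigma):.3f}")
```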

The QIS41 is a fully featured scientific camera based on the GJ04122 sensor. The QIS41 camera has a SuperSpeed USB 3.0 interface and is capable of true photon counting at room temperature. For more information, or to schedule a virtual demonstration, contact Gigajot Sales at www.gigajot.tech/order. The QIS41 camera can be pre-ordered now for Q4 2022 deliveries.

About Gigajot Technology, Inc.: Headquartered in Pasadena, CA, Gigajot is developing the next generation of image sensors. Gigajot's mission is to develop innovative Quanta Image Sensor (QIS) devices and advance this technology for the next generation of image sensors, offering high-speed and high-resolution single-photon detection to realize new, unprecedented image capture capabilities for professionals and consumers. At Gigajot, every photon counts. For more information, visit www.gigajot.tech.

Press release: https://www.prnewswire.com/news-releases/gigajot-announces-the-worlds-highest-resolution-photon-counting-sensor-301516410.html

Monday, April 11, 2022

MojoVision Announces New Contact Lens Prototype

From TechCrunch News and MojoVision Blog:

The Bay Area-based firm [MojoVision] announced a new prototype of its augmented reality contact lens technology. The system is based around what Mojo calls “Invisible Computing,” its heads-up display technology that overlays information onto the lens. Essentially, it’s an effort to realize the technology you’ve seen in every science-fiction movie from the past 40+ years. The setup also features an updated version of the startup’s operating system, all designed to reduce user reliance on screens by — in a sense — moving the screen directly in front of their eyes.

 

 

[The] new prototype of Mojo Lens incorporates numerous industry-first features, including the world’s smallest and densest dynamic display, low-latency communication, and an eye-controlled user interface.

The company continues to work with the FDA to help bring the tech to market as part of its Breakthrough Devices Program. The company has also previously announced partnerships with fitness brands like Adidas Running to develop workout applications for the tech.

MojoVision Blog: https://www.mojo.vision/news/we-have-reached-a-significant-milestone-blog

TechCrunch Article: https://techcrunch.com/2022/03/30/mojo-vision-takes-another-step-toward-ar-contact-lenses-with-new-prototype/

Friday, April 08, 2022

Yole report on 3D imaging technologies

Full article here: https://www.i-micronews.com/will-3d-depth-cameras-return-to-android-phones/

Some excerpts:

Apple started using structured light for facial recognition in the iPhone X in 2017, ushering in an era of 3D depth imaging in the mobile field. The following year, in 2018, Android players Oppo, Huawei, and Xiaomi also launched front 3D depth cameras, using structured light technologies very similar to Apple's.

The Android camp also attempted to use another 3D imaging technology, indirect Time of Flight (iToF). It was used for rear 3D depth cameras, enabling quick focus, imaging bokeh, and some highly anticipated AR games and other applications.

The hardware for this technique is more compact than structured light, requiring only a ToF sensor chip and a flood illuminator. The distance is computed from the time difference between emission and reception. Compared to structured light, it does not need much computing power, software integration is relatively simple, and overall it has cost advantages.
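In indirect ToF the "time difference" is usually recovered from the phase shift of the modulated illumination rather than timed directly. A generic 4-tap phase-demodulation sketch, not tied to any particular sensor and with made-up tap values, looks like this:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(a0, a90, a180, a270, f_mod):
    """Depth from four correlation samples taken at 0/90/180/270 degree offsets."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# Example with hypothetical tap values and a 100 MHz modulation frequency
# (unambiguous range C / (2 * f_mod) = 1.5 m):
print(f"depth ~ {itof_depth(120.0, 180.0, 80.0, 20.0, 100e6):.3f} m")
```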

LG, Samsung and Huawei used this kind of technology both for front and/or rear implementations.

For a while, no Android player included 3D depth cameras in their flagship phones. However, during Mobile World Congress 2022, Honor unexpectedly released the Magic 4 Pro with a 3D depth camera on the front of the phone. Will 3D depth cameras return to Android phones?







Market report: https://www.i-micronews.com/products/3d-imaging-and-sensing-technology-and-market-trends-2021/



Thursday, April 07, 2022

Microsoft Surface Hub 2 Camera

The Verge has a new article titled "How Microsoft built its smart Surface camera" about the new Surface Hub 2 camera. It required a huge effort across the entire computational photography pipeline: the optics, the image sensor, and the on-board machine learning-based processing algorithms. The camera is expected to have an $800 price tag.
 
 
Some excerpts below.



“From day one of Surface Hub 2, we knew we were going to make our cameras smart,” explains Steven Bathiche, who oversees all hardware innovation for Microsoft devices, in an interview with The Verge. Microsoft’s surprise $799.99 Surface Hub 2 Smart Camera debuted last week, offering automatic reframing without the warping and distortions you might typically see on other conference room cameras.
 
It can detect faces and bodies, in an effort to make sure everyone in a room is visible during meetings whether they’re close to the camera or up to eight meters away. The Surface Hub 2 Smart Camera is able to pretty much see an entire conference room thanks to its 136-degree field of view, which keeps the people at the front in focus alongside those in the back.
 
Bathiche and his team have created Microsoft’s own optics, AI model, and edge computer to go into the Surface Hub 2 Smart Camera and power its computational photography. “It has onboard compute, 1 teraflops of compute that essentially houses a really large AI model that we’ve built,” says Bathiche. “It includes the autoframing application, it resides in the camera, so what comes out is just a 4K image so it literally looks like a webcam to the Surface Hub.”

“We designed an 11-element, completely glass lens with super sharp focus and basically close to the refraction limits,” explains Bathiche. Behind the lens is a 12-megapixel sensor (4000 x 3000) with an f/1.8 aperture, which together generate the 4K cropped image. “The actual lens is a 184-degree field of view, so the camera can look behind itself.”
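Some back-of-the-envelope arithmetic on those numbers (my own, assuming a roughly equidistant fisheye mapping, which the article does not state): spreading a 184-degree image circle across the 4000-pixel sensor width gives about 0.05 degrees per pixel and roughly 48 degrees of angular headroom around the 136-degree field that is actually delivered, which is what leaves room for digital reframing without moving the camera.

```python
# Rough reframing-headroom arithmetic (assumed equidistant fisheye mapping;
# the article does not specify the lens projection).
lens_fov_deg = 184.0        # full lens field of view (from the article)
delivered_fov_deg = 136.0   # field of view quoted for the product (from the article)
sensor_width_px = 4000      # 12 MP sensor, 4000 x 3000 (from the article)

deg_per_px = lens_fov_deg / sensor_width_px      # ~0.046 deg/pixel under this assumption
margin_deg = lens_fov_deg - delivered_fov_deg    # total angular headroom for digital pan
print(f"~{deg_per_px:.3f} deg/pixel, ~{margin_deg:.0f} degrees of total pan headroom")
```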

 

Wednesday, April 06, 2022

OMNIVISION’s New 3-megapixel Image Sensor

SANTA CLARA, Calif.--(BUSINESS WIRE)--OMNIVISION, a leading global developer of semiconductor solutions, including advanced digital imaging, analog and touch & display technology, today announced the new OS03B10 CMOS image sensor that brings high-quality digital images and high-definition (HD) video to security surveillance, IP and HD analog cameras in a 3-megapixel (MP) 1/2.7-inch optical format.

 


The OS03B10 image sensor features a 2.5µm pixel that is based on OMNIVISION’s OmniPixel®3-HS technology. This high-performance, cost-effective solution uses high-sensitivity frontside illumination (FSI) for true-to-life color reproduction in both bright and dark conditions.


“Many of our customers already use our OS02G10, an FSI based 2MP 1/2.9-inch image sensor, for security and video applications, such as IP cameras, baby monitors, doorbell cameras, smart TVs, dashcams and more,” said Cheney Zhang, senior marketing manager, OMNIVISION. “The OS03B10 is pin-to-pin compatible with OS02G10, enabling our customers to seamlessly upgrade their security products to a 3MP image sensor, greatly improving image capture and HD video without any redesigns.”


By leveraging an advanced 2.5µm pixel architecture, the OS03B10 achieves excellent low-light sensitivity, signal-to-noise ratio, full-well capacity, quantum efficiency and low-power consumption. It can capture videos in a 16:9 format at 30 frames per second. Default and programmable modes allow for a more convenient way of controlling the parameters of frame size, exposure time, gain value, etc. It also offers image control functions such as mirror and flip, windowing, auto black level calibration, defective pixel correction, black sun cancellation and more. The OS03B10 supports DVP and MIPI interfaces.


Samples of the OS03B10 are available now and will be in mass production in Q2 2022. For more information, contact your OMNIVISION sales representative: www.ovt.com/contact-sales.

 

Link: https://www.businesswire.com/news/home/20220330005300/en/OMNIVISION%E2%80%99s-New-3-megapixel-Image-Sensor-with-OmniPixel%C2%AE3-HS-Brings-the-Most-Vivid-Pictures-to-Security-IP-and-HD-Cameras

Monday, April 04, 2022

Hamamatsu videos

Hamamatsu has published new videos on their latest products and technologies.


ORCA-Quest quantitative CMOS (qCMOS) scientific camera: With ultra-low read noise of 0.27 electrons (rms), a high pixel count of 9.4 megapixels, and the ability to detect and quantify the number of photoelectrons, discover how our new camera can revolutionise scientific imaging applications.




Automotive LiDAR technologies - TechBites series: How recent advances in photonics, specifically in LiDAR, have played a major role in the move towards autonomous vehicles



InGaAs Cameras - TechBites Series: Short-wave infrared cameras and their applications today and in the future




Mini Spectrometers: What are mini-spectrometers and how can they be used in the medical industry?


Sunday, April 03, 2022

Better Piezoelectric Light Modulators for AMCW Time-of-Flight Cameras

A team from Stanford University's Laboratory for Integrated Nano-Quantum Systems (LINQS) and ArbabianLab present a new method that can potentially convert any conventional CMOS image sensor into an amplitude-modulated continuous-wave time-of-flight camera. The paper titled "Longitudinal piezoelectric resonant photoelastic modulator for efficient intensity modulation at megahertz frequencies" appeared in Nature Communications.

Abstract:
Intensity modulators are an essential component in optics for controlling free-space beams. Many applications require the intensity of a free-space beam to be modulated at a single frequency, including wide-field lock-in detection for sensitive measurements, mode-locking in lasers, and phase-shift time-of-flight imaging (LiDAR). Here, we report a new type of single frequency intensity modulator that we refer to as a longitudinal piezoelectric resonant photoelastic modulator. The modulator consists of a thin lithium niobate wafer coated with transparent surface electrodes. One of the fundamental acoustic modes of the modulator is excited through the surface electrodes, confining an acoustic standing wave to the electrode region. The modulator is placed between optical polarizers; light propagating through the modulator and polarizers is intensity modulated with a wide acceptance angle and record breaking modulation efficiency in the megahertz frequency regime. As an illustration of the potential of our approach, we show that the proposed modulator can be integrated with a standard image sensor to effectively convert it into a time-of-flight imaging system.



a) A Y-cut lithium niobate wafer of diameter 50.8 mm and of thickness 0.5 mm is coated on top and bottom surfaces with electrodes having a diameter of 12.7 mm. The wafer is excited with an RF source through the top and bottom electrodes. b) Simulated ∣s11∣ of the wafer with respect to 50 Ω, showing the resonances corresponding to different acoustic modes of the wafer (loss was added to lithium niobate to make it consistent with experimental results). The desired acoustic mode appears around 3.77 MHz and is highlighted in blue. c) The desired acoustic mode ∣s11∣ with respect to 50 Ω is shown in more detail. d) The dominant strain distribution (Syz) when the wafer is excited at 3.7696 MHz with 2 Vpp is shown for the center of the wafer. This strain distribution corresponds to the ∣s11∣ resonance shown in (c). e) The variation in Syz parallel to the wafer normal and centered along the wafer is shown when the wafer is excited at 3.7696 MHz with 2 Vpp.



a) Schematic of the characterization setup is shown. The setup includes a laser (L) with a wavelength of 532 nm that is intensity-modulated at 3.733704 MHz, aperture (A) with a diameter of 1 cm, neutral density filter (N), two polarizers (P) with transmission axis t^=(a^x+a^z)/sqrt(2), wafer (W), and a standard CMOS camera (C). The wafer is excited with 90 mW of RF power at fr = 3.7337 MHz, and the laser beam passes through the center of the wafer that is coated with ITO. The camera detects the intensity-modulated laser beam. b) The desired acoustic mode is found for the modulator by performing an s11 scan with respect to 50 Ω using 0 dBm excitation power and with a bandwidth of 100 Hz. The desired acoustic mode is highlighted in blue. c) The desired acoustic mode is shown in more detail by performing an s11 scan with respect to 50 Ω using 0 dBm excitation power with a bandwidth of 20 Hz. d) The fabricated modulator is shown. e) The depth of intensity modulation is plotted for different angles of incidence for the laser beam (averaged across all the pixels), where ϕ is the angle between the surface normal of the wafer and the beam direction k^ (see “Methods” for more details). Error bars represent the standard deviation of the depth of intensity modulation across the pixels. f) Time-averaged intensity profile of the laser beam detected by the camera is shown for ϕ = 0. g) The DoM at 4 Hz of the laser beam is shown per pixel for ϕ = 0. h) The phase of intensity modulation at 4 Hz of the laser beam is shown per pixel for ϕ = 0.


a) Schematic of the imaging setup is shown. The setup includes a standard CMOS camera (C), camera lens (CL), two polarizers (P) with transmission axis t^=(a^x+a^z)/sqrt(2), wafer (W), aperture (A) with a diameter of 4 mm, laser (L) with a wavelength of 635 nm that is intensity-modulated at 3.733702 MHz, and two metallic targets (T1 and T2) placed 1.09 m and 1.95 m away from the imaging system, respectively. For the experiment, 140 mW of RF power at fr = 3.7337 MHz is used to excite the wafer electrodes. The laser is used for illuminating the targets. The camera detects the reflected laser beam from the two targets, and uses the 2 Hz beat tone to extract the distance of each pixel corresponding to a distinct point in the scene (see “Methods” for more details). b) Bird’s eye view of the schematic in (a). c) Reconstructed depth map seen by the camera. Reconstruction is performed by mapping the phase of the beat tone at 2 Hz to distance using Eq. (3). The distance of each pixel is color-coded from 0 to 3 m (pixels that receive very few photons are displayed in black). The distance of targets T1 and T2 are estimated by averaging across their corresponding pixels, respectively. The estimated distances for T1 and T2 are 1.07 m and 1.96 m, respectively (averaged across all pixels corresponding to T1 and T2). d) Ambient image capture of the field-of-view of the camera, showing the two targets T1 and T2. e) The dimensions of the targets used for ToF imaging are shown.
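Eq. (3) itself is not quoted here, but the mapping from beat-tone phase to distance presumably follows the standard AMCW relation d = c*phi / (4*pi*f_mod), with the slow 2 Hz beat arising because the laser and the modulator are driven at slightly different frequencies. A hedged numerical sketch of that readout idea, with an assumed camera frame rate and target distance, is below.

```python
import numpy as np

# Sketch of heterodyne AMCW readout with a standard camera (assumed frame rate and
# target distance; frequencies taken from the paper). Not a reproduction of Eq. (3).
C = 299_792_458.0
f_mod = 3.7337e6           # modulator / laser modulation frequency, Hz
f_beat = 2.0               # beat frequency seen by the camera, Hz
fps = 30.0                 # assumed camera frame rate
t = np.arange(0, 2.0, 1.0 / fps)                 # two seconds of frames

true_depth = 1.5                                  # m, made-up target distance
phase = 4 * np.pi * f_mod * true_depth / C        # ToF phase at the MHz modulation
frames = 1.0 + 0.5 * np.cos(2 * np.pi * f_beat * t + phase)   # one pixel over time

# Lock-in style estimate of the beat phase, then map phase back to depth
ref = np.exp(-1j * 2 * np.pi * f_beat * t)
est_phase = np.angle(np.sum(frames * ref)) % (2 * np.pi)
print(f"estimated depth ~ {C * est_phase / (4 * np.pi * f_mod):.2f} m (true {true_depth} m)")
```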


The paper points out limitations of other approaches such as spatial light modulators and meta-optics, but doesn't mention any potential challenges or limitations of their proposed method. Interestingly, the authors cite some recent papers on high-resolution SPAD sensors to make the claim that their method is more promising than "highly specialized costly image sensors that are difficult to implement with a large number of pixels." Although the authors do not explicitly mention this in the paper, their piezoelectric material of choice (lithium niobate) is CMOS compatible. Thin-film deposition of lithium niobate on silicon using a CMOS process seems to be an active area of research (for example, see Mercante et al., Optics Express 24(14), 2016 and Wang et al., Nature 562, 2018.)

Friday, April 01, 2022

Two new papers on 55 nm Bipolar-CMOS-DMOS SPADs

The AQUA research group at EPFL, together with Global Foundries, has published two new articles on 55 nm Bipolar-CMOS-DMOS (BCD) SPAD technology in upcoming issues of the IEEE Journal of Selected Topics in Quantum Electronics.

Engineering Breakdown Probability Profile for PDP and DCR Optimization in a SPAD Fabricated in a Standard 55 nm BCD Process

 
Abstract:
 
CMOS single-photon avalanche diodes (SPADs) have broken into the mainstream by enabling the adoption of imaging, timing, and security technologies in a variety of applications within the consumer, medical and industrial domains. The continued scaling of technology nodes creates many benefits but also obstacles for SPAD-based systems. Maintaining and/or improving upon the high-sensitivity, low-noise, and timing performance of demonstrated SPADs in custom technologies or well-established CMOS image sensor processes remains a challenge. In this paper, we present SPADs based on DPW/BNW junctions in a standard Bipolar-CMOS-DMOS (BCD) technology with results comparable to the state-of-the-art in terms of sensitivity and noise in a deep sub-micron process. Technology CAD (TCAD) simulations demonstrate the improved PDP with the simple addition of a single existing implant, which allows for an engineered performance without modifications to the process. The result is an 8.8 μm diameter SPAD exhibiting ~2.6 cps/μm^2 DCR at 20 °C with 7 V excess bias. The improved structure obtains a PDP of 62% and ~4.2% at 530 nm and 940 nm, respectively. Afterpulsing probability is ~0.97% and the timing response is 52 ps FWHM when measured with integrated passive quench/active recharge circuitry at 3 V excess bias.
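For context, a quick bit of arithmetic on the quoted figures (assuming the 8.8 μm diameter refers to the active area): the normalized DCR corresponds to a per-device dark count rate of only a few hundred counts per second.

```python
import math

# Per-device DCR from the quoted DCR density (assumed circular 8.8 um active area).
diameter_um = 8.8
dcr_density = 2.6                      # cps/um^2 at 20 C, 7 V excess bias (from the abstract)
area_um2 = math.pi * (diameter_um / 2) ** 2
print(f"active area ~ {area_um2:.0f} um^2 -> DCR ~ {dcr_density * area_um2:.0f} cps per SPAD")
```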

 

 

On Analog Silicon Photomultipliers in Standard 55-nm BCD Technology for LiDAR Applications

 
Abstract:
 
We present an analog silicon photomultiplier (SiPM) based on a standard 55 nm Bipolar-CMOS-DMOS (BCD) technology. The SiPM is composed of 16 x 16 single-photon avalanche diodes (SPADs) and measures 0.29 x 0.32 mm^2. Each SPAD cell is passively quenched by a monolithically integrated 3.3 V thick oxide transistor. The measured gain is 3.4 x 10^5 at 5 V excess bias voltage. The single-photon timing resolution (SPTR) is 185 ps and the multiple-photon timing resolution (MPTR) is 120 ps at 3.3 V excess bias voltage. We integrate the SiPM into a co-axial light detection and ranging (LiDAR) system with a time-correlated single-photon counting (TCSPC) module in an FPGA. The depth measurement up to 25 m achieves an accuracy of 2 cm and a precision of 2 mm under room ambient light conditions. With co-axial scanning, intensity and depth images of complex scenes with resolutions of 128 x 256 and 256 x 512 are demonstrated. The presented SiPM enables the development of cost-effective LiDAR systems-on-chip (SoC) in this advanced technology.
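The TCSPC step mentioned in the abstract can be sketched generically: photon timestamps accumulated over many laser pulses are histogrammed, the peak bin gives the round-trip time, and depth = c*t/2. The Python sketch below is only an illustration with made-up count levels, not the paper's FPGA implementation; the 185 ps jitter and 25 m distance are taken from the abstract.

```python
import numpy as np

# Generic TCSPC depth estimation: histogram timestamps, locate the peak, convert to depth.
rng = np.random.default_rng(1)
C = 299_792_458.0
bin_width = 100e-12                 # 100 ps timing bins (assumed)
n_bins = 2000                       # 200 ns window -> ~30 m maximum range
true_depth = 25.0                   # m, the paper's maximum test distance
t_return = 2 * true_depth / C

# Simulated timestamps: signal photons jittered by ~185 ps SPTR plus uniform background
signal = rng.normal(t_return, 185e-12, size=500)
background = rng.uniform(0.0, n_bins * bin_width, size=5000)
stamps = np.concatenate([signal, background])

hist, edges = np.histogram(stamps, bins=n_bins, range=(0.0, n_bins * bin_width))
peak = hist.argmax()
t_peak = 0.5 * (edges[peak] + edges[peak + 1])
print(f"estimated depth ~ {C * t_peak / 2:.2f} m (true {true_depth} m)")
```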