Friday, December 29, 2023

Videos du jour: TinyML, Hamamatsu, ADI

tinyML Asia 2022
In-memory computing and Dynamic Vision Sensors: Recipes for tinyML in Internet of Video Things
Arindam Basu, Professor, Department of Electrical Engineering, City University of Hong Kong

Vision sensors are unique for IoT in that they provide rich information, but they also require excessive bandwidth and energy, which limits the scalability of this architecture. In this talk, we describe our recent work using event-driven dynamic vision sensors for IoVT applications such as unattended ground sensors and intelligent transportation systems. To further reduce the energy of the sensor node, we utilize in-memory computing (IMC): the SRAM that stores the video frames is also used to perform basic image processing operations and to trigger the downstream deep neural networks. Lastly, we introduce a new concept of hybrid IMC combining multiple types of memory.
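The gating idea in the abstract is easy to sketch in software: a cheap change-detection stage decides whether the expensive DNN runs at all. Below is a minimal NumPy illustration of that trigger logic; the threshold value and the commented-out `run_dnn` hook are assumptions for illustration (the actual IMC version performs the comparison inside the SRAM array itself).

```python
import numpy as np

def motion_gate(prev_frame, curr_frame, threshold=12.0):
    """Cheap change-detection stage: True if the mean absolute pixel
    change exceeds a threshold, i.e. the scene warrants DNN inference."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff.mean() > threshold

# Illustrative pipeline: only wake the DNN when the gate fires.
prev = np.zeros((240, 320), dtype=np.uint8)                    # stored frame
curr = np.random.randint(0, 255, (240, 320), dtype=np.uint8)   # incoming frame
if motion_gate(prev, curr):
    pass  # run_dnn(curr)  <- hypothetical downstream classifier
```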


Photon counting imaging using Hamamatsu's scientific imaging cameras - TechBites Series

With our new photon-number-resolving mode, the ORCA-Quest enables photon-counting resolution across a full 9.4-megapixel image. See the camera in action and learn how photon number imaging pushes quantitative imaging to a new frontier.


Accurate, Mobile Object Dimensioning using Time of Flight Technology

ADI's high-resolution 3D depth sensing technology, coupled with advanced image-stitching algorithms, enables the dimensioning of non-conveyable large objects for logistics applications. Rather than moving the object to a fixed dimensioning gantry, ADI's 3D technology lets operators take the camera to the object to perform the dimensioning. With the same level of accuracy as fixed dimensioners, the mobile system reduces the time and cost of measurement while improving energy efficiency.
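ADI does not disclose its algorithm, but the final dimensioning step is conceptually simple once a stitched point cloud exists: find the tightest box around the object. Here is a minimal NumPy sketch under stated assumptions (floor plane already removed, Z axis pointing up, units in metres):

```python
import numpy as np

def box_dimensions(points):
    """Length x width x height of an object from an Nx3 point cloud.
    Sweeps rotations in the XY plane to find the tightest-fitting box."""
    height = points[:, 2].max()                     # tallest point above floor
    xy = points[:, :2]
    best_area, best_extent = np.inf, None
    for theta in np.linspace(0.0, np.pi / 2, 90):   # ~1-degree steps
        c, s = np.cos(theta), np.sin(theta)
        rot = xy @ np.array([[c, -s], [s, c]])      # rotate footprint
        extent = rot.max(axis=0) - rot.min(axis=0)
        if extent[0] * extent[1] < best_area:
            best_area, best_extent = extent[0] * extent[1], extent
    length, width = sorted(best_extent, reverse=True)
    return length, width, height
```

A production system would add plane segmentation, outlier rejection, and calibration against stitching error, but a box search of this kind is the core of the measurement.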



Wednesday, December 27, 2023

CEA-Leti IEDM 2023 papers on emerging devices

From Semiconductor Digest: https://www.semiconductor-digest.com/cea-leti-will-present-gains-in-ultimate-3d-rf-power-and-quantum-neuromorphic-computing-with-emerging-devices/

CEA-Leti Will Present Gains in Ultimate 3D, RF & Power, and Quantum & Neuromorphic Computing with Emerging Devices

CEA-Leti papers at IEDM 2023, Dec. 9-13 in San Francisco, will present results in multiple fields, including ultimate 3D integration and radio-frequency advances such as performance improvement at cryogenic temperatures.

The institute will present nine papers during the conference this year. Two presentations will highlight a breakthrough in 3D sequential integration and results pushing GaN/Si HEMT closer to GaN/SiC performance at 28 GHz:

 “3D Sequential Integration with Si CMOS Stacked on 28nm Industrial FDSOI with Cu-ULK iBEOL Featuring RO and HDR Pixel” reports the world's first 3D sequential integration of CMOS over CMOS with advanced metal line levels, bringing 3DSI with intermediate BEOL closer to commercialization.

 “6.6W/mm 200mm CMOS Compatible AlN/GaN/Si MIS-HEMT with In-Situ SiN Gate Dielectric and Low Temperature Ohmic Contacts” reports the development of a CMOS-compatible 200mm SiN/AlN/GaN MIS-HEMT on a silicon substrate that brings GaN/Si high electron mobility transistors (HEMTs) closer to GaN/SiC performance in power density at 28 GHz. It also highlights that the SiN/AlN/GaN-on-silicon metal-insulator-semiconductor HEMT (MIS-HEMT) is a potential candidate for high-power Ka-band power amplifiers.

Leti Devices Workshop
“Semiconductor Devices: Moving Towards Efficiency & Sustainability”
Dec. 10 @ 5:30 pm, Nikko Hotel, 222 Mason Street, Third Floor
The workshop will present CEA-Leti experts’ visions for and key results in efficient computing and radiofrequency devices for More than Moore applications.

CEA-Leti Presentations

Radio Frequency

 RF: “A Cost Effective RF-SOI Drain Extended MOS Transistor Featuring PSAT=19dBm @28GHz & VDD=3V for 5G Power Amplifier Application”, by Xavier Garros
 Session 34.2: Wednesday, Dec. 13 @ 9:30 am (Continental 7-9)
 RF cryo: “RF Performance Enhancement of 28nm FD-SOI Transistors Down to Cryogenic Temperature Using Back Biasing”, by Quentin Berlingard
 Session 34.3: Wednesday, Dec. 13 @ 9:55 am (Continental 7-9)
 GaN RF: “6.6W/mm 200mm CMOS Compatible AlN/GaN/Si MIS-HEMT with In-Situ SiN Gate Dielectric and Low Temperature Ohmic Contacts”, by Erwan Morvan
 Session 38.3: Wednesday, Dec. 13 @ 2:25 pm (Continental 4)

3D Sequential Stacking

 “Ultimate Layer Stacking Technology for High Density Sequential 3D Integration”, a collaborative paper with Ionut Radu of Soitec
 Session 19.5: Tuesday, Dec. 12 @ 4:00 pm (Grand Ballroom A)
 “3D Sequential Integration with Si CMOS Stacked on 28nm Industrial FDSOI with Cu-ULK iBEOL Featuring RO and HDR Pixel”, by Perrine Batude
 Session 29.3: Wednesday, Dec. 13 @ 9:55 am (Grand Ballroom B)
Emerging Device and Compute Technology (EDT)

 “Designing Networks of Resistively-Coupled Stochastic Magnetic Tunnel Junctions for Energy-Based Optimum Search”, by Kamal Danouchi
 Session 22.3: Tuesday, Dec. 12 @ 3:10 pm (Continental 5)

Neuromorphic Computing

 “Hybrid FeRAM/RRAM Synaptic Circuit Enabling On-Chip Inference and Learning at the Edge”, by Michele Martemucci (LIST)
 Session 23.3: Tuesday, Dec. 12 @ 3:10 pm (Continental 6)
 “Bayesian In-Memory Computing with Resistive Memories”, a collaborative paper with Damien Querlioz of CNRS-C2N
 Session 12.3: Tuesday, Dec. 12 @ 9:55 am (Continental 1-3)

Quantum Technology

 “Tunnel and Capacitive Coupling Optimization in FDSOI Spin-Qubit Devices”, by H. Niebojewski and B. Bertrand
 Session 22.6: Tuesday, Dec. 12 @ 4:25 pm (Continental 5)

Monday, December 25, 2023

STMicroelectronics releases new multizone time-of-flight sensor

Original article: https://www.eejournal.com/industry_news/next-generation-multizone-time-of-flight-sensor-from-stmicroelectronics-boosts-ranging-performance-and-power-saving/

Next-generation multizone time-of-flight sensor from STMicroelectronics boosts ranging performance and power saving

Target applications include human-presence sensing, gesture recognition, robotics, and other industrial uses

Geneva, Switzerland, December 14, 2023 – STMicroelectronics’ VL53L8CX, the latest-generation 8×8 multizone time-of-flight (ToF) ranging sensor, delivers a range of improvements including greater ambient-light immunity, lower power consumption, and enhanced optics.

ST’s direct-ToF sensors combine a 940nm vertical cavity surface emitting laser (VCSEL), a multizone SPAD (single-photon avalanche diode) detector array, and an optical system comprising filters and diffractive optical elements (DOE) in an all-in-one module that outperforms the conventional microlenses typically used with similar alternative sensors. The sensor projects a wide square field of view of 45° x 45° (65° diagonal) and receives the reflected light to calculate the distance of objects up to 400cm away, across 64 independent zones, at up to 30 captures per second.
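The underlying range equation for a direct-ToF sensor is simply distance = c·t/2, where t is the measured photon round-trip time. A one-function sketch of that conversion (the example round-trip time is illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def dtof_distance_m(round_trip_s):
    """Direct ToF: distance = c * t / 2, since light travels out and back."""
    return C * round_trip_s / 2.0

# The 400 cm maximum range corresponds to a ~26.7 ns round trip:
print(dtof_distance_m(26.7e-9))  # ~4.0 m
```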

The new VL53L8CX boosts ranging performance with a new-generation VCSEL and advanced silicon-based meta-optics. Compared with the current VL53L5CX, the enhancements increase immunity to interference from ambient light, extending the sensor’s maximum range in daylight from 170cm to 285cm and reducing power consumption from 4.5mW to 1.6mW in low-power mode.

ST released the first multizone time-of-flight sensor with the VL53L5CX in 2021. By increasing performance, the new VL53L8CX now further extends the advantages of these sensors over alternatives with conventional optics, which have fewer native zones and lose sensitivity in the outer areas. Thanks to its true 8×8 multizone sensing, the VL53L8CX ensures uniform sensitivity and accurate ranging throughout the field of view, with superior range in ambient light.

When used for system activation and human presence detection, the VL53L8CX’s greater ambient-light immunity enables equipment to respond more consistently and quickly. As part of ST’s STGesture™ platform that also includes the STSW-IMG035 turnkey gesture-recognition software and Gesture EVK development tool, the new sensor delivers the precision needed for repeatable gesture-based interaction. In addition to motion gesture recognition, hand posture recognition is also possible leveraging the latest AI models available in the STM32ai-modelzoo on GitHub.

Moreover, the VL53L8CX provides increased accuracy for monitoring the contents of bins, containers, silos, and tanks, including liquid-level monitoring, in industrial bulk storage and warehousing. The superior accuracy can also enhance the performance of drinks machines such as coffee makers and beverage dispensers.

Mobile robots including autonomous vacuum cleaners can leverage the VL53L8CX to improve guidance capabilities like floor sensing, small-object detection, collision avoidance, and cliff detection. The synchronization pin also enables projectors and cameras to benefit from coordinated autofocus. There is also a motion indicator and an auto-stop feature that allows real-time actions, and the sensor is immune to cover-glass crosstalk beyond 60cm. Now supporting SPI connectivity in addition to the 1MHz I2C interface, the new sensor handles host data transfers at up to 3MHz.
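A quick back-of-envelope calculation shows why the faster host link matters when streaming all 64 zones. The 40-byte-per-zone payload and 80% bus efficiency below are assumptions for illustration; the actual frame size depends on which outputs are enabled:

```python
ZONES = 64
BYTES_PER_ZONE = 40          # assumed payload (distance, sigma, status, ...)
FRAME_BITS = ZONES * BYTES_PER_ZONE * 8

def frame_transfer_ms(bus_hz, efficiency=0.8):
    """Time to move one ranging frame over the host bus; 'efficiency'
    roughly accounts for protocol overhead (addressing, ACKs, framing)."""
    return FRAME_BITS / (bus_hz * efficiency) * 1e3

print(f"I2C @ 1 MHz: {frame_transfer_ms(1_000_000):.1f} ms/frame")  # ~25.6 ms
print(f"SPI @ 3 MHz: {frame_transfer_ms(3_000_000):.1f} ms/frame")  # ~8.5 ms
```

Under these assumptions, the 1MHz I2C link consumes most of the 33ms frame budget at a 30Hz capture rate, while the 3MHz SPI link leaves comfortable headroom.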

Designers can quickly evaluate the VL53L8CX and jump-start their projects by taking advantage of the supporting ecosystem, which includes the X-NUCLEO-53L8A1 expansion board and SATEL-VL53L8 breakout boards. The P-NUCLEO-53L8A1 pack is also available; it contains an STM32F401 Nucleo microcontroller board and the X-NUCLEO-53L8A1 expansion board, ready to power up and start exploring.
The VL53L8CX is available now, housed in a 6.4mm x 3.0mm x 1.75mm leadless package, from $3.60 for orders of 1000 pieces.

Please visit www.st.com/VL53L8CX for more information.

Friday, December 22, 2023

3D cameras at CES 2024: Orbbec and MagikEye

Announcements below from (1) Orbbec and (2) MagikEye about their upcoming CES demos.


Orbbec releases Persee N1 camera-computer kit for 3D vision enthusiasts, powered by the NVIDIA Jetson platform


Orbbec’s feature-rich RGB-D camera-computer is a ready-to-use, out-of-the-box solution for 3D vision application developers and experimenters

Troy, Mich., 13 December 2023 — Orbbec, an industry leader dedicated to 3D vision systems, has developed the Persee N1, an all-in-one combination of a popular stereo-vision 3D camera and a purpose-built computer based on the NVIDIA Jetson platform, equipped with industry-standard interfaces for the most useful accessories and data connections. Developers using the newly launched camera-computer will also enjoy the benefits of the Ubuntu OS and OpenCV libraries. Orbbec recently became an NVIDIA Partner Network (NPN) Preferred Partner.

Persee N1 delivers highly accurate and reliable data for indoor/semi-outdoor operation, is ideally suited for healthtech, dimensioning, interactive gaming, retail, and robotics applications, and features:

  • An easy setup process using the Orbbec SDK and Ubuntu-based software environment.
  • Industry-proven Gemini 2 camera, based on active stereo IR technology, which includes Orbbec’s custom ASIC for high-quality, in-camera depth processing.
  • The powerful NVIDIA Jetson platform for edge AI and robotics.
  • HDMI and USB ports for easy connections to a monitor and keyboard.
  • Multiple USB ports for data and a POE (Power over Ethernet) port for combined data and power connections.
  • Expandable storage with MicroSD and M.2 slots.

“The self-contained Persee N1 camera-computer makes it easy for computer vision developers to experiment with 3D vision,” said Amit Banerjee, Head of Platform and Partnerships at Orbbec. “This combination of our Gemini 2 RGB-D camera and the NVIDIA Jetson platform for edge AI and robotics allows AI development while at the same time enabling large-scale cloud-based commercial deployments.”

The new camera module also features official support for the widely used Open Computer Vision (OpenCV) library. OpenCV is used in an estimated 89% of all embedded vision projects according to industry reports. This integration marks the beginning of a deeper collaboration between Orbbec and OpenCV, which is operated by the non-profit Open Source Vision Foundation.

“The Persee N1 features robust support for the industry-standard computer vision and AI toolset from OpenCV,” said Dr. Satya Mallick, CEO of OpenCV. “OpenCV and Orbbec have entered a partnership to ensure OpenCV compatibility with Orbbec’s powerful new devices and are jointly developing new capabilities for the 3D vision community.”


MagikEye's Pico Image Sensor: Pioneering the Eyes of AI for the Robotics Age at CES

From Businesswire.

December 20, 2023 09:00 AM Eastern Standard Time
STAMFORD, Conn.--(BUSINESS WIRE)--Magik Eye Inc. (www.magik-eye.com), a trailblazer in 3D sensing technology, is set to showcase its groundbreaking Pico Depth Sensor at the 2024 Consumer Electronics Show (CES) in Las Vegas, Nevada. Embarking on a mission to "Provide the Eyes of AI for the Robotics Age," the Pico Depth Sensor represents a key milestone in MagikEye’s journey towards AI and robotics excellence.

The heart of the Pico Depth Sensor’s innovation lies in its use of MagikEye’s proprietary Invertible Light™ Technology (ILT), which operates efficiently on a “bare-metal” ARM M0 processor within the Raspberry Pi RP2040. This noteworthy feature underscores the sensor's ability to deliver high-quality 3D sensing without the need for specialized silicon. Moreover, while the Pico Sensor showcases its capabilities using the RP2040, the underlying technology is designed with adaptability in mind, allowing seamless operation on a variety of microcontroller cores, including those based on the popular RISC-V architecture. This flexibility signifies a major leap forward in making advanced 3D sensing accessible and adaptable across different platforms.

Takeo Miyazawa, Founder & CEO of MagikEye, emphasizes the sensor's transformative potential: “Just as personal computers democratized access to technology and spurred a revolution in productivity, the Pico Depth Sensor is set to ignite a similar transformation in the realms of AI and robotics. It is not just an innovative product; it’s a gateway to new possibilities in fields like autonomous vehicles, smart home systems, and beyond, where AI and depth sensing converge to create smarter, more intuitive solutions.”

Attendees at CES 2024 are cordially invited to visit MagikEye's booth for an exclusive first-hand experience of the Pico Sensor. Live demonstrations of MagikEye’s latest ILT solutions for next-gen 3D sensing solutions will be held from January 9-11 at the Embassy Suites by Hilton Convention Center Las Vegas. Demonstration times are limited and private reservations will be accommodated by contacting ces2024@magik-eye.com.

Wednesday, December 20, 2023

imec paper at IEDM 2023 on a waveguide design for color imaging

News article: https://optics.org/news/14/12/11

imec presents new way to render colors with sub-micron pixel sizes

This week at the International Electron Devices Meeting, in San Francisco, CA, (IEEE IEDM 2023), imec, a Belgium-based research and innovation hub in nanoelectronics and digital technologies, has demonstrated a new method for “faithfully splitting colors with sub-micron resolution using standard back-end-of-line processing on 300mm wafers”.

imec says that the technology is poised to elevate high-end camera performance, delivering a higher signal-to-noise ratio and enhanced color quality with unprecedented spatial resolution.

Designing next-generation CMOS imagers requires striking a balance between collecting all incoming photons, achieving resolution down to the diffraction limit, and accurately recording the color of the light.

Traditional image sensors with color filters on the pixels cannot yet combine all three requirements. While higher pixel densities would increase the overall image resolution, smaller pixels capture even less light and are prone to artifacts that result from interpolating color values from neighboring pixels.

Even though diffraction-based color splitters represent a leap forward in increasing color sensitivity and capturing light, they are still unable to improve image resolution.

'Fundamentally new' approach
imec is now proposing a fundamentally new way of splitting colors at sub-micron pixel sizes (i.e., beyond the fundamental Abbe diffraction limit) using standard back-end processing. The approach is said to “tick all the boxes” for next-generation imagers by collecting nearly all photons, increasing resolution by utilizing very small pixels, and rendering colors faithfully.

To achieve this, imec researchers built an array of vertical Si3N4 multimode waveguides in an SiO2 matrix. The waveguides have a tapered input of diffraction-limited size (e.g., 800 × 800 nm²) to collect all the incident light.

“In each waveguide, incident photons excite both symmetric and asymmetric modes, which propagate through the waveguide differently, leading to a unique 'beating' pattern between the two modes for a given frequency. This beating pattern enables a spatial separation at the end of the waveguides corresponding to a specific color,” said Prof. Jan Genoe, scientific director at imec.
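The two-mode beating Genoe describes has a simple quantitative handle: power oscillates between the two modes with a beat length L = λ / (n_eff,sym − n_eff,asym), which varies with wavelength, so for a fixed waveguide height different colors exit at different lateral positions. A sketch with assumed effective indices (illustrative values, not imec's measured ones):

```python
def beat_length_um(wavelength_nm, n_eff_sym, n_eff_asym):
    """Two-mode beat length: propagation distance over which power
    transfers from one side of the waveguide to the other and back."""
    return (wavelength_nm * 1e-3) / (n_eff_sym - n_eff_asym)

# Illustrative effective-index pairs for a Si3N4-in-SiO2 multimode guide:
for wl, n_sym, n_asym in [(450, 1.95, 1.78), (550, 1.93, 1.80), (650, 1.91, 1.82)]:
    print(f"{wl} nm: beat length ~ {beat_length_um(wl, n_sym, n_asym):.1f} um")
```

Because the beat length is color-dependent, the choice of waveguide height fixes which wavelengths land on which output position; that is the sorting mechanism.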

Cost-efficient structures
The total output light from each waveguide is estimated to reach over 90% within the range of human color perception (wavelength range 400-700nm), making it superior to color filters, says imec.

Robert Gehlhaar, principal member of technical staff at imec, said, “Because this technique is compatible with standard 300-mm processing, the splitters can be produced cost-efficiently. This enables further scaling of high-resolution imagers, with the ultimate goal to detect every incident photon and its properties.

“Our ambition is to become the future standard for color imaging with diffraction-limited resolution. We are welcoming industry partners to join us on this path towards full camera demonstration.”


RGB camera measurement (100x magnification) of an array of waveguides with alternating 5 left-side-open-aperture and 5 right-side-open-aperture (the others being occluded by TiN) waveguides at a 1-micron pitch. Yellow light exits at the right part of the waveguide, whereas the blue exits at the left. The wafer is illuminated using plane wave white light. Credit: imec.



3D visualization (left) and TEM cross-section (right) of the vertical waveguide array for color splitting in BY-CR imaging. Credit: imec.


Monday, December 18, 2023

OmniVision 15MP/1MP hybrid RGB/event vision sensor (ISSCC 2023)

Guo et al. from OmniVision presented a hybrid RGB/event vision sensor in a paper titled "A 3-Wafer-Stacked Hybrid 15MPixel CIS + 1 MPixel EVS with 4.6GEvent/s Readout, In-Pixel TDC and On-Chip ISP and ESP Function" at ISSCC 2023.

Abstract: Event Vision Sensors (EVS) determine, at pixel level, whether a temporal contrast change beyond a predefined threshold is detected [1–6]. Compared to CMOS image sensors (CIS), this new modality inherently provides data-compression functionality and hence enables high-speed, low-latency data capture while operating at low power. Numerous applications such as object tracking, 3D detection, or slow-motion are being researched based on EVS [1]. Temporal contrast detection is a relative measurement and is encoded by so-called “events”, which are further characterized by x/y pixel location, event time-stamp (t), and polarity (p), indicating whether an increase or decrease in illuminance has been detected.
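The event model in the abstract can be stated precisely: a pixel emits an event (x, y, t, p) whenever the log-intensity change since its last event crosses a contrast threshold C. A minimal single-pixel simulation of that rule (the threshold and test signal are illustrative):

```python
import numpy as np

def events_from_signal(intensity, times, C=0.15):
    """Emit (t, polarity) events whenever log-intensity moves by more
    than the contrast threshold C since the last event (per-pixel model)."""
    events = []
    ref = np.log(intensity[0])          # reference level at last event
    for I, t in zip(intensity[1:], times[1:]):
        log_i = np.log(I)
        while abs(log_i - ref) >= C:    # large changes emit several events
            p = 1 if log_i > ref else -1
            ref += p * C                # reference advances in C-sized steps
            events.append((t, p))
    return events

t = np.linspace(0.0, 1.0, 1000)
signal = 100.0 * (1.0 + 0.5 * np.sin(2 * np.pi * 5 * t))  # 5 Hz flicker
print(len(events_from_signal(signal, t)), "events in 1 s")
```

The relative (logarithmic) comparison is what gives EVS pixels their high dynamic range: the threshold is a contrast ratio, not an absolute illuminance step.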

Schematic of dual-wafer 4x4 macro-pixel and peripheral readout circuitry on the third wafer.

EVS readout block diagram and asynchronous scanner with hierarchical skip-logic.

Event-signal processor (ESP) block diagram and MIPI interface.

Sensor output illustrating hybrid CIS and EVS data capture: 10kfps slow-motion images of an exploding water balloon from 1080p, 120fps image data plus event data.

Characterization results: contrast response, nominal contrast, latency, and noise vs. illuminance.

Technology trend and chip micrograph.

Sunday, December 17, 2023

Job Postings - Week of 17 Dec 2023


Meta

Image Sensor Application Engineer

Sunnyvale, California, USA or Redmond Washington, USA

Link

California Institute of Technology

Detector Engineer

Pasadena, California, USA

Link

University of Southampton

PhD Studentship: Greenhouse Gas Detection using Silicon Photonics Platform

Southampton, UK

(follow “How to Apply” instructions)

Link

Space Dynamics Laboratory

Electro-Optical Sensor Systems Engineer

North Logan, Utah, USA

Link

Raytheon

Process Code Engineer

Andover, Massachusetts, USA

Link

SOLEIL Synchrotron

Detector Group Leader

Saint-Aubin, France

Link

Teledyne e2v Technologies

Focal Plane Engineer

Camarillo, California, USA

Link

Pixxel

Sensors Specialist (EO/IR)

Bengaluru, Karnataka, India

Link

onsemi

Summer 2024 Device Engineering Intern

Hopewell Junction, New York, USA

Link

Friday, December 15, 2023

X-FAB introduces NIR SPADs on their 180nm process

X-FAB Introduces New Generation of Enhanced Performance SPAD Devices focused on Near-Infrared Applications

Link: https://www.xfab.com/news/details/article/x-fab-introduces-new-generation-of-enhanced-performance-spad-devices-focused-on-near-infrared-applications

NEWS – Tessenderlo, Belgium – Nov 16, 2023
X-FAB Silicon Foundries SE, the leading analog/mixed-signal and specialty foundry, has added a dedicated near-infrared version to its single-photon avalanche diode (SPAD) device portfolio. Like the previous SPAD generation, which launched in 2021, this version is based on the company’s 180nm XH018 process. The inclusion of an additional step in the fabrication workflow has resulted in a significant increase in signal while retaining the same low noise floor, without negatively affecting parameters such as dark count rate, afterpulsing, and breakdown voltage.

Through this latest variant, X-FAB is expanding the scope of its SPAD offering, improving its ability to address numerous emerging applications where NIR operation proves critically important. Among these are time-of-flight sensing in industrial applications, vehicle LiDAR imaging, biophotonics and FLIM research work, plus a variety of medical applications. Sensitivity is boosted over the whole near-infrared (NIR) band, with improvements of 40% and 35% at the key wavelengths of 850nm and 905nm, respectively.

Using the new SPAD devices will reduce the complexity of visible-light filtering, since UV and visible light are already suppressed. Filter designs will consequently be simpler, with fewer component parts involved. Furthermore, because the new devices have exactly the same footprint dimensions as the previous SPAD generation, they provide a straightforward upgrade route: customers’ existing designs can gain major performance benefits by simply swapping in the new devices.

X-FAB has compiled a comprehensive PDK for the near-infrared SPAD variant, featuring extensive documentation and application notes. Models for optical and electrical simulation provide engineers with the additional design support they need, enabling them to integrate these devices into their circuitry within a short time period.

As Heming Wei, Product Marketing Manager Sensors at X-FAB, explains: “Our SPAD technology has already gained a very positive market response, seeing uptake with a multitude of customers. Thanks to continuing innovation at the process level, we have now been able to develop a solution that will secure business for us within various NIR applications, across automotive, healthcare and life sciences.”

The new NIR-enhanced SPAD is available now, and engineers can start designing with it immediately.

Thursday, December 14, 2023

Apple is looking for two sensor designers

These just arrived directly from the Apple Camera Silicon team:

Image Sensor Analog Design Engineer - Cupertino, California, USA - Link

Image Sensor Digital Design Engineer - Cupertino, California, USA - Link

Lecture by Dr. Tobi Delbruck on the history of silicon retina and event cameras

Silicon Retina: History, Live Demo, and Whiteboard Pixel Design



Rockwood Memorial Lecture 2023: Tobi Delbruck, Institute of Neuroinformatics, UZH-ETH Zürich

Event Camera Silicon Retina: History, Live Demo, and Whiteboard Circuit Design
Rockwood Memorial Lecture 2023 (11/20/23)
https://inc.ucsd.edu/events/rockwood/
Hosted by: Terry Sejnowski, Ph.D. and Gert Cauwenberghs, Ph.D.
Organized by: Institute for Neural Computation, https://inc.ucsd.edu

Abstract: Event cameras electronically model the spike-based sparse output of biological eyes to reduce latency, increase dynamic range, and sparsify activity in comparison to conventional imagers. Driven by the need for more efficient battery-powered, always-on machine vision in future wearables, event cameras have emerged as a next step in the continued evolution of electronic vision. This lecture has three parts: (1) a brief history of silicon retina development, starting from Fukushima’s Neocognitron and Mahowald and Mead’s earliest spatial retinas; (2) a live demo of a contemporary frame-event DAVIS camera that includes an inertial measurement unit (IMU) vestibular system; and (3) (targeted at neuromorphic analog circuit design students in the BENG 216 class) a whiteboard discussion of event camera pixel design at the transistor level, highlighting the design aspects that endow event camera pixels with fast response even under low lighting, precise threshold matching even under large transistor mismatch, and a temperature-independent event threshold.

Wednesday, December 13, 2023

A couple of direct job postings from Teledyne

Teledyne sent us an e-mail asking us to post these jobs for the consideration of our readers:

Staff Pixel Process Engineer – CMOS Image Sensor R&D - Waterloo, Ontario, Canada - Link
CMOS Sensor Product Support - Waterloo, Ontario, Canada - Link

Tuesday, December 12, 2023

3D stacked BSI SPAD sensor with on-chip lens

Fujisaki et al. from Sony Semiconductor (Japan) presented a paper titled "A back-illuminated 6 μm SPAD depth sensor with PDE 36.5% at 940 nm via combination of dual diffraction structure and 2×2 on-chip lens" at the 2023 IEEE Symposium on VLSI Technology and Circuits.

Abstract: We present a back-illuminated 3D-stacked 6 μm single-photon avalanche diode (SPAD) sensor with very high photon detection efficiency (PDE). To enhance PDE, a dual diffraction structure was combined with a 2×2 on-chip lens (OCL) for the first time. The dual diffraction structure comprises a pyramid surface for diffraction (PSD) and periodic uneven structures formed by shallow trenches for diffraction, located on the light-facing and opposite Si surfaces, respectively. Additionally, the PSD pitch and the thickness of the SiO2 film buried in the full trench isolation were optimized. Consequently, a PDE of 36.5% was achieved at λ = 940 nm, the world’s highest value. Owing to a shield-ring contact, crosstalk was reduced by about half compared to a conventional plugged contact.
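PDE is commonly factored as the product of fill factor, absorption/collection efficiency, and avalanche-triggering probability; diffraction structures such as the PSD mainly raise the NIR absorption term by lengthening the optical path inside the silicon. A toy decomposition showing the arithmetic, with assumed factor values (not Sony's reported numbers):

```python
def pde(fill_factor, absorption_eff, avalanche_prob):
    """Photon detection efficiency as the product of its usual factors."""
    return fill_factor * absorption_eff * avalanche_prob

# Illustrative only: doubling NIR absorption roughly doubles overall PDE.
print(f"flat Si surface:  {pde(0.90, 0.25, 0.85):.1%}")   # ~19%
print(f"dual diffraction: {pde(0.90, 0.48, 0.85):.1%}")   # ~37%
```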




Schematics of gapless and 2×2 on-chip lenses.




Cross-sectional SPAD images of (a) our previous work and (b) this work.