Saturday, May 25, 2024

Two New Jobs Submitted by Luxima

Luxima Technology

Arcadia, California, USA

Career page link

Junior position - Analog Design Engineer

Senior position - Staff Analog Design Engineer 

Friday, May 24, 2024

"Black Silicon" photodiodes

Title: Excellent Responsivity and Low Dark Current Obtained with Metal-Assisted Chemical Etched Si Photodiode

Authors: Kexun Chen, Olli E. Setälä, Xiaolong Liu, Behrad Radfar, Toni P. Pasanen, Michael D. Serué, Juha Heinonen, Hele Savin, Ville Vähänissi

Affiliation: Aalto University, Finland

Abstract: Metal-assisted chemical etched (MACE, also known as MacEtch or MCCE) nanostructures are utilized widely in the solar cell industry due to their excellent optical properties combined with a simple and cost-efficient fabrication process. The photodetection community, on the other hand, has not shown much interest towards MACE due to its drawbacks including insufficient surface passivation, increased junction recombination, and possible metal contamination, which are especially detrimental to pn-photodiodes. Here, we aim to change this by demonstrating how to fabricate high-performance MACE pn-photodiodes with above 90% external quantum efficiency (EQE) without external bias voltage at 200–1000 nm and dark current less than 3 nA/cm² at −5 V using industrially applicable methods. The key is to utilize an induced junction created by an atomic layer deposited highly charged Al2O3 thin film that simultaneously provides efficient field-effect passivation and full conformality over the MACE nanostructures. Achieving close to ideal performance demonstrates the vast potential of MACE nanostructures in the fabrication of high-performance low-cost pn-photodiodes.
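For readers wanting to connect the abstract's EQE numbers to the responsivity in the title: the two are related by R = EQE × qλ/(hc). Below is a minimal sketch of that conversion (not from the paper; the 0.90 EQE floor is quoted from the abstract, and the sample wavelengths are illustrative):

```python
# Convert external quantum efficiency (EQE) to responsivity: R = EQE * q * lambda / (h * c).
Q = 1.602176634e-19   # elementary charge, C
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s

def responsivity(eqe: float, wavelength_nm: float) -> float:
    """Responsivity in A/W for a given EQE and wavelength."""
    return eqe * Q * (wavelength_nm * 1e-9) / (H * C)

# EQE of 0.90 (the floor reported over 200-1000 nm) at a few sample wavelengths:
for wl in (400, 700, 1000):
    print(f"{wl} nm: {responsivity(0.90, wl):.2f} A/W")  # ~0.29, 0.51, 0.73 A/W
```

An EQE above 90% across the band thus implies responsivities approaching 0.73 A/W at the red end, which is what makes the "excellent responsivity" in the title concrete.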



Wednesday, May 22, 2024

Prophesee AMD collaboration on DVS FPGA devkit

Prophesee collaborates with AMD to deliver industry-first Event-based Vision solution running on the leading FPGA-based AMD Kria™ KV260 Vision AI Starter Kit

Developers can now take full advantage of Prophesee Event-based Metavision® sensor and AI performance, power, and speed to create the next generation of Edge AI machine vision applications running on AMD platforms.

PARIS - May 6, 2024 – Prophesee SA, inventor of the world’s most advanced neuromorphic vision systems, today announced that its Event-based Metavision HD sensor and AI are now available for use with the AMD Kria™ KV260 Vision AI Starter Kit, creating a powerful and efficient combination to accelerate the development of advanced Edge machine vision applications. It marks the industry’s first Event-based Vision development kit compatible with an AMD platform, providing customers a platform to both evaluate and go to production with an industrial-grade solution for target applications such as smart city and machine vision, security cameras, retail analytics, and many others.

The development platform for the AMD Kria™ K26 System-on-Module (SOM), the KV260 Vision AI Starter Kit is built for advanced vision application development without requiring complex hardware design knowledge or FPGA programming skills. AMD Kria SOMs for edge AI applications provide a production-ready, energy-efficient FPGA-based device with enough I/O to speed up vision and robotics tasks at an affordable price point. Combined with Prophesee's breakthrough Event-based vision technology, machine vision system developers can leverage the lower latency and lower power of the Metavision platform to create applications that are more efficient than traditional frame-based vision sensing approaches, and in many cases not previously possible.
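For readers new to the sensing model: an event pixel reports only changes in log intensity, as a sparse stream of (x, y, timestamp, polarity) tuples, which is where the latency and power advantages over frame capture come from. Here is a minimal, generic sketch of accumulating such a stream into a frame-like view (plain NumPy for illustration; this is not the Metavision SDK API):

```python
import numpy as np

# Each event: (x, y, timestamp in microseconds, polarity). A static scene emits
# almost no events, so data volume scales with scene activity, not frame rate.
events = np.array(
    [(10, 20, 100, 1), (10, 21, 105, 1), (11, 20, 230, -1)],
    dtype=[("x", int), ("y", int), ("t", int), ("p", int)],
)

def accumulate(events, width, height, t_start, t_end):
    """Sum event polarities per pixel over a time window into a 2D array."""
    frame = np.zeros((height, width), dtype=np.int32)
    win = events[(events["t"] >= t_start) & (events["t"] < t_end)]
    np.add.at(frame, (win["y"], win["x"]), win["p"])
    return frame

frame = accumulate(events, width=64, height=48, t_start=0, t_end=200)
print(frame[20, 10], frame[21, 10])  # 1 1: only the two events inside the window
```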

A breakthrough plug-and-play Active Markers Tracking application is included in this kit. It allows for >1,000 Hz 3D pose estimation, with complete background rejection at the pixel level, while providing extreme robustness to challenging lighting conditions.

This application highlights unique features of Prophesee’s Event-based Metavision technologies, enabling a new range of ultra high-speed tracking use cases such as game controller tracking, construction site safety, heavy load anti-sway systems and many more.

Multiple additional ready-to-use application algorithms will be made available over the coming months.

The Prophesee Starter Kit provides an ‘out of the box’ development solution for quickly getting up and running with the Prophesee Metavision SDK and the IMX636 HD Event-based sensor, realized in collaboration between Prophesee and Sony. It allows easy porting of algorithms to AMD's commercial and industrial-grade system-on-modules (SOMs), powered by the custom-built Zynq™ UltraScale+™ multiprocessing SoC.

The new, Prophesee-enabled Kria KV260 AI Starter Kit will be on display at Automate 2024 in Prophesee’s booth 3452.

“The ever-expanding Kria ecosystem helps make motion capture, connectivity, and edge AI applications more accessible to roboticists and developers,” said Chetan Khona, senior director of Industrial, Vision, Healthcare and Sciences Markets, AMD. “Prophesee Event-based Vision offers unique advantages for machine vision applications. Its low data consumption translates into efficient energy consumption, less compute and memory needed, and fast response times.”

“It’s never been easier to develop Event-based Edge applications with this combination of development aids from AMD and Prophesee,” said Luca Verre, co-founder and CEO of Prophesee. “We are providing everything needed to take complete advantage of the lower power processing and low latency performance inherent in Event-based Vision, as well as provide an environment to optimize machine vision systems based on specific KPIs for customer-defined applications and use cases. This will further accelerate the adoption of Event-based Vision in key market segments that can benefit from Metavision’s unique advantages.”

https://www.prophesee.ai/event-based-metavision-amd-kria-starter-kit-imx636/


Monday, May 20, 2024

PixArt far infrared sensors - 3 part video series


This video is the first episode of the Far Infrared (FIR) sensor series, focusing on the basic concepts of FIR and highlighting the differences between traditional thermistors and FIR thermopiles.
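As background to the episode's thermistor-versus-thermopile comparison: a thermistor must be in thermal contact with the object, while a thermopile infers the object's temperature from the far-infrared power it radiates. A hedged sketch of the underlying radiometric idea follows (idealized Stefan-Boltzmann exchange; the emissivity and area values are illustrative assumptions, not PixArt specifications):

```python
# Idealized thermopile radiometry: net exchanged power follows
# P = eps * sigma * A * (T_obj^4 - T_sensor^4); invert it to estimate T_obj.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def object_temperature(p_net, t_sensor_k, emissivity=0.95, area_m2=1e-6):
    """Estimate object temperature (K) from net absorbed radiative power (W)."""
    return (p_net / (emissivity * SIGMA * area_m2) + t_sensor_k**4) ** 0.25

# A ~310 K object seen by a 298 K sensor over 1 mm^2 exchanges tens of microwatts:
p = 0.95 * SIGMA * 1e-6 * (310**4 - 298**4)
print(f"net power: {p*1e6:.1f} uW -> recovered T: {object_temperature(p, 298):.1f} K")
```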

 

This video is the second episode of the Far Infrared (FIR) sensor series, introducing PixArt's range of FIR sensor product lines. In addition to single point and 64-pixel array sensors, PixArt also provides a powerful 3-in-1 evaluation board that integrates a range of automated thermal detection functions.

 

This video is the third episode of the Far Infrared (FIR) sensor series, featuring demonstrations of 3 FIR sensors. In addition to showcasing real-life scenarios using PixArt’s FIR sensors, it also introduces various applications in different fields.

Sunday, May 19, 2024

Job Postings - Week of 19 May 2024

Onsemi

Product Engineer

Nampa, Idaho, USA

Link

Qualcomm

Camera Sensor System Engineer, Senior to Staff

Taipei City, Taiwan

Link

Apple

Hardware Sensing Systems Engineer

San Diego, California, USA

Link

Qualcomm

Sr Engineer-Camera Sensor

Hyderabad, Telangana, India

Link

L3Harris Technologies - WESCAM

Principal, Product Management

Waterdown, Ontario, Canada

Link

NASA Postdoc

Infrared Detector Technology Development

Pasadena, California, USA

Link

FRAMOS

Account Manager, Americas

Ottawa, Ontario, Canada

Link

Diamond Light Source

PDRA High-Z sensors and charge integrating detectors – Postdoc

Didcot, Oxfordshire, England

Link

Omnivision

Sr. Field Applications Engineer

Fleet, Hampshire, England

Link

A DIY copper oxide camera sensor

Can we make photosensitive pixels from copper oxide? YouTuber "Breaking Taps" answers:



Friday, May 17, 2024

One man's (event camera) noise is another man's signal

In a preprint titled "Noise2Image: Noise-Enabled Static Scene Recovery for Event Cameras," Cao et al. propose a method that uses the inherent pixel noise present in event camera sensors to recover scene intensity maps.

Abstract:

Event cameras capture changes of intensity over time as a stream of ‘events’ and generally cannot measure intensity itself; hence, they are only used for imaging dynamic scenes. However, fluctuations due to random photon arrival inevitably trigger noise events, even for static scenes. While previous efforts have been focused on filtering out these undesirable noise events to improve signal quality, we find that, in the photon-noise regime, these noise events are correlated with the static scene intensity. We analyze the noise event generation and model its relationship to illuminance. Based on this understanding, we propose a method, called Noise2Image, to leverage the illuminance-dependent noise characteristics to recover the static parts of a scene, which are otherwise invisible to event cameras. We experimentally collect a dataset of noise events on static scenes to train and validate Noise2Image. Our results show that Noise2Image can robustly recover intensity images solely from noise events, providing a novel approach for capturing static scenes in event cameras, without additional hardware.

Link: https://arxiv.org/abs/2404.01298
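Setting aside the paper's trained network, the core observation, that the photon-noise event rate grows with pixel illuminance, already suggests a toy reconstruction: count noise events per pixel over a long window and map the counts back to intensity through a calibrated monotonic curve. A minimal sketch of that idea (NumPy only; the linear rate model and its inversion here are stand-ins, not the authors' Noise2Image method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth static scene intensity (arbitrary units, 48x64 ramp).
scene = np.outer(np.linspace(0.1, 1.0, 48), np.linspace(0.1, 1.0, 64))

# Toy noise model: per-pixel noise-event count over the window is Poisson with a
# rate that increases with illuminance (a stand-in for the paper's measured
# illuminance-to-noise-rate relationship).
rate = 200.0 * scene
counts = rng.poisson(rate)

# "Calibration": with this linear toy model, inverting the mapping is a division.
recovered = counts / 200.0

print(f"mean abs error of toy reconstruction: {np.abs(recovered - scene).mean():.3f}")
```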


Wednesday, May 15, 2024

Photonic-electronic integrated circuit-based coherent LiDAR engine

Lukashchuk et al. recently published a paper titled "Photonic-electronic integrated circuit-based coherent LiDAR engine" in the journal Nature Communications.

Open access link: https://www.nature.com/articles/s41467-024-47478-z

Abstract: Chip-scale integration is a key enabler for the deployment of photonic technologies. Coherent laser ranging, or FMCW LiDAR, is a perception technology that benefits from instantaneous velocity and distance detection, eye-safe operation, long range, and immunity to interference. However, wafer-scale integration of these systems has been challenged by stringent requirements on laser coherence, frequency agility, and the necessity for optical amplifiers. Here, we demonstrate a photonic-electronic LiDAR source composed of a micro-electronic-based high-voltage arbitrary waveform generator, a hybrid photonic circuit-based tunable Vernier laser with piezoelectric actuators, and an erbium-doped waveguide amplifier. Importantly, all systems are realized in a wafer-scale manufacturing-compatible process comprising III-V semiconductors, silicon nitride photonic integrated circuits, and 130-nm SiGe bipolar complementary metal-oxide-semiconductor (CMOS) technology. We conducted ranging experiments at a 10-meter distance with a precision level of 10 cm and a 50 kHz acquisition rate. The laser source is turnkey and linearization-free, and it can be seamlessly integrated with existing focal plane and optical phased array LiDAR approaches.
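The quoted numbers hang together through the standard FMCW relations: a chirp of bandwidth B over period T has slope S = B/T, a target at distance d produces a beat frequency f_b = 2dS/c, and the range resolution is ΔR = c/(2B). A quick sanity check against the values reported in the paper and its figure captions:

```python
C = 2.99792458e8   # speed of light, m/s

B = 1.78e9         # chirp excursion, Hz (from the paper)
T = 16e-6          # chirp period, s
d = 10.0           # target distance, m

slope = B / T                 # Hz/s
f_beat = 2 * d * slope / C    # expected beatnote for a 10 m target
print(f"beat frequency: {f_beat/1e6:.2f} MHz")  # ~7.42 MHz

# Only ~90% of each chirp is used for the FFT, shrinking the effective bandwidth:
B_eff = 1.61e9  # 90% of 1.78 GHz, as quoted in the figure caption
print(f"native resolution: {C/(2*B_eff)*100:.1f} cm")  # 9.3 cm, as reported
```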


Figure caption: (a) Schematics of the photonic-electronic LiDAR structure comprising a hybrid integrated laser source, a charge-pump-based HV-AWG ASIC, and a photonic integrated erbium-doped waveguide amplifier. (b) Coherent ranging principle. (c) Packaged laser source. The RSOA is edge-coupled to the Si3N4 Vernier-filter waveguide, while the output is glued to the fiber port. The PZT and microheater actuators are wirebonded, as is the butterfly-package thermistor. (d) Zoom-in view of (c) highlighting a microring with actuators. (e) Micrograph of the HV-AWG ASIC chip fabricated in a 130 nm SiGe BiCMOS technology; the total chip size is 1.17 × 1.07 mm². (f) The erbium-doped waveguide is optically excited by a 1480 nm pump, showing green luminescence due to the transition from a higher-lying energy level to the ground state.

Figure caption: (a) Schematics of the integrated circuit, consisting of a 4-stage voltage-controlled differential ring oscillator that drives charge-pump stages to generate high-voltage arbitrary waveforms. (b) Principles of waveform generation, demonstrated by the output response to the applied control signals in the time domain. The inset shows the change in oscillation frequency in response to a frequency control input, from 88 MHz to 208 MHz, which modifies the output waveform. (c) Measured arbitrary waveforms generated by the ASIC with different shapes, amplitudes, periods, and offset values. (d) Generation of the linearized sawtooth electrical waveform used in the LiDAR measurements; digital and analog control signals are modulated in the time domain to fine-tune the output.

Figure caption: (a) Electrical waveform generated by the ASIC. Blue circles highlight the ~16 μs segment used for ranging and linearity analysis; the red curve is a linear fit to that segment. (b) Time-frequency map of the laser chirp obtained via heterodyne detection with an auxiliary laser; the RBW is set to 10 MHz. (c) Optical spectrum of the Vernier laser output, featuring a 50 dB side-mode suppression ratio. (d) Optical spectrum after the EDWA, with >20 mW optical power. (e) Instantaneous frequency of the optical chirp obtained via delayed homodyne measurement (inset: experimental setup); the red dashed line corresponds to the linear fit. The excursion of the chirp equates to 1.78 GHz over a 16 μs period. (f) Nonlinearity of the laser chirp inferred from (e); the RMSE nonlinearity equates to 0.057%, with the major chirp deviation from the linear fit lying within ±2 MHz. (g) The frequency beatnote in the delayed homodyne measurement corresponds to the reference MZI delay of ~10 m; the 90% fraction of the beatnote signal is taken for the Fourier transformation. (h) LiDAR resolution inferred from the FWHM of the MZI beatnotes over >20,000 realizations; the most probable resolution value is 11.5 cm, while the native resolution is 9.3 cm, corresponding to 1.61 GHz (90% of 1.78 GHz).

Figure caption: (a) Schematics of the experimental setup for ranging experiments. The amplified laser chirp scans the target scene via a set of galvo mirrors; a digital sampling oscilloscope (DSO) records the balanced-detected beating of the reflected and reference optical signals. CIRC: circulator; COL: collimator; BPD: balanced photodetector. (b) Point cloud consisting of ~10⁴ pixels, featuring a doughnut on a cone and the letters C and S as a target 10 m away from the collimator. (c) The Fourier transform over one period, highlighting the collimator, circulator, and target reflection beatnotes; a Blackman-Harris window function was applied to the time trace prior to the Fourier transformation. (d) Detection histogram of (b). (e) Single-point imaging depth histogram indicating 1.5 cm precision of the LiDAR source.

Monday, May 13, 2024

SI Sensors introduces custom CIS design services

Custom CMOS image sensor design on a budget
 
Specialised Imaging Ltd reports on the recent market launch of SI Sensors (Cambridge, UK) - a new division of the company focused on the development of advanced CMOS image sensors.
 
Drawing upon a team of specialists with a broad range of experience in image sensor design, SI Sensors is creating custom image sensor designs with cutting-edge performance. In particular, the company’s in-house experts have specialist knowledge of visible and non-visible imaging technologies, optimised light detection and charge transfer, radiation-hard sensor design, and creating CCD-in-CMOS pixels to enable novel imaging techniques such as ultra-fast burst-mode imaging.
 
Philip Brown, General Manager of SI Sensors said, “In addition to developing new sensors for Specialised Imaging’s next generation of ultra-fast imaging cameras utilising the latest foundry technologies, we are developing solutions for other customers with unique image sensor design requirements including for space and defence applications”.
 
He added, “The SI Sensors team also use their skills and experience to develop bespoke image sensor packages that accommodate custom electrical, mechanical, and thermal interface requirements. Our aim is always to achieve the best balance between image sensor performance and cost (optimised value) for customers. To ensure performance and consistent quality and reliability, we perform detailed electro-optical testing from characterisation through to mass-production testing, adhering to industry standards such as EMVA 1288.”
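For context on the EMVA 1288 reference: the standard's central tool is the photon transfer method. Because shot noise is Poissonian, the temporal variance of the dark-corrected signal grows linearly with its mean, and the slope of that line is the system gain K in DN per electron, from which full well, quantum efficiency, and dynamic range follow. A minimal sketch of the idea on synthetic data (illustrative numbers, not SI Sensors' actual test suite):

```python
import numpy as np

rng = np.random.default_rng(1)

K_TRUE = 0.25        # system gain, DN per electron (illustrative)
READ_NOISE_E = 5.0   # read noise, electrons RMS (illustrative)

# Simulate mean/variance pairs at increasing exposure levels.
means, variances = [], []
for electrons in np.linspace(100, 20_000, 15):
    signal_e = rng.poisson(electrons, 100_000) + rng.normal(0, READ_NOISE_E, 100_000)
    dn = K_TRUE * signal_e          # digital numbers out of the (linear) chain
    means.append(dn.mean())
    variances.append(dn.var())

# Photon transfer: var = K * mean + offset, so the fitted slope estimates K.
K_est, _ = np.polyfit(means, variances, 1)
print(f"estimated system gain K = {K_est:.3f} DN/e- (true value {K_TRUE})")
```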
 
For further information on custom CMOS image sensor design and production please visit www.si-sensors.com or contact SI Sensors on +44-1442-827728 or info@si-sensors.com.
 
Specialised Imaging Ltd is a dynamic company focused on niche imaging markets and applications, with particular emphasis on high-speed image capture and analysis. Drawing upon over 20 years’ experience, Specialised Imaging Ltd today are market leaders in the design and manufacture of ultra-fast framing cameras and ultra high-speed video cameras.

Friday, May 10, 2024

NASA develops a 36-pixel sensor

From PetaPixel: https://petapixel.com/2024/04/30/nasa-develops-tiny-yet-mighty-36-pixel-sensor/

NASA Develops Tiny Yet Mighty 36-Pixel Sensor


While NASA’s James Webb Space Telescope is helping astronomers craft 122-megapixel photos 1.5 million kilometers from Earth, the agency’s newest camera performs groundbreaking space science with just 36 pixels. Yes, 36 pixels, not 36 megapixels.

The X-ray Imaging and Spectroscopy Mission (XRISM), pronounced “crism,” is a collaboration between NASA and the Japan Aerospace Exploration Agency (JAXA). The mission’s satellite launched into orbit last September and has been scouring the cosmos for answers to some of science’s most complex questions ever since. The mission’s imaging instrument, Resolve, has a 36-pixel image sensor.

This six-by-six pixel array measures 0.2 inches (five millimeters) per side, which is not so different from the image sensor in the Apple iPhone 15 and 15 Plus. The main camera in those smartphones is eight by six millimeters, albeit with 48 megapixels. That’s 48,000,000 pixels, just a handful more than 36.

How about a full-frame camera, like the Sony a7R V, the go-to high-resolution mirrorless camera? That camera has over 60 megapixels and captures images that are 9,504 by 6,336 pixels. The image sensor has a total of 60,217,344 pixels, 1,672,704 times the number of pixels in XRISM’s Resolve imager.
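Those pixel-count comparisons are easy to verify with a couple of lines of arithmetic (dimensions as given in the article):

```python
a7r_v_pixels = 9504 * 6336     # Sony a7R V image dimensions
print(f"{a7r_v_pixels:,}")     # 60,217,344 total pixels
print(f"{a7r_v_pixels // 36:,} times Resolve's 36 pixels")  # 1,672,704
```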

At this point, it is reasonable to wonder, “What could scientists possibly see with just 36 pixels?” As it turns out, quite a lot.

Resolve detects “soft” X-rays, which are about 5,000 times more energetic than visible light. It examines the Universe’s hottest regions, largest structures, and most massive cosmic objects, like supermassive black holes. While it may not have many pixels, its pixels are extraordinary, producing rich spectral data across the 400 to 12,000 electron-volt range.

“Resolve is more than a camera. Its detector takes the temperature of each X-ray that strikes it,” explains Brian Williams, NASA’s XRISM project scientist at Goddard. “We call Resolve a microcalorimeter spectrometer because each of its 36 pixels is measuring tiny amounts of heat delivered by each incoming X-ray, allowing us to see the chemical fingerprints of elements making up the sources in unprecedented detail.”

Put another way, each of the sensor’s 36 pixels independently and accurately measures the temperature rise caused by the X-ray it absorbs, allowing it to determine the energy of a single particle of electromagnetic radiation.
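To put the microcalorimeter principle in rough numbers: an absorbed photon of energy E raises the pixel temperature by approximately ΔT = E/C, where C is the pixel's heat capacity. A back-of-envelope sketch (the picojoule-per-kelvin heat capacity is an illustrative assumption for a cryogenic microcalorimeter, not a published Resolve specification):

```python
EV_TO_J = 1.602176634e-19  # one electron volt in joules

def temperature_rise(photon_ev: float, heat_capacity_j_per_k: float) -> float:
    """Temperature step from absorbing one photon: dT = E / C."""
    return photon_ev * EV_TO_J / heat_capacity_j_per_k

# Resolve's band spans 400 to 12,000 eV; assume ~1e-12 J/K pixel heat capacity.
for e_ev in (400, 6000, 12000):
    print(f"{e_ev:>6} eV photon -> dT ~ {temperature_rise(e_ev, 1e-12)*1e3:.2f} mK")
```

Operating only 0.05 degrees above absolute zero, as described below, is what lets these milli-kelvin steps stand out against thermal noise.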

There is a lot of information in this data, and scientists can learn an incredible amount about very distant objects from these X-rays.

Resolve can measure particular wavelengths of light so precisely that it can detect the motions of individual elements within a target, “effectively providing a 3D view.” The camera can detect the flow of gas within distant galaxy clusters and track how different elements behave within the debris of supernova explosions.

The 36-pixel image sensor must be extremely cold during scientific operations to pull off this incredible feat.

Videographers may attach a fan to their mirrorless camera to keep it cool during high-resolution video recording. However, for an instrument like Resolve, a fan just won’t cut it.

Using a six-stage cooling system, the sensor is chilled to -459.58 degrees Fahrenheit (-273.1 degrees Celsius), which is just 0.09 degrees Fahrenheit (0.05 degrees Celsius) above absolute zero. By the way, the average temperature of the Universe itself is about -454.8 degrees Fahrenheit (-270.4 degrees Celsius).

While a 36-pixel camera helping scientists learn new things about the cosmos may sound unbelievable, “It’s actually true,” says Richard Kelley, the U.S. principal investigator for XRISM at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.

“The Resolve instrument gives us a deeper look at the makeup and motion of X-ray-emitting objects using technology invented and refined at Goddard over the past several decades,” Kelley continues.

XRISM and Resolve offer the most detailed and precise X-ray spectrum data in the history of astrophysics. With just three dozen pixels, they are charting a new course of human understanding through the cosmos (and putting an end to the megapixel race).