Friday, July 26, 2024

SAE article on L3 autonomy

Link: https://www.sae.org/news/2024/07/adas-sensor-update

Are today’s sensors ready for next-level automated driving?

SAE Level 3 automated driving marks a clear break from the lower levels of driving assistance: it is the dividing line at which the driver can be freed to focus on things other than driving. While the driver may sometimes be required to take control again, responsibility in an accident can shift from the driver to the automaker and its suppliers. Only a few cars have received regulatory approval for Level 3 operation. Thus far, only Honda's Legend (in Japan), the Mercedes-Benz S-Class and EQS sedans with Drive Pilot, and BMW's recently introduced 7 Series offer Level 3 autonomy.

With more vehicles gaining L3 technology and further automated-driving capabilities in development, we wanted to check in with some of the key players in this space and hear the latest industry thinking on best practices for ADAS and AV sensors.

Towards More Accurate 3D Object Detection

Researchers from Japan's Ritsumeikan University have developed DPPFA-Net, an innovative network that combines 3D LiDAR and 2D image data to improve 3D object detection for robots and self-driving cars. Led by Professor Hiroyuki Tomiyama, the team addressed challenges in accurately detecting small objects and aligning 2D and 3D data, especially in adverse weather conditions.

DPPFA-Net incorporates three key modules:

  •  Memory-based Point-Pixel Fusion (MPPF): Enhances robustness against 3D point cloud noise by using 2D images as a memory bank.
  •  Deformable Point-Pixel Fusion (DPPF): Focuses on key pixel positions for efficient high-resolution feature fusion.
  •  Semantic Alignment Evaluator (SAE): Ensures semantic alignment between data representations during fusion.
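
The paper's exact architecture is not reproduced here, but the core idea behind point-pixel fusion can be sketched briefly: project each LiDAR point into the image, sample the 2D feature at that location, and let the point feature attend to the sampled pixel feature. The module below is a minimal illustration in PyTorch; the class name, dimensions, projection handling, and the simple gated attention are assumptions for illustration, not the authors' MPPF/DPPF implementation.

# Illustrative sketch of LiDAR point / image pixel feature fusion
# (simplified stand-in for the point-pixel fusion idea; dimensions,
# projection model, and attention scheme are assumptions).
import torch
import torch.nn as nn

class SimplePointPixelFusion(nn.Module):
    def __init__(self, point_dim=64, pixel_dim=64):
        super().__init__()
        self.q = nn.Linear(point_dim, 64)   # query from 3D point features
        self.k = nn.Linear(pixel_dim, 64)   # key from sampled 2D features
        self.v = nn.Linear(pixel_dim, 64)   # value from sampled 2D features
        self.out = nn.Linear(point_dim + 64, point_dim)

    def forward(self, point_feats, image_feats, uv):
        # point_feats: (N, point_dim) per-point features from a LiDAR backbone
        # image_feats: (C, H, W) feature map from a 2D image backbone
        # uv:          (N, 2) pixel coordinates of each projected 3D point
        C, H, W = image_feats.shape
        u = uv[:, 0].clamp(0, W - 1).long()
        v = uv[:, 1].clamp(0, H - 1).long()
        sampled = image_feats[:, v, u].t()                # (N, C) pixel features
        attn = (self.q(point_feats) * self.k(sampled)).sum(-1, keepdim=True)
        fused = torch.sigmoid(attn) * self.v(sampled)     # gate the 2D evidence
        return self.out(torch.cat([point_feats, fused], dim=-1))

# Toy usage with random data
fusion = SimplePointPixelFusion()
pts = torch.randn(1000, 64)                # 1000 LiDAR point features
img = torch.randn(64, 96, 312)             # image feature map
uv = torch.randint(0, 96, (1000, 2)).float()
out = fusion(pts, img, uv)                 # (1000, 64) fused point features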

The network outperformed existing models in the KITTI Vision Benchmark, achieving up to 7.18% improvement in average precision under various noise conditions. It also performed well in a new dataset with simulated rainfall.

Ritsumeikan University researchers said this advancement has significant implications for self-driving cars and robotics: better 3D object detection is expected to contribute to fewer accidents, improved traffic flow and safety, enhanced robot capabilities across applications, and accelerated development of autonomous systems.

Aeva

Aeva has introduced Atlas, the first 4D lidar sensor designed for mass-production automotive applications. Atlas aims to enhance advanced driver assistance systems (ADAS) and autonomous driving, meeting automotive-grade requirements.

The sensor is powered by two key innovations:

  •  Aeva CoreVision, a fourth-generation lidar-on-chip module that incorporates all key lidar elements in a smaller package using silicon photonics technology.
  •  Aeva X1, a new system-on-chip (SoC) lidar processor that integrates data acquisition, point cloud processing, the scanning system, and application software.

These innovations make Atlas 70% smaller and four times more power-efficient than Aeva's previous generation, enabling various integration options without active cooling. Atlas uses Frequency Modulated Continuous Wave (FMCW) 4D lidar technology, which offers improved object detection range and immunity to interference. It also provides a 25% greater detection range for low-reflectivity targets and a maximum range of 500 meters.

Atlas is accompanied by Aeva’s perception software, which harnesses advanced machine learning-based classification, detection and tracking algorithms. Incorporating the additional dimension of velocity data, Aeva’s perception software provides unique advantages over conventional time-of-flight 3D lidar sensors.
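
Aeva does not publish its signal-processing chain, but the principle behind FMCW's per-point velocity can be sketched with textbook equations: an up-chirp and a down-chirp produce two beat frequencies whose average encodes range and whose half-difference encodes the Doppler shift (radial velocity). The chirp parameters, sign convention, and example numbers below are illustrative assumptions, not Atlas specifications.

# Textbook FMCW range/velocity recovery from up- and down-chirp beat
# frequencies (illustrative only; parameters are assumptions, not Aeva specs).
c = 3.0e8            # speed of light, m/s
wavelength = 1.55e-6 # typical telecom-band lidar wavelength, m (assumed)
B = 1.0e9            # chirp bandwidth, Hz (assumed)
T = 10e-6            # chirp duration, s (assumed)
slope = B / T        # chirp slope, Hz per second

def range_and_velocity(f_beat_up, f_beat_down):
    """Average of the two beats gives range; half their difference gives Doppler."""
    f_range = 0.5 * (f_beat_up + f_beat_down)
    f_doppler = 0.5 * (f_beat_down - f_beat_up)
    distance = c * f_range / (2 * slope)          # meters
    radial_velocity = wavelength * f_doppler / 2  # m/s, positive = approaching here
    return distance, radial_velocity

# Example: a target at ~150 m closing at ~20 m/s pushes the two beats apart.
d, v = range_and_velocity(f_beat_up=74.2e6, f_beat_down=125.8e6)
print(f"range = {d:.1f} m, radial velocity = {v:.1f} m/s")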

Atlas is expected to be available for production vehicles starting in 2025, with earlier sample availability for select customers. Aeva's co-founder and CTO Mina Rezk said that Atlas will enable OEMs to equip vehicles with advanced safety and automated driving features at highway speeds, addressing previously unsolvable challenges. Rezk believes Atlas will accelerate the industry's transition to Frequency-Modulated Continuous-Wave 4D lidar technology, which is increasingly considered the end state for lidar due to its enhanced perception capabilities and unique instant velocity data.

Luminar

Following several rocky financial months and five years of development, global automotive technology company Luminar is launching Sentinel, its full-stack software suite. Sentinel enables automakers to accelerate advanced safety and autonomous functionality, including 3D mapping, simulation, and dynamic lidar features. A study by the Swiss Re Institute showed that cars equipped with Luminar lidar and Sentinel software demonstrated up to a 40% reduction in accident severity.

Developed primarily in-house with support from partners, including Scale AI, Applied Intuition, and Civil Maps (which Luminar acquired in 2022), Sentinel leverages Luminar's lidar hardware and AI-based software technologies.

CEO and founder Austin Russell said Luminar has been building next-generation AI-based safety and autonomy software since 2017. “The majority of major automakers don't currently have a software solution for next-generation assisted and autonomous driving systems,” he said. “Our launch couldn't be more timely with the new NHTSA mandate for next-generation safety in all U.S.-production vehicles by 2029, and as of today, we're the only solution we know of that meets all of these requirements.”

Mobileye

Mobileye has secured design wins with a major Western automaker for 17 vehicle models launching in 2026 and beyond. The deal covers Mobileye's SuperVision, Chauffeur, and Drive platforms, offering varying levels of autonomous capabilities from hands-off, eyes-on driving to fully autonomous robotaxis.

All systems will use Mobileye's EyeQ 6H chip, integrating sensing, mapping, and driving policy. The agreement includes customizable software to maintain brand-specific experiences.

CEO Amnon Shashua called this an "historic milestone" in automated driving, emphasizing the scalability of Mobileye's technology. He highlighted SuperVision's role as a bridge to eyes-off systems for both consumer vehicles and mobility services.

Initial driverless deployments are targeted for 2026.

BMW 

BMW's new 7 Series received the world's first approval for combining Level 2 and Level 3 driving assistance systems in the same vehicle. This milestone offers drivers the unique benefits of both systems.

The Level 2 BMW Highway Assistant enhances comfort on long journeys, operating at speeds up to 81 mph (130 km/h) on motorways with separated carriageways. It allows drivers to take their hands off the steering wheel for extended periods while remaining attentive. The system can also perform lane changes autonomously or on the driver's confirmation.

The Level 3 BMW Personal Pilot L3 enables highly automated driving at speeds up to 37 mph (60 km/h) in specific conditions, such as motorway traffic jams. Drivers can temporarily divert their attention from the road but must retake control when prompted.

This combination of systems offers a comprehensive set of functionalities for a more comfortable and relaxing driving experience on both long and short journeys. The BMW Personal Pilot L3, which includes both systems, is available exclusively in Germany for €6,000 (around $6,500). Current BMW owners can add the L2 Highway Assistant to their vehicle, if applicable, free of charge starting August 24.

Mercedes-Benz 

Mercedes-Benz's groundbreaking Drive Pilot Level 3 autonomous driving system is available for the S-Class and EQS Sedan. It allows drivers to disengage from driving in specific conditions, such as heavy traffic under 40 mph (64 km/h) on approved freeways. The system uses advanced sensors – including radar, lidar, ultrasound, and cameras – to navigate and make decisions.

While active, Drive Pilot enables drivers to use in-car entertainment features on the central display. However, drivers must remain alert and take control when requested. Drive Pilot functions under the following conditions:

  •  Clear lane markings on approved freeways
  •  Moderate to heavy traffic with speeds under 40 mph
  •  Daytime lighting and clear weather
  •  Driver visible to the camera located above the driver's display
  •  Not in a construction zone
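
None of Mercedes' software is public, but the list above amounts to an operational design domain (ODD) gate: the Level 3 mode may engage only while every condition holds, and any condition going false forces a handover request. The toy check below only mirrors the published list; the structure, names, and thresholds are illustrative assumptions, not Mercedes-Benz code.

# Toy ODD gate mirroring the published Drive Pilot conditions (illustrative only).
from dataclasses import dataclass

SPEED_LIMIT_MPH = 40  # per the published operating conditions

@dataclass
class VehicleState:
    on_approved_freeway: bool
    lane_markings_clear: bool
    speed_mph: float
    traffic_dense: bool            # moderate-to-heavy surrounding traffic
    daytime_clear_weather: bool
    driver_visible_to_camera: bool
    in_construction_zone: bool

def level3_may_engage(s: VehicleState) -> bool:
    """True only while every published condition is satisfied."""
    return (s.on_approved_freeway
            and s.lane_markings_clear
            and s.speed_mph < SPEED_LIMIT_MPH
            and s.traffic_dense
            and s.daytime_clear_weather
            and s.driver_visible_to_camera
            and not s.in_construction_zone)

# Example: stop-and-go traffic on an approved freeway in daylight
state = VehicleState(True, True, 22.0, True, True, True, False)
print(level3_may_engage(state))  # True; any condition going false ends L3 operation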

Drive Pilot relies on a high-definition 3D map of the road and surroundings. It's currently certified for use on major freeways in California and parts of Nevada.

NPS

At CES 2024, Neural Propulsion Systems (NPS) demonstrated its ultra-resolution imaging radar software for automotive vision sensing. The technology significantly improves radar precision without expensive lidar sensors or weather-related limitations.

NPS CEO Behrooz Rezvani likens the improvement to enhancing automotive imaging from 20/20 to better than 20/10 vision. The software enables existing sensors to resolve to one-third of the radar beam-width, creating a 10 times denser point cloud and reducing false positives by over ten times, the company said.
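
The two headline figures are consistent with each other under a simple assumption: if angular resolution improves to one-third of the beam-width in both azimuth and elevation, the number of independently resolvable cells (and hence point-cloud density) grows roughly ninefold, in line with the claimed ~10x. The two-axis assumption below is ours, not the company's.

# Back-of-the-envelope check of NPS's resolution vs. density figures
# (assumes the 3x angular improvement applies to both azimuth and elevation).
resolution_gain_per_axis = 3          # resolve to one-third of the beam-width
density_gain = resolution_gain_per_axis ** 2
print(density_gain)                   # 9 -> roughly the claimed ~10x denser point cloud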

The demonstration compared performance using Texas Instruments 77 GHz chipsets with and without NPS technology. Former GM R&D vice president and Waymo advisor Lawrence Burns noted that automakers can use NPS to enhance safety, performance, and cost-effectiveness of driver-assistance features using existing hardware.

NPS' algorithms are based on the Atomic Norm framework, rooted in magnetic resonance imaging technology. The software can be deployed on various sensing platforms and implemented on processors with neural network capability. Advanced applications of NPS software with wide aperture multi-band radar enable seeing through physical barriers like shrubs, trees, and buildings — and even around corners. The technology is poised to help automakers meet NHTSA's proposed stricter standards for automatic emergency braking, aiming to reduce pedestrian and bicycle fatalities on U.S. roads.

Wednesday, July 24, 2024

Perovskite sensor with 3x more light throughput

Link: https://www.admin.ch/gov/en/start/documentation/media-releases.msg-id-101189.html


Dübendorf, St. Gallen and Thun, 28.05.2024 - Capturing three times more light: Empa and ETH researchers are developing an image sensor made of perovskite that could deliver true-color photos even in poor lighting conditions. Unlike conventional image sensors, where the pixels for red, green and blue lie next to each other in a grid, perovskite pixels can be stacked, greatly increasing the amount of light each individual pixel can capture.

Family, friends, vacations, pets: Today, we take photos of everything that comes in front of our lens. Digital photography, whether with a cell phone or camera, is simple and hence widespread. Every year, the latest devices promise an even better image sensor with even more megapixels. The most common type of sensor is based on silicon, which is divided into individual pixels for red, green and blue (RGB) light using special filters. However, this is not the only way to make a digital image sensor – and possibly not even the best.

A consortium comprising Maksym Kovalenko from Empa's Thin Films and Photovoltaics laboratory, Ivan Shorubalko from Empa's Transport at Nanoscale Interfaces laboratory, as well as ETH Zurich researchers Taekwang Jang and Sergii Yakunin, is working on an image sensor made of perovskite capable of capturing considerably more light than its silicon counterpart. In a silicon image sensor, the RGB pixels are arranged next to each other in a grid. Each pixel only captures around one-third of the light that reaches it. The remaining two-thirds are blocked by the color filter.
Pixels made of lead halide perovskites do not need an additional filter: it is already "built into" the material, so to speak. Empa and ETH researchers have succeeded in producing lead halide perovskites in such a way that they only absorb the light of a certain wavelength – and therefore color – but are transparent to the other wavelengths. This means that the pixels for red, green and blue can be stacked on top of each other instead of being arranged next to each other. The resulting pixel can absorb the entire wavelength spectrum of visible light. "A perovskite sensor could therefore capture three times as much light per area as a conventional silicon sensor," explains Empa researcher Shorubalko. Moreover, perovskite converts a larger proportion of the absorbed light into an electrical signal, which makes the image sensor even more efficient.
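
The "three times as much light" figure follows directly from the geometry described above: a Bayer-filtered silicon photosite passes only its own color band (roughly one-third of the visible photons landing on it), while a stacked perovskite pixel absorbs all three bands at every location. The accounting below is an idealization (even R/G/B split, fill factor and quantum efficiency ignored), not a measurement.

# Idealized photon accounting: Bayer-filtered silicon vs. stacked perovskite pixels.
photons_per_area = 300                   # arbitrary incident photon count per pixel site

bayer_captured = photons_per_area / 3    # one band passes, two are filtered out
stacked_captured = photons_per_area      # R, G and B layers absorb in sequence

print(stacked_captured / bayer_captured) # -> 3.0, the "three times more light" claim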

Kovalenko's team first fabricated individual functioning stacked perovskite pixels in 2017. To take the next step towards real image sensors, the ETH-Empa consortium led by Kovalenko has partnered with the electronics industry. "The challenges to address include finding new materials fabrication and patterning processes, as well as design and implementation of the perovskite-compatible read-out electronic architectures," emphasizes Kovalenko. The researchers are now working on miniaturizing the pixels, which were originally up to five millimeters in size, and assembling them into a functioning image sensor. "In the laboratory, we don't produce the large sensors with several megapixels that are used in cameras," explains Shorubalko, "but with a sensor size of around 100,000 pixels we can already show that the technology works."

Good performance with less energy
Another advantage of perovskite-based image sensors is their manufacture. Unlike other semiconductors, perovskites are less sensitive to material defects and can therefore be fabricated relatively easily, for example by depositing them from a solution onto the carrier material. Conventional image sensors, on the other hand, require high-purity monocrystalline silicon, which is produced in a slow process at almost 1500 degrees Celsius.

The advantages of perovskite-based image sensors are apparent. It is therefore not surprising that the research project also includes a partnership with industry. The challenge lies in the stability of perovskite, which is more sensitive to environmental influences than silicon. "Standard processes would destroy the material," says Shorubalko. "So we are developing new processes in which the perovskite remains stable. And our partner groups at ETH Zurich are working on ensuring the stability of the image sensor during operation."

If the project, which will run until the end of 2025, is successful, the technology will be ready for transfer to industry. Shorubalko is confident that the promise of a better image sensor will attract cell phone manufacturers. "Many people today choose their smartphone based on the camera quality because they no longer have a stand-alone camera," says the researcher. A sensor delivering excellent images in much poorer lighting conditions could be a major advantage.

Monday, July 22, 2024

Sunday, July 21, 2024

Job Postings - Week of 21 July 2024

Anduril Industries

Digital IC Designer

Santa Barbara, California, USA

Link

CERN

Postdoctoral research position on detector R&D for experimental particle physics (LHCb)

Lucerne, Switzerland

Link

Ametek – Forza Silicon

Mixed Signal Design Engineer

Pasadena, California, USA

Link

Tsung-Dao Lee Institute

Postdoctoral Positions in Muon Imaging

Shanghai, China

Link

NASA

Far-Infrared Detectors for Space-Based Low-Background Astronomy

Greenbelt, Maryland, USA

Link

ESRF

Detector Engineer

Grenoble, France

Link

Tokyo Electron

Heterogeneous Integration Process Engineer

Albany, New York, USA

Link

University of Oxford

Postdoctoral Research Assistant in Dark Matter Searches

Oxford, England, UK

Link

Lockheed Martin

Electro-Optical Senior Engineer

Denver, Colorado, USA

Link

Friday, July 19, 2024

Videos du jour - Sony, onsemi

Sony UV and SWIR sensors demo:



Webinar by ON Semi on image sensor selection:



ON Semi Hyperlux image sensor demo:



Thursday, July 18, 2024

Senseeker Expands Low-Noise Neon Digital Readout IC Family for SWIR Applications

The 10 µm, 1280 x 1024 Neon® RD0131 DROIC is available now for commercial use.


Santa Barbara, California (July 16, 2024) — Senseeker Corp, a leading innovator of digital infrared image sensing technology, has announced the availability of the Neon® RD0131, an advanced digital readout integrated circuit (DROIC) that expands the Neon product family with the addition of a high-definition 1280 x 1024 format.

“The new larger format size of the Neon RD0131 is a welcome addition to the Neon DROIC family,” said Dr. Martin H. Ettenberg, President and CEO at Princeton Infrared Technologies. “Senseeker’s approach to offering families of compatible products allows reuse of test equipment, electronics and software, greatly simplifying the development of new high-performance SWIR cameras and imagers that we provide for the Industrial, Scientific and Defense markets.”

The Neon RD0131, with its 1280 x 1024 format and 10 µm pitch, has triple-gain modes with programmable well capacities of 22 ke-, 160 ke- and 1.1 Me-. The DROIC achieves a read noise of 15 electrons at room temperature in high-gain mode.
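
For context, the quoted figures imply a per-frame dynamic range in the low-60 dB region in high-gain mode, using the usual full-well-over-read-noise definition. The calculation below uses only the numbers stated above; read noise for the medium- and low-gain modes is not published, so those modes are omitted.

# Dynamic range implied by the quoted high-gain figures: DR = 20*log10(full well / read noise).
import math

full_well_e = 22_000   # high-gain well capacity, electrons (quoted)
read_noise_e = 15      # high-gain read noise, electrons rms (quoted)

dr_db = 20 * math.log10(full_well_e / read_noise_e)
print(f"{dr_db:.1f} dB")   # ~63.3 dB in high-gain mode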

“The Neon RD0131 CTIA DROIC is the second chip in our Neon product family that has proven to be a hit with customers that are developing solutions for low-light applications such as short-wave infrared (SWIR) and low-current technologies such as quantum dot-based detectors,” said Kenton Veeder, President of Senseeker. “We have included the popular features and operating modes that Senseeker is known for, including on-chip temperature monitoring and programmable multiple high-speed windows to observe and track targets at thousands of frames per second.”

The Neon RD0131 is available in full or quarter wafers now and is supported by Senseeker’s CoaxSTACK™ electronics kit, CamIRa® imaging software and sensor test units (STUs) that, together, enable testing and evaluation of Neon-based focal plane arrays quickly and efficiently.

The Neon® RD0131-L10x is a low-noise, triple-gain digital readout integrated circuit (DROIC) with a 10 µm pitch pixel and a capacitive transimpedance amplifier (CTIA) front-end circuit. The DROIC was developed for low-light applications such as short-wave infrared (SWIR) and low-current detector technologies such as quantum dot-based detectors, and is designed for use in high operating temperature (HOT) conditions.

  • 10 μm pitch, P-on-N polarity, CTIA input
  • Global snapshot, integrate-while-read (IWR) operation
  • Three selectable gains with well capacities of 22 ke- (high-gain), 160 ke- (medium-gain) and 1.1 Me- (low-gain)
  • Correlated Double Sampling (CDS) on and off chip
  • Zero-signal noise floor of 15 e- rms (high-gain using CDS, room temperature)
  • Synchronous or asynchronous integration control
  • High-speed windowing with multiple windows
  • Serialized to 16 bits per pixel (15 data, 1 valid flag bit)
  • SPI control interface (SenSPI®) and optional frame clock

 Neon RD0131 dies on wafer

 
Image of a bruised apple captured using the Neon ROIC with a short-wave infrared (SWIR) detector.


 

Monday, July 15, 2024

International Image Sensor Workshop 2025 First Call for Papers


FIRST CALL FOR PAPERS
ABSTRACTS DUE DEC 19, 2024
2025 International Image Sensor Workshop
Awaji Yumebutai International Conference Center, Hyōgo, Japan
(June 2 - 5, 2025)

The 2025 International Image Sensor Workshop (IISW) provides a biennial opportunity to present innovative work in the area of solid-state image sensors and share new results with the image sensor community. The event is intended for image sensor technologists; in order to encourage attendee interaction and a shared experience, attendance is limited, with strong acceptance preference given to workshop presenters. As is the tradition, the 2025 workshop will emphasize an open exchange of information among participants in an informal, secluded setting on Awaji Island in Hyōgo, Japan.

The scope of the workshop includes all aspects of electronic image sensor design and development. In addition to regular oral and poster papers, the workshop will include invited talks and announcement of International Image Sensors Society (IISS) Award winners.

Papers on the following topics are solicited:

Image Sensor Design and Performance
CMOS imagers, CCD imagers, SPAD sensors
New and disruptive architectures
Global shutter image sensors
Low noise readout circuitry, ADC designs
Single photon sensitivity sensors
High frame rate image sensors
High dynamic range sensors
Low voltage and low power imagers
High image quality; Low noise; High sensitivity
Improved color reproduction
Non-standard color patterns with special digital processing
Imaging system-on-a-chip, on-chip image processing
Event-based image sensors

Pixels and Image Sensor Device Physics
New devices and pixel structures
Advanced materials
Ultra-miniaturized pixel development, testing, and characterization
New device physics and phenomena
Electron multiplication pixels and imagers
Techniques for increasing QE, well capacity, reducing crosstalk, and improving angular response
Frontside illuminated, backside illuminated, and stacked pixels and pixel arrays
Pixel simulation: optical and electrical simulation, 2D and 3D, CAD for design and simulation, improved models

Application Specific Imagers
Image sensors and pixels for range sensing: LIDAR, TOF, RGBZ, structured light, stereo imaging, etc.
Image sensors with enhanced spectral sensitivity (NIR, UV, IR)
Sensors for DSC, DSLR, mobile, digital video cameras and mirror-less cameras
Array imagers and sensors for multi-aperture imaging, computational imaging, and machine learning
Sensors for medical applications, microbiology, genome sequencing
High energy photon and particle sensors (X-ray, radiation)
Line arrays, TDI, very large format imagers
Multi and hyperspectral imagers
Polarization sensitive imagers

Image Sensor Manufacturing and Testing
New manufacturing techniques
Wafer-on-wafer and chip-on-wafer stacking technologies
Backside thinning
New characterization methods
Packaging and testing: reliability, yield, cost
Defects, noise, and leakage currents
Radiation damage and radiation hard imagers

On-chip Optics and Color Filters
Advanced optical path, color filters, microlens, light guides
Nanotechnologies for Imaging
Wafer level cameras

Submission of abstracts:

An abstract should consist of a single page of text (500 words maximum) with up to two pages of illustrations (3 pages maximum), and should include the authors' name(s), affiliation, mailing address, telephone number, and e-mail address.

The deadline for abstract submission is 11:59pm, Thursday Dec 19, 2024 (GMT).
To submit an abstract, please go to: https://cmt3.research.microsoft.com/IISW2025
The submission website should be open by Aug 1, 2024.

The first time you visit the paper submission site, you'll need to click on "Create Account". Once you create and verify your account with your email address, you will be able to submit abstracts by logging in and clicking “Create New Submission”.

Please visit https://imagesensors.org/CFP2025 for complete instructions and any updates to the abstract and paper submission procedures.

Abstracts will be considered on the basis of originality and quality. High quality papers on work in progress are also welcome. Abstracts will be reviewed confidentially by the Technical Program Committee.

Key Dates:
Authors will be notified of the acceptance of their abstract no later than Feb 10, 2025.
The final-form 4-page paper is due Mar 22, 2025.
Presentation material is due May 1, 2025.

Location:
The IISW 2025 will be held at the International Conference Center on Awaji Island in Hyōgo Prefecture, Japan. The beautiful venue is about one hour from Kansai International Airport. Limousine buses chartered by IISW will pick up attendees at JR Shin-Kobe Station and JR Sannomiya Station.

Registration, Workshop fee, and Hotel Reservation:
Registration details and hotel reservation information will be provided in the Final Announcement of the Workshop.

Forthcoming announcements and additional information will be posted on the 2025 Workshop page of the International Image Sensor Society website at: https://www.imagesensors.org/



Thursday, July 11, 2024

Last chance to buy Sony CCD sensors

Back in 2015, we shared news of Sony discontinuing their CCD sensors.

The "last time buy" period for these sensors is nearing the end.

Framos: https://www.framos.com/en/news/framos-announces-last-time-buy-deadline-for-sony-ccd-sensors

Taking into consideration current market demand and customer feedback, Sony has decided to revise the "Last Time Buy PO submission" deadline to the end of September 2024. Final shipments to FRAMOS remain unchanged at the end of March 2026. With these changes, FRAMOS invites all customers to submit their final Last Time Buy purchase orders no later than September 24th, 2024, to ensure timely processing and submission to Sony by the new Last Time Buy deadline.
Important dates:
  •  Deadline for Last Time Buy purchase orders received by FRAMOS: September 24th, 2024
  •  Final delivery of accepted Last Time Buy purchase orders from FRAMOS: March 31st, 2026

SVS-Vistek: https://www.svs-vistek.com/en/news/svs-news-article.php?p=svs-vistek-offers-last-time-buy-options-or-replacement-products-for-ccd-cameras

For customers who wish to continue using CCD-based designs, SVS-Vistek has initiated a Last-Time-Buy (LTB) period, effective immediately, followed by a subsequent Last-Time-Delivery (LTD) period. This allows our customers to continue to produce and sell their CCD-based products, ensuring reliable delivery. Orders can be placed until August 31, 2024 (Last-Time-Buy). SVS-Vistek will then offer delivery of LTB cameras until August 31, 2026 (Last-Time-Delivery). We advise our customers individually and try to find the best solution together. 

Wednesday, July 10, 2024

Forbes blog on Obsidian thermal imagers

Link: https://www.forbes.com/sites/davidhambling/2024/05/22/new-us-technology-makes-more--powerful-thermal-imagers-at-lower-cost/

[some excerpts below]

New U.S. Technology Makes More Powerful Thermal Imagers At Lower Cost 

Thermal imaging has been a critical technology in the war in Ukraine, spotting warm targets like vehicles and soldiers in the darkest nights. Military-grade thermal imagers like those used on the big Baba Yaga night bombers are far too expensive for drone makers assembling $400 FPV kamikaze drones, who have to rely on lower-cost devices. But a new technology developed by U.S. company Obsidian Sensors Inc. could transform the thermal imaging market with affordable high-resolution sensors.

...

Older digital cameras were based on CCDs (charge-coupled devices); the current generation uses more affordable CMOS imaging sensors, which produce an electrical charge in response to light. The vast majority of thermal imagers use a different technology: an array of microbolometers, miniature devices whose pixels absorb infrared energy and measure the resulting change in resistance. The conventional design neatly integrates the microbolometers and the circuits that read them on the same silicon chip.
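
A microbolometer pixel works, in effect, as a tiny thermistor: absorbed infrared power warms the suspended membrane, and the readout senses the resulting resistance change through the pixel's temperature coefficient of resistance (TCR). The sketch below uses generic textbook-style values (vanadium-oxide-like TCR, typical thermal conductance); the numbers are assumptions, not Obsidian's specifications.

# Simplified microbolometer signal model: absorbed IR power -> temperature rise ->
# resistance change via the TCR. Values are generic assumptions for illustration.
absorbed_power_w = 10e-9       # absorbed IR power on one pixel, watts
thermal_conductance = 1e-7     # pixel-to-substrate thermal conductance, W/K
tcr = -0.02                    # temperature coefficient of resistance, 1/K (VOx-like)
r0_ohms = 100e3                # nominal pixel resistance, ohms

delta_t = absorbed_power_w / thermal_conductance   # steady-state temperature rise, K
delta_r = r0_ohms * tcr * delta_t                  # resistance change seen by the readout

print(f"dT = {delta_t*1000:.1f} mK, dR = {delta_r:.0f} ohm")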

...

John Hong, CEO of San Diego-based Obsidian Sensors, believes he has a better approach, one that can scale up to high resolution at low cost and, crucially, high volume at established foundries. The new design does not integrate everything in one unit but separates the bolometer array from the readout circuits. This is more complex but allows a different manufacturing technique to be used.

The readout circuits are still on silicon, but the sensor array is produced on a sheet of glass, leveraging technology perfected for flat-screen TVs and mobile phone displays. Large sheets of glass are far cheaper to process than small wafers of silicon and bolometers made on glass cost about a hundred times less than on silicon.

Hong says the process can easily produce multi-megapixel arrays. Obsidian is already producing test batches of VGA sensors, and plans to move to 1280x1024 this year and 1920x1080 in 2025.

Obsidian has been quietly developing its technology for six years and is now able to produce units for evaluation at a price three to four times lower than comparable models. Further evolution of the manufacturing process will bring prices even lower.

That could bring a 640x480 VGA thermal imager down to well below $200.

...

Hong says they plan to sell a thousand VGA cameras this year on a pilot production run, and are currently raising a series B to hit much larger volumes in 2025 and beyond. That should be just about right to surf the wave of demand in the next few years.

 

The thermal image from Obsidian's sensor (left) shows pedestrians who are invisible in the glare in the corresponding digital camera image (right). [Obsidian Sensors]