Tuesday, April 30, 2019

Image Sensors in Jewelry

There happens to be one emerging application that is not sensitive to image quality, pixel size, power consumption, or any other parameter - jewelry. It is not clear how large this market is:

Graphene Photodetectors Overview

Arxiv.org paper "Recent Progress and Future Prospects of 2D-based Photodetectors" by Nengjie Huo and Gerasimos Konstantatos from the Barcelona Institute of Science and Technology and ICREA reviews graphene imagers developed in recent years.

"Conventional semiconductors such as silicon and InGaAs based photodetectors have encountered a bottleneck in modern electronics and photonics in terms of spectral coverage, low resolution, non-transparency, non-flexibility and CMOS-incompatibility. New emerging 2D materials such as graphene, TMDs and their hybrid systems thereof, however, can circumvent all these issues benefiting from mechanical flexibility, extraordinary electronic and optical properties, as well as wafer-scale production and integration. Heterojunction-based photodiodes based on 2D materials offer ultrafast and broadband response from visible to far infrared range. Phototransistors based on 2D hybrid systems combined with other material platforms such as quantum dots, perovskites, organic materials, or plasmonic nanostructures yield ultrasensitive and broadband light detection capabilities. Notably the facile integration of 2D-photodetectors on silicon photonics or CMOS platforms paves the way towards high performance, low-cost, broadband sensing and imaging modalities."

Monday, April 29, 2019

ON Semi Analyst Day and Q1 2019 Results

ON Semi held its Analyst Day on March 9, 2019 and also announced its Q1 earnings a few days ago. A few quotes:


From the SeekingAlpha earnings call transcript:

"In ADAS applications, our momentum continues to accelerate. We are seeing strong interest from customers in our broad portfolio of automotive image sensor products. Recall that we are the only provider of automotive image sensors with complete portfolio of 1 megapixel, 2 megapixel and 8 megapixel image sensors. The breadth of our portfolio has enabled us to secure many design wins from leading global OEMs and tier-1s."

Saturday, April 27, 2019

Velodyne Abandons its San Jose Mega-Factory Project?

Velodyne signs an agreement with Nikon, under which Sendai Nikon Corporation will manufacture LiDARs for Velodyne, with plans to start mass production in the second half of 2019. “Mass production of our high-performance lidar sensors is key to advancing Velodyne’s immediate plans to expand sales in North America, Europe, and Asia,” said Marta Hall, President and CBDO, Velodyne Lidar. “For years, Velodyne has been perfecting lidar technology to produce thousands of lidar units for autonomous vehicles (AVs) and advanced driver assistance systems (ADAS). It is our goal to produce lidar in the millions of units with manufacturing partners such as Nikon.”

Compare this last statement with a previous Velodyne PR on its Megafactory: "Located in San Jose, CA, the enormous facility not only has enough space for high-volume manufacturing, but also for the precise distance and ranging alignment process for LiDAR sensors as they come off the assembly line. ...more than one million LiDAR sensors [are] expected to be built in the facility in 2018. That high-volume manufacturing will feed the global demand for Velodyne’s solid-state hybrid LiDAR."

Instead of shipping 1M LiDARs in 2018 alone, "Velodyne has shipped a cumulative total of 30,000 lidar sensors" from the start of the company to the end of March 2019.

One of the reasons for expanding in Japan is said to be cost: “Working with Nikon, an expert in precision manufacturing, is a major step toward lowering the cost of our lidar products. Nikon is notable for expertly mass-producing cameras while retaining high standards of performance and uncompromising quality. Together, Velodyne and Nikon will apply the same attention to detail and quality to the mass production of lidar. Lidar sensors will retain the highest standards while at the same time achieving a price that will be more affordable for customers around the world,” says Marta Hall. However, Japan is not a cheap manufacturing location these days, and it's not clear how production in Japan makes Velodyne LiDARs cheaper.

The companies say they will continue to investigate further areas of a wide-ranging and multifaceted business alliance.

One major piece missing from this PR is the fate of the Velodyne Mega-Factory in San Jose. Half a year ago, the company appointed a new COO with responsibility for introducing even more automation at the site. It looks like these efforts were not successful enough.

Friday, April 26, 2019

Caeleste MAF HDR GS BSI Rad-Hard Sensor

The Caeleste ELFIS imager is said to be the first image sensor ever to combine the following features:
  • True HDR (“MAF HDR”, motion-artifact-free HDR)
  • Global shutter (GS)
  • Low-noise CDS readout
  • Global shutter (IWR) without dark current penalty
  • Backside illumination
  • TDI radiation-hard design

It has been developed for ESA in collaboration with LFoundry and Airbus. A whitepaper explains the sensor's operation:

Sony Reports FY2018 Results

Sony's results for FY2018, which ended on March 31, 2019, show 9.5% YoY growth in image sensor sales. The company forecasts an 18% sales growth next year:

Wednesday, April 24, 2019

ON Semi Gen3 SiPM LiDAR Demo

ON Semi demos its 3rd Gen SiPM LiDAR design:



Update: The demo has been prepared in collaboration with the startup Phantom Intelligence.

Tuesday, April 23, 2019

Tesla Self-Driving Chip Supports 2.5Gpix/s Camera Input

Tesla revealed its Gen. 3 HW chip in its Autonomy Day presentation:

Sony to Delay Automotive 7.42MP Sensor Production to 2020

Nikkei: Sony has decided to delay mass production of its IMX424 and IMX324 7.42MP automotive sensors to 2020. The company shipped the first samples in 2017, aiming for volume production in June 2018. However, the production start was delayed due to specification changes, additional functions, and market trends. Now, the company has finally decided when to start volume production.

ON Semi, too, plans to start volume production of its 8MP sensor in the early 2020s. Sony presented a demo of its new cameras on a model car - 4 cameras are 7.42MP and another 4 are 2MP:

Monday, April 22, 2019

ResearchInChina: Automotive Thermal Vision is of Little Value

ResearchInChina publishes a report "Global and China Automotive Night Vision System Industry Report, 2019-2025." A few quotes:

"For the automotive sector, night vision system is of little value and seems like “chicken ribs” – tasteless when eaten but a pity to throw away.

In function, the night vision system is a special solution for automobiles in that it enables a vehicle to see an object more than 300m ahead at night (compared with a mere 80m offered by headlamps) and gives the driver more time to react, ensuring safer driving. ADAS and other technologies (like LiDAR and the ordinary optical camera), however, play a part in night driving safety as well. And the stubbornly high price explains the sluggish demand for night vision systems such as the infrared night vision system.

According to the statistics, the night vision system was a standard configuration for 58 vehicle models available on the Chinese market in March 2019, slightly fewer than in 2015, of which 18 were Savana models (caravans). Audi, Mercedes-Benz and BMW are less enthusiastic about the technology, and equip it only on their luxury models priced above RMB1 million (a combined 67% of models carrying the system).

Meanwhile, industry insiders hold differing views on the night vision system:

Negative:

“It’s not something that’s really necessary because optical cameras actually do pretty well at night and you have a radar system as backup that is not affected by light,” said Dan Galves, a senior vice president at Intel Corporation’s Mobileye.

Bosch argues that technical advances bring about the decreasing demand for night vision system. One reason is that ordinary camera alone can work outstandingly at night with the maturity of image sensing technology. Also, the progress in technologies for automotive lighting, like LED headlamp, offers a horizon as long as 100-200m. So Bosch has shifted its attention away from night vision solution.

Positive:

Tim LeBeau, the vice president of Seek Thermal, thinks that the current optical radar for autonomous cars cannot detect the heat of an object to determine whether it is a creature or not, and that the cost of thermal sensors falls by about 20 percent a year as they get widely used.

People who detest high beam agree that headlamps delivering 200m beam will interfere with other drivers’ sight, and the solution combining low beam and passive night vision (infrared thermal image) system is the best choice.

Still, some vendors are sparing no efforts in making the technology more feasible for automotive application. Examples include Veoneer whose third-generation night vision system capable of detecting both pedestrians and animals is integrated with rotary LED headlamps which will automatically turn to the front object detected by the system; and Adasky’s Viper system that can classify the obstacles through convolutional neural network-based unique algorithms and display them on the cockpit screen to remind the driver.

Vendors will also work on laser-based night vision, low-light-level night vision, bionic night vision and head-up display (HUD) as well as headlamp fusion.

In brief, as long as the price comes down to an affordable level, “the chicken ribs” will become “a delicious homely dish.”"

Sunday, April 21, 2019

LiDAR News: Livox, Apple, Canon-Pioneer, ON Semi

Livox goes through a learning cycle that incumbent LiDAR companies like Velodyne went through a long time ago:

"After hearing from end-users about their specific needs, we’re releasing three new Livox Mid Series firmware for special application testing.

Multi-Return Firmware:

This firmware is designed for situations where the LiDAR laser may hit multiple objects simultaneously and produce multiple returns. It allows users to receive these multiple returns instead of the standard single return.


Short-Blind-Zone Firmware:

This firmware reduces the blind zone from 1 meter down to 0.3 meters which is helpful for shorter range detection applications like interior 3D modeling and mapping.


Threadlike-Noise Filtering Firmware:

This firmware supports processing for threadlike-noise points produced by consecutive return signals and allows you to set the depth of these points to zero."
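
For a sense of what consuming these modes might look like on the client side, here is a hypothetical sketch; the point-cloud format and field names are illustrative assumptions, not the actual Livox SDK. Only the 0.3m blind-zone figure and the zero-depth noise convention come from the post above:

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float
    depth: float            # range from sensor, meters
    return_index: int       # 0 = first return, 1+ = subsequent returns
    threadlike_noise: bool  # flagged by the noise-filtering firmware

def process(points, keep_multi_returns=True, blind_zone_m=0.3):
    """Apply the three firmware behaviors described above to a point list."""
    out = []
    for p in points:
        if not keep_multi_returns and p.return_index > 0:
            continue               # single-return mode: drop later returns
        if p.depth < blind_zone_m:
            continue               # inside the (reduced) 0.3 m blind zone
        if p.threadlike_noise:
            p.depth = 0.0          # firmware convention: zero out noise depth
        out.append(p)
    return out
```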


Livox demos its scanning pattern at Tech.AD Berlin 2019 event:


Reuters reports that Apple has held talks with at least four LiDAR companies as possible suppliers for its self-driving cars, evaluating the companies’ technology while also still working on its own LiDAR design.

Apple is seeking LiDARs that would be smaller, cheaper, and more easily mass-produced than the current technology. The designs could potentially be made with conventional semiconductor manufacturing processes. Apple also wants sensors that can see to a distance of several hundred meters.

Pioneer and Canon announce that the companies have entered into an agreement to co-develop a 3D-LiDAR sensor.

Pioneer has been pursuing the development of compact, high-performance MEMS mirrors that can be produced at a low cost with the aim of mass production from 2020 onwards. In addition to developing object recognition algorithms and vehicle ego-localization algorithms, the company provided its 2018 3D-LiDAR sensor models to companies for testing in September 2018. Additionally, in January 2019, Pioneer established a new organizational structure that integrates autonomous-vehicle-related R&D, technology development and business development to further accelerate the growth of its autonomous vehicle business.

The companies will engage in the joint development of a 3D-LiDAR sensor towards the goal of mass production by Pioneer.

ON Semi fuses SiPM depth map with a regular AR0231 image in this demo:

Saturday, April 20, 2019

NHK R&D Journal Issue on Image Sensing Devices

The March 2019 issue of the NHK STRL R&D Journal is devoted to imaging devices being developed by the company:

Dark Current Reduction In Crystalline Selenium-Based Stacked-Type CMOS Image Sensors
Shigeyuki IMURA, Keitada MINEO, Kazunori MIYAKAWA, Masakazu NANBA,
Hiroshi OHTAKE and Misao KUBOTA
Highly sensitive imaging devices may be obtained by using avalanche multiplication in crystalline selenium (c-Se)-based stacked-type CMOS image sensors in the visible region. The increase in the dark current in the low-electric-field (non-avalanche) region has been an issue. In this study, we optimized the growth conditions of the tellurium (Te) nucleation layer, which is used to prevent the Se film from peeling, resulting in a reduction of the dark current in the non-avalanche region. We fabricated a test device on glass substrates and successfully reduced the dark current to below 100 pA/cm2 (by a factor of 1/100) at a reverse-bias voltage of 15 V.


Improvement in Performance of Photocells Using Organic Photoconductive Films Sandwiched Between Transparent Electrodes
Toshikatsu SAKAI, Tomomi TAKAGI, Yosuke HORI, Takahisa SHIMIZU,
Hiroshi OHTAKE and Satoshi AIHARA
We have developed an image sensor with high sensitivity that uses three sensor elements, each of which is sensitive to only one of the primary colors, with an organic photoconductive film for each R/G/B-sensitive photocell sandwiched between transparent ITO electrodes.

3D Integrated Image Sensors With Pixel-Parallel Signal Processing
Masahide GOTO, Yuki HONDA, Toshihisa WATABE, Kei HAGIWARA,
Masakazu NANBA and Yoshinori IGUCHI
We studied a three-dimensional integrated image sensor capable of pixel-parallel signal processing. Photodiodes, pulse generation circuits and 16-bit pulse counters are three-dimensionally integrated within each pixel by direct bonding of silicon-on-insulator (SOI) layers with embedded Au electrodes, which provides in-pixel pulse-frequency-modulation A/D converters. Pixel-parallel video images with Quarter Video Graphics Array (QVGA) resolution were obtained, demonstrating the feasibility of these next-generation image sensors.
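
The pulse-frequency-modulation scheme is easy to picture: the photocurrent is integrated until it crosses a threshold, a pulse is emitted, the integrator resets, and an in-pixel counter tallies the pulses, so the count per frame is proportional to light intensity. A minimal simulation sketch follows; the component values are illustrative assumptions, not NHK's:

```python
# Minimal model of an in-pixel pulse-frequency-modulation (PFM) ADC:
# integrate photocurrent on a capacitance, emit a pulse and reset on each
# threshold crossing, and count pulses with a 16-bit counter.
def pfm_pixel_count(photocurrent_a, t_frame_s, c_int_f=10e-15, v_th=1.0):
    v, count = 0.0, 0
    dt = 1e-7                               # simulation time step, s
    for _ in range(int(t_frame_s / dt)):
        v += photocurrent_a * dt / c_int_f  # integrate photocurrent
        if v >= v_th:                       # threshold crossing -> pulse
            v -= v_th                       # reset the integrator
            count = (count + 1) & 0xFFFF    # 16-bit counter wraps around
    return count

# A brighter pixel (larger photocurrent) yields a proportionally higher count.
print(pfm_pixel_count(1e-12, 1/30), pfm_pixel_count(2e-12, 1/30))
```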


The Japanese version of the Journal has many more papers, but it's harder to figure out their technical content.

Friday, April 19, 2019

Image Sensors at VLSI Symposia 2019

The VLSI Symposia, to be held this June in Kyoto, Japan, publish their agenda with many image sensor papers:

A 640x640 Fully Dynamic CMOS Image Sensor for Always-On Object Recognition,
I. Park*, W. Jo*, C. Park*, B. Park*, J. Cheon** and Y. Chae*, *Yonsei Univ. and **Kumoh National Institute of Technology, Korea
This paper presents a 640x640 fully dynamic CMOS image sensor for always-on object recognition. A pixel output is sampled with a dynamic source follower (SF) into a parasitic column capacitor, which is read out by a dynamic single-slope (SS) ADC based on a dynamic bias comparator and an energy-efficient two-step counter. The sensor, implemented in a 0.11μm CMOS, achieves 0.3% peak non-linearity, 6.8e- rms RN and 67dB DR. Its power consumption is only 2.1mW at 44fps and is further reduced to 260μW at 15fps with a sub-sampled 320x320 mode. This work achieves the state-of-the-art energy efficiency FoM of 0.7e-·nJ.
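
The quoted FoM can be cross-checked from the other numbers in the abstract, assuming the commonly used definition of the energy-efficiency figure of merit as read noise multiplied by the energy spent per pixel per frame:

```python
# Back-of-the-envelope check of the quoted 0.7 e-*nJ FoM, assuming
# FoM = read noise (e- rms) x energy per pixel per frame (nJ).
power_w = 2.1e-3        # 2.1 mW at full resolution
fps = 44                # frame rate
pixels = 640 * 640      # array size
read_noise_e = 6.8      # read noise, e- rms

energy_per_pixel_nj = power_w / (fps * pixels) * 1e9
print(f"{read_noise_e * energy_per_pixel_nj:.2f} e-*nJ")  # ~0.79, close to 0.7
```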

A 132 by 104 10μm-Pixel 250μW 1kefps Dynamic Vision Sensor with Pixel-Parallel Noise and Spatial Redundancy Suppression,
C. Li*, L. Longinotti*, F. Corradi** and T. Delbruck***, *iniVation AG, **iniLabs GmbH and ***Univ. of Zurich, Switzerland
This paper reports a 132 by 104 dynamic vision sensor (DVS) with a 10μm pixel in a 65nm logic process and a synchronous address-event representation (SAER) readout capable of 180Meps throughput. The SAER architecture allows adjustable event frame rate control and supports pre-readout pixel-parallel noise and spatial redundancy suppression. The chip consumes 250μW with 100keps running at 1k event frames per second (efps), 3-5 times more power efficient than the prior art using normalized power metrics. The chip is aimed at low power IoT and real-time high-speed smart vision applications.
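
A quick normalization of the quoted operating point puts the efficiency in per-event terms, a common DVS metric (though not necessarily the paper's own normalization):

```python
power_w = 250e-6         # quoted chip power
event_rate_eps = 100e3   # quoted event rate, 100 keps
print(f"{power_w / event_rate_eps * 1e9:.1f} nJ/event")  # 2.5 nJ per event
```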

Automotive LIDAR Technology,
M. E. Warren, TriLumina Corporation, USA
LIDAR is an optical analog of radar providing high spatial-resolution range information. It is an essential part of the sensor suite for ADAS (Advanced Driver Assistance Systems), and ultimately, autonomous vehicles. Many competing LIDAR designs are being developed by established companies and startup ventures. Although there are no standards, performance and cost expectations for automotive LIDAR are consistent across the automotive industry. Why are there so many different competing designs? We can look at the system requirements and organize the design options around a few key technologies.

A 64x64 APD-Based ToF Image Sensor with Background Light Suppression Up to 200 klx Using In-Pixel Auto-Zeroing and Chopping,
B. Park, I. Park, W. Choi and Y. C. Chae, Yonsei Univ., Korea
This paper presents a time-of-flight (ToF) image sensor for outdoor applications. The sensor employs a gain-modulated avalanche photodiode (APD) that achieves high modulation frequency. The suppression capability of background light is greatly improved up to 200klx by using a combination of in-pixel auto-zeroing and chopping. A 64x64 APD-based ToF sensor is fabricated in a 0.11μm CMOS. It achieves depth ranges from 0.5 to 2 m with 25MHz modulation and from 2 to 20 m with 1.56MHz modulation. For both ranges, it achieves a non-linearity below 0.8% and a precision below 3.4% at a 3D frame rate of 96fps.
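
The two modulation frequencies trade range for precision: an indirect ToF sensor's maximum unambiguous range is c/(2·f_mod), so each quoted working range sits comfortably inside the corresponding ambiguity limit:

```python
# Unambiguous range of an indirect ToF sensor: R_max = c / (2 * f_mod).
C = 299_792_458.0  # speed of light, m/s
for f_mod in (25e6, 1.56e6):
    print(f"{f_mod/1e6:.2f} MHz -> R_max = {C / (2 * f_mod):.1f} m")
# 25.00 MHz -> R_max = 6.0 m   (covers the 0.5-2 m mode)
# 1.56 MHz -> R_max = 96.1 m   (covers the 2-20 m mode)
```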

A 640x480 Indirect Time-of-Flight CMOS Image Sensor with 4-tap 7-μm Global-Shutter Pixel and Fixed-Pattern Phase Noise Self- Compensation Scheme,
M.-S. Keel, Y.-G. Jin, Y. Kim, D. Kim, Y. Kim, M. Bae, B. Chung, S. Son, H. Kim, T. An, S.-H. Choi, T. Jung, C.-R. Moon, H. Ryu, Y. Kwon, S. Seo, S.-Y. Kim, K. Bae, S.-C. Shin and M. Ki, Samsung Electronics Co., Ltd., Korea
A 640x480 indirect Time-of-Flight (ToF) CMOS image sensor has been designed with 4-tap 7-μm global-shutter pixel in 65-nm back-side illumination (BSI) process. With novel 4-tap pixel structure, we achieved motion artifact-free depth map. Column fixed-pattern phase noise (FPPN) is reduced by introducing alternative control of the clock delay propagation path in the photo-gate driver. As a result, motion artifact and column FPPN are not noticeable in the depth map. The proposed ToF sensor shows depth noise less than 0.62% with 940-nm illuminator over the working distance up to 400 cm, and consumes 197 mW for VGA, which is 0.64 pW/pixel.
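
For readers unfamiliar with 4-tap pixels: the four taps capture the correlation samples at 0°/90°/180°/270° demodulation phases in a single exposure, which is why one frame suffices for a motion-artifact-free depth map. Below is a minimal sketch of the textbook 4-phase depth computation, not necessarily Samsung's exact pipeline; the modulation frequency in the example is an assumption, since the abstract does not quote one:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def itof_depth(a0, a1, a2, a3, f_mod):
    """Textbook 4-phase indirect ToF depth estimate from the four
    correlation samples a0..a3 (0/90/180/270 deg demodulation phases)."""
    phase = np.arctan2(a1 - a3, a0 - a2) % (2 * np.pi)  # wrap to [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)

# Synthetic check with an assumed 20 MHz modulation: target at 2.0 m,
# signal amplitude A over a constant background B.
f_mod, d, A, B = 20e6, 2.0, 1.0, 5.0
phi = 4 * np.pi * f_mod * d / C
a0, a1, a2, a3 = (B + A*np.cos(phi), B + A*np.sin(phi),
                  B - A*np.cos(phi), B - A*np.sin(phi))
print(itof_depth(a0, a1, a2, a3, f_mod))  # ~2.0
```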

A 128x120 5-Wire 1.96mm2 40nm/90nm 3D Stacked SPAD Time Resolved Image Sensor SoC for Microendoscopy,
T. Al Abbas*, O. Almer*, S. W. Hutchings*, A. T. Erdogan*, I. Gyongy*, N. A. W. Dutton** and R. K. Henderson*, *Univ. of Edinburgh and
**STMicroelectronics, UK
An ultra-compact 1.4mmx1.4mm, 128x120 SPAD image sensor with a 5-wire interface is designed for time-resolved fluorescence microendoscopy. Dynamic range is extended by noiseless frame summation in SRAM attaining 126dB time resolved imaging at 15fps with 390ps gating resolution. The sensor SoC is implemented in STMicroelectronics 40nm/90nm 3D-stacked BSI CMOS process with 8μm pixels and 45% fill factor.

Fully Integrated Coherent LiDAR in 3D-Integrated Silicon Photonics/65nm CMOS,
P. Bhargava*, T. Kim*, C. V. Poulton**, J. Notaros**, A. Yaacobi**, E. Timurdogan**, C. Baiocco***, N. Fahrenkopf***, S. Kruger***, T. Ngai***, Y. Timalsina***, M. R. Watts** and V. Stojanovic*, *Univ. of California, Berkeley, **Massachusetts Institute of Technology and ***College of Nanoscale Science and Engineering, USA
We present the first integrated coherent LiDAR system with experimental ranging demonstrations operating within the eyesafe 1550nm band. Leveraging a unique wafer-scale 3D integration platform which includes customizable silicon photonics and nanoscale CMOS, our system seamlessly combines a high-sensitivity optical coherent detection front-end, a large-scale optical phased array for beamforming, and CMOS electronics in a single chip. Our prototype, fabricated entirely in a 300mm wafer facility, shows that low-cost manufacturing of high-performing solid-state LiDAR is indeed possible, which in turn may enable extensive adoption of LiDARs in consumer products, such as self-driving cars, drones, and robots.

Automotive Image Sensor for Autonomous Vehicle and Adaptive Driver Assistance System,
H. Matsumoto, Sony Corp.
Human vision is the most essential sense for driving a vehicle. In place of human eyes, the CMOS image sensor is the best sensing device to recognize objects and the environment around the vehicle. Image sensors are also used in various other cases such as driver and passenger monitoring in the vehicle cabin. These use cases require some special functionalities and specifications. In this session the requirements for automotive image sensors, such as high dynamic range, flicker mitigation and low noise, will be discussed. The last part will discuss the key technologies needed to utilize image sensors, such as image recognition and computer vision.

426-GHz Imaging Pixel Integrating a Transmitter and a Coherent Receiver with an Area of 380x470 μm2 in 65-nm CMOS,
Y. Zhu*, P. R. Byreddy*, K. K. O* and W. Choi*, **, *The Univ. of Texas at Dallas and **Oklahoma State Univ., USA
A 426-GHz imaging pixel integrating a transmitter and a coherent receiver using the three oscillators for 3-push within an area of 380x470 μm2 is demonstrated. The TX power is -11.3 dBm (EIRP) and sensitivity is -89.6 dBm for 1-kHz noise bandwidth. The sensitivity is the lowest among imaging pixels operating above 0.3 THz. The pixel consumes 52 mW from a 1.3 V VDD. The pixel can be used with a reflector with 47 dB gain to form a camera-like reflection mode image for an object 5 m away.

Monolithic Three-Dimensional Imaging System: Carbon Nanotube Computing Circuitry Integrated Directly Over Silicon Imager,
T. Srimani, G. Hills, C. Lau and M. Shulaker, Massachusetts Institute of Technology, USA
Here we show a hardware prototype of a monolithic three-dimensional (3D) imaging system that integrates computing layers directly in the back-end-of-line (BEOL) of a conventional silicon imager. Such systems can transform imager output from raw pixel data to highly processed information. To realize our imager, we fabricate 3 vertical circuit layers directly on top of each other: a bottom layer of silicon pixels followed by two layers of CMOS carbon nanotube FETs (CNFETs) (comprising 2,784 CNFETs) that perform in-situ edge detection in real-time, before storing data in memory. This approach promises to enable image classification systems with improved processing latencies.

Record-High Performance Trantenna Based on Asymmetric Nano-Ring FET for Polarization-Independent Large-Scale/Real-Time THz Imaging,
E.-S. Jang*, M. W. Ryu*, R. Patel*, S. H. Ahn*, H. J. Jeon*, K. Han** and K. R. Kim*, *Ulsan National Institute of Science and Technology and **Dongguk Univ., Korea
We demonstrate a record-high performance monolithic trantenna (transistor-antenna) fabricated in a 65-nm CMOS foundry process as a plasmonic terahertz (THz) detector. By applying ultimate structural asymmetry between source and drain on a ring FET with source diameter (dS) scaling from 30 to 0.38 micrometers, we obtained a 180-times enhanced photoresponse (∆u) in on-chip THz measurements. Through free-space THz imaging experiments, the conductive drain region of the ring FET itself showed frequency sensitivity with a resonance frequency of 0.12 THz in the 0.09-0.2 THz range and polarization-independent imaging results as an isotropic circular antenna. The highly scalable, feeding-line-free monolithic trantenna enables a high-performance THz detector with a responsivity of 8.8kV/W and NEP of 3.36 pW/Hz^0.5 at the target frequency.

Custom Silicon and Sensors Developed for a 2nd Generation Augmented Reality User Interface,
P. O'Connor, Microsoft, USA.

Thursday, April 18, 2019

Event-Based Cameras Review

Arxiv.org: The University of Zurich paper "Event-based Vision: A Survey" by G. Gallego, T. Delbruck, G. Orchard, C. Bartolozzi, B. Taba, A. Censi, S. Leutenegger, A. Davison, J. Conradt, K. Daniilidis, D. Scaramuzza compares different event-based cameras:

"Event cameras are bio-inspired sensors that work radically different from traditional cameras. Instead of capturing images at a fixed rate, they measure per-pixel brightness changes asynchronously. This results in a stream of events, which encode the time, location and sign of the brightness changes. Event cameras posses outstanding properties compared to traditional cameras: very high dynamic range (140 dB vs. 60 dB), high temporal resolution (in the order of microseconds), low power consumption, and do not suffer from motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as high speed and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world."

From Events to Video

The University of Zurich publishes a video explanation of its paper "Events-to-Video: Bringing Modern Computer Vision to Event Cameras" by Henri Rebecq, René Ranftl, Vladlen Koltun, and Davide Scaramuzza, to be presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, June 2019.

Wednesday, April 17, 2019

Chemical Imaging in EUV

Semiconductor Engineering publishes a nice article on photoresist operation in EUV photolithography systems used in advanced processes. It shows how far chemical imaging, the predecessor of image sensors, can go:

"In the early days of EUV development, supporters of the technology argued that it was “still based on photons,” as opposed to alternatives like electron beam lithography. While that’s technically true, even a casual glance at EUV optics shows that these photons interact with matter differently.

An incoming EUV photon has so much energy that it doesn’t interact with the molecular orbitals to any significant degree. John Petersen, principal scientist at Imec, explained that it ejects one of an atom’s core electrons.

...the photoelectron recombines with the material, ejecting another electron. This cascade of absorption/emission events, with energy dissipating at each step, continues until the electron energy drops below about 30 eV.

Once the electron energy is in the 10 to 20 eV range, Petersen said, researchers see the formation of quantized plasma oscillations, known as plasmons. The plasmons in turn create an electric field, with effects on further interactions that are not yet understood.

Only after energy falls below 5 to 10 eV, where electrons have quantum resonance with molecular orbitals, does the familiar resist chemistry of older technologies emerge. At this level, molecular structure and angular momentum drive further interactions."

Tuesday, April 16, 2019

Teledyne e2v Re-Announces 4K 710fps APS-C Sensor with GS

GlobeNewswire: Teledyne e2v announces sample availability of its Lince11M sensor half a year after the original announcement. The Lince11M is designed for applications that require 4K resolution at very high shutter speeds, combining 4K resolution at 710 fps in APS-C format.

SRI to Develop Night Vision Sensor

PRNewswire: SRI International has received an award to deliver digital night vision camera prototypes to support the U.S. Army's IVAS (Integrated Visual Augmentation System) program. SRI will design a low-light-level CMOS sensor and integrate the device into a custom camera module optimized for low size, weight and power (SWAP).

"SRI has been steadily advancing the low-light-level performance of night vision CMOS (NV-CMOS®) image sensors and we are pleased that the IVAS program will incorporate our fourth generation NV-CMOS imagers," said Colin Earle, associate director, Imaging Systems, SRI International.

Monday, April 15, 2019

BAE Announces Non-ITAR-Restricted 2.3MP 60fps Thermal Sensor

BAE Systems' Sensor Solutions is launching the Athena1920 full-HD (1920x1200) thermal camera core. Based on uncooled 12µm pixels, the Athena1920 is available now, with no ITAR restrictions, at a 60Hz frame rate:

Sunday, April 14, 2019

All Huawei P30 Cameras Made by Sony

EETimes publishes SystemPlus teardown results of Huawei P30 Pro flagship smartphone:

"Separating Huawei P30 Pro, more than anything else though, is its use of quad cameras. The new smartphone literally has four cameras. They include a main camera, plus cameras for wide-angle, Time-of-Flight and a periscope view. All four use Sony CMOS image sensors. “It’s a full design win for Sony,” said Stéphane Elisabeth, costing analyst expert at SystemPlus Consulting.