Monday, February 28, 2022

Article about Peter Noble and his Early Image Sensors

The Dorset Echo publishes an article about Emmy Awardee Peter Noble and his early work, including the first TV image captured by 4096 MOS sensor 001:


"Currently, Mr Noble is writing an anthology of the origins of image-sensor array with buried-photodiode structure, which features the original papers and includes alternative methods to achieve the same result."

Friday, February 25, 2022

PreAct Announces Software-Definable Automotive Flash LiDAR

Oregon-based PreAct Technologies announces the T30P flash LiDAR, said to be the industry’s first software-definable LiDAR. Vehicles with software-defined architectures require sensor technology that can support over-the-air updates throughout the life of the vehicle, allowing OEMs to generate ongoing revenue by offering powerful new features and functionality.

“We are excited to bring our software-definable flash LiDAR to market, furthering the advancement of autonomous mobility across multiple industries,” said Paul Drysch, CEO of PreAct Technologies. “We’ve spent the last three years creating a solution that fulfills the need of software-defined vehicles, providing the most value for Tier 1s and OEMs over the long term by making any ADAS application relevant for the entire life of the vehicle.”

PreAct’s flash LiDAR architecture is based on modulated waveforms that can be optimized for different applications via over-the-air updates, along with an application stack that resides on the edge. The flexibility of a software-defined LiDAR allows Tier 1 suppliers and OEMs to package one sensor for multiple use cases – everything from true curb detection and gesture recognition to self-parking and automatic door actuation – that can be updated to meet their changing needs as more user and sensor data become available.

“Near field automotive sensors have either been low-precision and low-cost, or high-precision and high-cost,” said Ian Riches, VP for the Global Automotive Practice at Strategy Analytics. “By bringing a high-precision, low-cost sensor to market, PreAct is enabling a huge range of safety and convenience features. The software-defined characteristics of the T30P will allow these features to improve during the lifetime of the vehicle, unlocking new revenue streams for automakers.”

The T30P, with a frame rate of 200 fps and QVGA resolution, is also the fastest flash LiDAR on the market, making it well suited for ground and air robotics or industrial applications – systems which all share a need for fast, accurate and high-resolution sensors that can reliably define and track objects in all environmental conditions.

PreAct’s T30P Flash LiDAR sensor suite will be available in July 2022.


Thursday, February 24, 2022

Counterpoint Forecasts Sony Market Share to Shrink to 39% in 2022

Counterpoint forecasts: 

"The global Camera Image Sensor (CIS) market revenue is expected to grow 7% in 2022 to reach $21.9 billion, largely driven by increasing demand from the smartphone, automotive, industrial and other applications, according to the latest findings by Counterpoint’s Camera Supply Chain Research.

Commenting on the performance of different segments, Associate Director Ethan Qi said, “As the largest CIS end market, the mobile phone segment is expected to contribute 71.4% of the total market revenue in 2022, followed by automobile (8.6%) and surveillance (5.6%).”

Qi added, “With the continued rebound of global smartphone shipments and further upgrades of image sensors, particularly in resolution, the mobile phone segment is expected to see a mid-single-digit YoY increase in CIS revenue. Meanwhile, as vehicles become more intelligent, connected and autonomous, the implementation of view and sensing cameras for ADAS and ADS functions will proliferate, leading to increased CIS content in new vehicles in the coming years. Besides, the surveillance segment is expected to maintain a low-single-digit growth, partially driven by the lasting social distance impact of COVID-19.”

Looking from the vendor perspective, Sony is expected to capture a 39.1% revenue share in 2022, followed by Samsung (24.9%) and OmniVision (12.9%).

Sony has been actively expanding and diversifying its CIS customer base as the largest supplier of image and ToF sensors, both consisting of large-sized pixels, pushing the trend of raising mobile photography to a pro-level DSLR quality. Sony’s CIS revenue is expected to increase 3% YoY in 2022.

Meanwhile, the gap between Sony and Samsung is expected to narrow further as the latter will benefit from its first-mover advantage in providing cost-competitive super-high-resolution image sensors for mid-to-high smartphones and aggressive production capacity expansion.

OmniVision is also expected to see a big jump in CIS revenue in 2022, benefitting from a diversified product portfolio, breakthroughs in super-high-resolution sensors for smartphones and increasing demand from the automobile, surveillance and industrial segments."

Wednesday, February 23, 2022

Sony in Search of Killer Applications for its ToF Sensor

Sony publishes an interview with its ToF application team members: "Time-of-flight (ToF) image sensor for mobile phone applications revolutionizes mobile entertainment content with its capability to accurately capture not only figures and backgrounds, but also body gestures." A few quotes:

"While the contexts were steadily growing for leveraging the technology, there were no definite killer apps for it which people would put to everyday use. This resulted in a chicken-or-egg situation: smartphone manufacturers were not keen to integrate ToF image sensors for the lack of killer apps, while app developers had little incentive to develop apps for it because it was not adopted in many smartphones.

Given this situation, we thought that we should encourage the development of apps that leveraged ToF image sensors to incentivize both smartphone manufacturers and app developers.

Sony Semiconductor Solutions Group (hereafter “the Group”) faced the challenge and sought for a solution in developing ground-breaking apps for the ToF image sensor for mobile applications. A large-scale project was launched, connecting teams in Japan and four Chinese cities—Shanghai, Beijing, Shenzhen, and Chengdu. We asked what the project aimed to achieve and how the apps were created over the great distances.

There were also obstacles from the development point of view. Laser emission increases power consumption, and so does depth sensing and processing. For the smartphone manufacturers, it also means more space needed to accommodate the sensor. There are, of course, additional advantages ToF image sensors can bring, but these advantages did not add enough value to extend the scope of application to all smartphone models. This resulted in the current situation that the sensor is installed in some high-end models, but not in other, more popular ones.

That is true, but we have smartphone manufacturers who are interested in integrating the ToF image sensor if there are interesting apps to use it. This was our incentive to take up the challenge and develop apps in order to topple the first domino piece to establish and expand an app market for the sensor."

Tuesday, February 22, 2022

ST Unveils its First iToF Sensor with 0.5MP Resolution

GlobeNewswire: ST announces a new family of high-resolution iToF sensors for smartphones and other devices.

The 3D family debuts with the VD55H1. This sensor maps three-dimensional surfaces by measuring the distance to over half a million points. Objects can be detected up to five meters from the sensor, and even further with patterned illumination. VD55H1 addresses emerging AR/VR market use cases including room mapping, gaming, and 3D avatars. In smartphones, the sensor enhances the performance of camera-system features including bokeh effect, multi-camera selection, and video segmentation. Face-authentication security is also improved with higher resolution and more accurate 3D images to protect phone unlocking, mobile payment, and any smart system involving secure transactions and access control.

“The innovative VD55H1 3D depth sensor reinforces ST’s leadership in Time-of-Flight, and complements our full range of depth sensing technologies,” said Eric Aussedat, ST’s EVP, Imaging Sub-Group GM. “The FlightSense portfolio now comprises direct and indirect ToF products from single-point ranging all-in-one sensors to sophisticated high-resolution 3D imagers enabling future generations of intuitive, smart, and autonomous devices.”

The VD55H1’s pixel leverages in-house 40nm stacked-wafer technology, ensuring low power consumption, low noise, and an optimized die area. The die contains 75% more pixels than existing VGA sensors, within a smaller die size.

The VD55H1 sensor is now available for lead customers to sample. Volume production maturity is scheduled for the second half of 2022. A reference design and complete software package are available to help accelerate sensor evaluation and project development.

Featuring a 672 x 804 BSI pixel array for iToF depth sensing, the VD55H1 is able to operate with a modulation frequency of 200MHz with more than 85% demodulation contrast at 940nm. This reduces the depth noise by a factor of two over incumbent sensors that typically operate around 100MHz. In addition, multi-frequency operation, an advanced depth-unwrapping algorithm, low pixel noise floor, and high pixel dynamic range ensure measurement accuracy over long ranging distance. Depth accuracy is better than 1% and typical precision is 0.1% of distance.
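The relationship between modulation frequency, unambiguous range, and multi-frequency unwrapping can be sketched numerically. The dual-frequency pair below is purely illustrative, not ST's actual operating points:

```python
from math import gcd

C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(f_mod_hz: float) -> float:
    """Distance at which a single-frequency iToF phase measurement wraps."""
    return C / (2.0 * f_mod_hz)

# At 200 MHz a single-frequency measurement wraps after ~0.75 m,
# so multi-frequency capture is needed to reach a 5 m ranging spec.
r_single = unambiguous_range(200e6)

def dual_frequency_range(f1_mhz: int, f2_mhz: int) -> float:
    """Combined unambiguous range of a two-frequency capture, set by
    the greatest common divisor of the two modulation frequencies."""
    return unambiguous_range(gcd(f1_mhz, f2_mhz) * 1e6)

# Hypothetical frequency pair: 200 MHz + 190 MHz unwraps to ~15 m.
r_pair = dual_frequency_range(200, 190)
```

For a fixed demodulation contrast, the phase-derived depth noise also scales inversely with modulation frequency, which is where the claimed factor-of-two improvement over ~100 MHz sensors comes from.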

Other features include a short capture sequence that supports a frame rate up to 120 fps and improves motion-blur robustness. In addition, advanced clock and phase management including spread spectrum clock generator (SSCG) provides multi-device interference mitigation and optimized electromagnetic compatibility.

The power consumption can be reduced to less than 100mW in some streaming modes, to help prolong the runtime of battery-operated devices.

A consumer-device form factor reference design for the VD55H1 has been created that includes the illumination system. A supporting, fully featured software driver and a library containing an advanced depth-reconstruction image-signal-processing pipeline compatible with Android embedded platforms are also provided.


Monday, February 21, 2022

Sigma Updates on the Next Generation Foveon Sensor Development

Sigma publishes an official statement "Development status of the three-layer image sensor:"

Dear SIGMA customers,

First of all, thank you very much for your continued support and interest in our products.
SIGMA would like to share the development status of the three-layer image sensor as of February 2022 as follows.

The development of the three-layer image sensor is currently underway with the strong leadership of SIGMA’s headquarters in collaboration with research institutes in Japan. The stages of development can be roughly divided into the following:
  • Stage 1: Repeated design simulations of the new three-layer structure to confirm that it will function as intended.
  • Stage 2: Prototype evaluation using a small image sensor with the same pixel size as the product specifications but with a reduced total pixel count to verify the performance characteristics of the image sensor in practice.
  • Stage 3: Final prototype evaluation using a full-frame image sensor with the same specifications as the mass-production product, including the A/D converter, etc.
We believe that these three stages are necessary in the development, and we are currently in the process of creating the prototype sensor for Stage 2.

Based on the evaluation results of the prototype sensor, we will decide whether to proceed to Stage 3 or to review the design data and re-prototype “Stage 2”. When we proceed to Stage 3, we will verify the mass-producibility of the sensor with research institutes and manufacturing vendors based on the evaluation results, and then make a final decision on whether or not to mass-produce the image sensor.

Although we have not yet reached the stage where we can announce a specific schedule for the mass production of the image sensor, we are determined to do our best to realize a camera that will truly please our customers who are waiting for it, as soon as possible.

Once again, I would like to thank all of you for your continued support of SIGMA.
We will continue to strive for technological development to meet your expectations and trust.

Kazuto Yamaki
Chief Executive Officer, SIGMA Corporation

Vision Sensor-Processor with In-Pixel Memory

KAIST and Samsung foundry publish a Nature paper "Mnemonic-opto-synaptic transistor for in-sensor vision system" by Joon-Kyu Han, Young-Woo Chung, Jaeho Sim, Ji-Man Yu, Geon-Beom Lee, Sang-Hyeon Kim, and Yang-Kyu Choi.

"A mnemonic-opto-synaptic transistor (MOST) that has triple functions is demonstrated for an in-sensor vision system. It memorizes a photoresponsivity that corresponds to a synaptic weight as a memory cell, senses light as a photodetector, and performs weight updates as a synapse for machine vision with an artificial neural network (ANN). Herein the memory function added to a previous photodetecting device combined with a photodetector and a synapse provides a technical breakthrough for realizing in-sensor processing that is able to perform image sensing and signal processing in a sensor. A charge trap layer (CTL) was intercalated to gate dielectrics of a vertical pillar-shaped transistor for the memory function. Weight memorized in the CTL makes photoresponsivity tunable for real-time multiplication of the image with a memorized photoresponsivity matrix. Therefore, these multi-faceted features can allow in-sensor processing without external memory for the in-sensor vision system. In particular, the in-sensor vision system can enhance speed and energy efficiency compared to a conventional vision system due to the simultaneous preprocessing of massive data at sensor nodes prior to ANN nodes. Recognition of a simple pattern was demonstrated with full sets of the fabricated MOSTs. Furthermore, recognition of complex hand-written digits in the MNIST database was also demonstrated with software simulations."
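The in-sensor multiply-accumulate the abstract describes can be illustrated with a toy numerical model. The array size and values below are arbitrary, and real device physics (charge trapping, nonlinearity) is ignored:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4x4 light-intensity pattern falling on the sensor array.
image = rng.uniform(0.0, 1.0, size=(4, 4))

# Photoresponsivity programmed into each pixel's charge-trap layer;
# these play the role of first-layer synaptic weights.
weights = rng.uniform(0.1, 1.0, size=(4, 4))

# Each pixel's photocurrent is intensity times programmed
# responsivity, so the multiplication happens inside the sensor.
photocurrents = image * weights

# Summing the photocurrents on a shared readout line yields the
# weighted sum of one artificial neuron, with no external memory.
neuron_output = photocurrents.sum()
```

The point of the scheme is that the image-times-weight-matrix product is computed during exposure, so only the reduced sums, not the raw image, need to leave the sensor.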

Sunday, February 20, 2022

High-Throughput SPAD Signal Processing

Edinburgh University and ST publish an open access IEEE JSSC paper "A High-Throughput Photon Processing Technique for Range Extension of SPAD-based LiDAR Receivers" by Sarrah M. Patanwala, Istvan Gyongy, Hanning Mai, Andreas Aßmann, Neale A. W. Dutton, Bruce R. Rae, and Robert K. Henderson.

"There has recently been a keen interest in developing LiDAR systems using SPAD sensors. This has led to a variety of implementations in pixel combining techniques and TDC architectures for such sensors. This paper presents a comparison of these approaches and demonstrates a technique capable of extending the range of LiDAR systems with improved resilience to background conditions. A LiDAR system emulator using a reconfigurable SPAD array and FPGA interface is used to compare these different techniques. A Monte Carlo simulation model leveraging synthetic 3D data is presented to visualize the sensor performance on realistic automotive LiDAR scenes."
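A minimal emulation of the histogramming step such SPAD receivers perform, with illustrative numbers rather than the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
C = 299_792_458.0
BIN_W = 1e-9          # 1 ns TDC bin
N_BINS = 512

true_distance = 30.0              # meters
t_return = 2 * true_distance / C  # ~200 ns round trip

# Photon timestamps: uniform background counts plus a jittered
# signal peak centered on the laser return time.
background = rng.uniform(0, N_BINS * BIN_W, size=5000)
signal = rng.normal(t_return, 2 * BIN_W, size=300)
hist, edges = np.histogram(
    np.concatenate([background, signal]),
    bins=N_BINS, range=(0, N_BINS * BIN_W))

# Coarse range extraction: take the peak bin of the histogram.
peak = int(hist.argmax())
est_distance = C * (edges[peak] + BIN_W / 2) / 2
```

Even with background photons outnumbering signal photons, the signal concentrates in a few adjacent bins while the background spreads over all 512, which is why histogram peak extraction is resilient to ambient light.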

Saturday, February 19, 2022

dToF Tutorial from Edinburgh University and ST

Edinburgh University publishes "Direct Time-of-Flight Single-Photon Imaging" by Istvan Gyongy, Neale A. W. Dutton, and Robert K. Henderson, also published in IEEE TED.

"This article provides a tutorial introduction to the direct Time-of-Flight (dToF) signal chain and typical artifacts introduced due to detector and processing electronic limitations. We outline the memory requirements of embedded histograms related to desired precision and detectability, which are often the limiting factor in the array resolution. A survey of integrated CMOS dToF arrays is provided highlighting future prospects to further scaling through process optimization or smart embedded processing."
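The memory-scaling point can be made concrete with back-of-envelope numbers; the resolution, bin count, and bit depth below are hypothetical, not figures from the tutorial:

```python
def histogram_memory_bits(n_pixels: int, n_bins: int, bits_per_bin: int) -> int:
    """Total on-chip memory for per-pixel embedded ToF histograms."""
    return n_pixels * n_bins * bits_per_bin

# A QVGA array with a 256-bin, 8-bit histogram per pixel already
# needs close to 20 MB of SRAM, which is why histogram memory,
# rather than the SPAD itself, often caps dToF array resolution.
qvga_pixels = 320 * 240
total_bits = histogram_memory_bits(qvga_pixels, 256, 8)
total_megabytes = total_bits / 8 / 1e6
```

Halving the number of bins (at the cost of coarser timing precision) or compressing histograms with smart embedded processing directly relaxes this budget, which is the trade-off the survey highlights.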

Friday, February 18, 2022

Recent Videos: IIT Delhi, ADI, Omnivision, FLIR, Hamamatsu

IIT Delhi publishes a lecture "From light waves to images: Advancing Science with Pictures" by Kedar Khare:


Analog Devices publishes a video on a use case of its ADSD3100 platform based on the Microsoft ToF sensor:


Omnivision publishes a promotional video for its 200MP OVB0B sensor with 0.61um pixels:

 

Teledyne FLIR demos the usefulness of thermal cameras in automatic emergency braking systems for cars:

 

Hamamatsu publishes a demo of its 8 x 128 pixel ToF sensor:

 

Himax Reports 2021 Results

GlobeNewswire: Himax updates on its imaging business in 2021:

"Himax is pleased to report that the company’s ultralow power AI image sensing total solution successfully entered into mass production in Q4 last year for a major tech name over a mainstream application. The company reached this major milestone just one year after it delivered the first samples, a remarkable achievement and an illustration of the robustness of AI solution. [I'd guess that this major customer is Amazon Ring and the product is video doorbell.]

The company is highly encouraged by the early success it has seen with ultralow power AI image sensing business thus far after a leading customer adopted it for a mainstream application. Himax expects to see more design-wins awarded across a broad customer base and a high variety of applications leading to robust sales growth for this new high margin product line.

Himax’s ultralow power AI image sensing total solution incorporates its ultralow power CMOS image sensor, proprietary AI processor and CNN-based AI algorithm. As reported earlier, the sizable order for a top-tier name customer’s mainstream application successfully entered production in Q4 last year, marking another impressive milestone for the company’s new AI business within just one year since its initial release. The company will give further details after the end customer’s official announcement. Himax has also made good progress on this mainstream application with other leading vendors, where the number of design-in projects is increasing.

In addition to this success story, the second application where Himax expects to see significant volume is automatic meter reading (AMR), where its AI total solution has been widely adopted by numerous customers across a wide geographical area in China. Himax’s power-saving AI cameras, deployed over the existing installed base of traditional water meters, enable the water meter to automatically collect consumption data with AI operating locally on the edge. The device transmits only byte-sized metadata to the server for billing and in-time detection of abnormal consumption or leakage, eliminating the need for manual reading. The battery pack has a lifetime of over 5 years, greatly outperforming conventional AMR solutions, which usually come in a bulky form with large battery packs and, without local AI capability, have to transmit massive image data to the cloud to perform meter reading.

The company is already seeing accelerated deployment of AI solutions to a wide range of applications, including notebook, home appliances, utility meter, automotive, battery-powered surveillance camera, panoramic video conferencing, and medical, among other things. Moreover, new design-in sockets are on the way as it looks to leverage the collaboration with leading cloud service partners, such as Microsoft Azure and Google TensorFlow, on their edge-to-cloud platforms to drive further adoption in applications such as smart home, smart office, healthcare, agriculture, retail and factory automation. Last but not least, Himax is seeing numerous design-in activities of its AI solution for endoscopes, an area the company is extremely excited about that may represent an extraordinary game changer for the health examination industry. Himax will report more details in due course. Himax is very encouraged by the traction this relatively new product line has generated in a short amount of time and expects to see increasing sales contribution through 2022 and beyond."

Thursday, February 17, 2022

Intel Heritage in Image Sensors

It turns out that well before the Tower acquisition, back in the 1990s, Intel already manufactured image sensors. Photobit designed them for Intel, Intel manufactured them, and later Intel decided CMOS image sensors would become a commodity business and got out. Intel was Photobit’s first partner/customer, and Intel Capital was an investor in Photobit for strategic purposes.

Yole Predicts that Sony and ST Will Capture 95% of SWIR Imagers Market

Yole Développement believes that ST and Sony could disrupt the technological landscape with their SWIR imagers:

"In 2021, the SWIR industry’s leading players were SCD, Sensors Unlimited, and Teledyne FLIR, sharing more than 50% of the 11,000 units shipped in the year. These leaders are subsidiaries of leading defense companies that started developing SWIR technology with the support of governments for strategic purposes. They constitute the legacy side of the SWIR industry.

On the other side, STMicroelectronics and Sony, two leaders in the consumer imaging industry started being active players in SWIR with new technologies including quantum dots. Their entrance might be explained by the growing demand from consumer OEM for new integration designs such as under-display 3D sensing in smartphones. If SWIR imagers reach a low price point, shipments could skyrocket to hundreds of millions within a few years. The SWIR industry could emulate the current 3D imaging industry, where STMicroelectronics and Sony share nearly 95% of the 225 million shipments (2020 data)."

Wednesday, February 16, 2022

Peter Noble, Marvin White, and Northrop Grumman Win 2021 Emmy Awards

Peter Noble and Marvin White win 2021 Technology & Engineering Emmy Awards:
  • Correlated Double Sampling for Image Sensors
    • Marvin H. White
    • Northrop Grumman Mission Systems Group
  • Pioneering Development of an Image-Sensor Array with Buried-Photodiode Structure
    • Peter J. W. Noble

Sony "Sense the Wonder" Day

Sony publishes videos from its "Sense the Wonder" Day:

Tuesday, February 15, 2022

Omnivision Unveils 0.56um Pixel

BusinessWire: OMNIVISION announces a major pixel technology breakthrough: the world’s smallest 0.56µm pixel with high QE, excellent quad phase detection (QPD) autofocus, and low power consumption. This ultra-small pixel technology will address the demand for high-resolution, small-pixel-pitch image sensors for multi-camera mobile devices.

With a pixel size now smaller than the wavelength of red light, OMNIVISION’s R&D team has validated that pixel shrink is no longer limited by the wavelength of light. The 0.56µm pixel design is enabled by a CIS-dedicated 28nm process node and 22nm logic process node at TSMC, with a new pixel transistor layout and 2x4 shared pixel architecture. The pixel is based on OMNIVISION’s PureCel Plus-S stacking technology, and deep photodiode technology is applied to embed the photodiode deeper into the silicon.

“It takes great R&D innovation to advance pixel technology, especially at this level where we are going beyond the wavelength of light,” said Lindsay Grant, SVP of Process Engineering at OMNIVISION. “We have not compromised high performance with the smaller die size. In fact, we have demonstrated comparable QPD and QE performance to our 0.61µm pixel in the visible light range.”

Grant adds, “OMNIVISION invests heavily in R&D and almost 50 percent of our employees comprise R&D engineers. As a global fabless semiconductor provider, we also work closely with our foundry partners, such as TSMC, to develop new process technology approaches that enable industry-leading innovation like this. This is a remarkable achievement, and I applaud our talented R&D team and our foundry partner for their ability to continuously lead the pixel shrink race.”

“We are pleased with the results of our deep collaboration with OMNIVISION in delivery of the world’s smallest 0.56-µm pixel using our industry-leading CIS technology,” said Sajiv Dalal, EVP of Business Management, TSMC North America. “TSMC strives to advance semiconductor manufacturing technologies and services to enable the most advanced, state-of-the-art CIS designs. We look forward to our continued partnership with OMNIVISION to help them achieve high performance, superior resolution, and low power consumption goals and accelerate innovation for their differentiated products.”

The first 0.56µm pixel die will be implemented in 200MP image sensors for smartphones in Q2 2022, with samples targeted for Q3. Consumers can expect to see new smartphones that contain the world’s smallest pixel available on the market in early 2023.

Intel Gets into CIS Foundry Business through the Acquisition of Tower for $5.4B

BusinessWire: Intel and Tower Semiconductor announce a definitive agreement under which Intel will acquire Tower for approximately $5.4 billion.

“Tower’s specialty technology portfolio, geographic reach, deep customer relationships and services-first operations will help scale Intel’s foundry services and advance our goal of becoming a major provider of foundry capacity globally,” said Pat Gelsinger, Intel CEO. “This deal will enable Intel to offer a compelling breadth of leading-edge nodes and differentiated specialty technologies on mature nodes – unlocking new opportunities for existing and future customers in an era of unprecedented demand for semiconductors.”

Tower owns five fabs directly and another three through a joint venture with Nuvoton. Six of them manufacture image sensors, among other products. For some reason, Tower does not mention its BSI processing joint venture with Gpixel in China.


Update: Intel Investors Day presentation already shows CIS in the list of its foundry offerings:

Monday, February 14, 2022

Hybrid ToF (hToF) Image Sensor Paper

Shizuoka University publishes an IEEE Open JSSC paper "Hybrid Time-of-Flight Image Sensors for Middle-Range Outdoor Applications" by S. Kawahito, K. Yasutomi, and K. Mars.

"This paper introduces a new series of time-of-flight (TOF) range image sensors that can be used for outdoor middle-range (10m to 100m) applications by employing a small duty-cycle modulated light pulse with a relatively high optical peak power. This set of TOF sensors is referred to here as a hybrid TOF (hTOF) image sensor. The hTOF image sensor is based on the indirect TOF measurement principle but simultaneously uses the direct TOF concept for coarse measurements. Compared to conventional indirect TOF image sensors for outdoor middle-range applications, the hTOF image sensor has a distinct advantage due to the reduction of capturing ambient light charge. To show the potential of the hTOF image sensor for outdoor middle-range operation, a model of estimating distance precision of hTOF image sensors is built and applied it by using possible sensor specifications to estimate the distance precision of the hTOF range camera in 10m, 20m and 40m measurements under the ambient-light condition of 100klux and its feasibility is discussed. In outdoor 10m-range measurements, the advantage of hTOF image sensors compared to the conventional indirect TOF image sensors is discussed by considering the amount of captured ambient-light charge in pixels."
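The hybrid principle, a coarse direct-ToF estimate selecting the correct wrap of a precise indirect-ToF phase measurement, can be sketched as follows. The 4-tap sampling convention and the 20 MHz modulation frequency are illustrative assumptions, not the paper's parameters:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(a0, a1, a2, a3, f_mod):
    """Fine distance from 4-phase iToF samples taken at 0/90/180/270
    degrees, assuming a_k = B + A*cos(phi - k*pi/2)."""
    phase = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

def htof_distance(fine_d, coarse_d, f_mod):
    """Use a coarse dToF estimate to pick the correct phase wrap."""
    wrap = C / (2 * f_mod)
    n = round((coarse_d - fine_d) / wrap)
    return fine_d + n * wrap

# Simulate a 40 m target at 20 MHz modulation (~7.5 m wrap range).
f = 20e6
true_d = 40.0
phi = (4 * math.pi * f * true_d / C) % (2 * math.pi)
samples = [1.0 + 0.5 * math.cos(phi - k * math.pi / 2) for k in range(4)]

fine = itof_distance(*samples, f)          # wrapped result, ~2.5 m
recovered = htof_distance(fine, 41.0, f)   # noisy coarse guess of 41 m
```

The coarse estimate only needs to be accurate to within half a wrap interval (here a few meters), while the final precision comes entirely from the phase measurement, which is what lets the hybrid scheme keep iToF precision at dToF-like ranges.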