Thursday, July 11, 2024

Last chance to buy Sony CCD sensors

Back in 2015 we shared news of Sony discontinuing its CCD sensors.

The "last time buy" period for these sensors is nearing the end.

Framos: https://www.framos.com/en/news/framos-announces-last-time-buy-deadline-for-sony-ccd-sensors

Taking into consideration current market demand and customer feedback, Sony has decided to revise the “Last Time Buy PO submission” deadline to the End of September 2024. Final shipments to FRAMOS remain unchanged at the end of March 2026. With these changes, FRAMOS invites all customers to submit their final Last Time Buy Purchase Orders to them no later than September 24th, 2024, to ensure timely processing and submission to Sony by the new Last Time Buy deadline date.
Important dates:
 Deadline for Last Time Buy Purchase Orders received by FRAMOS: September 24th, 2024
 Final delivery of accepted Last Time Buy Purchase Orders from FRAMOS: March 31st, 2026 

SVS-Vistek: https://www.svs-vistek.com/en/news/svs-news-article.php?p=svs-vistek-offers-last-time-buy-options-or-replacement-products-for-ccd-cameras

For customers who wish to continue using CCD-based designs, SVS-Vistek has initiated a Last-Time-Buy (LTB) period, effective immediately, followed by a subsequent Last-Time-Delivery (LTD) period. This allows our customers to continue to produce and sell their CCD-based products, ensuring reliable delivery. Orders can be placed until August 31, 2024 (Last-Time-Buy). SVS-Vistek will then offer delivery of LTB cameras until August 31, 2026 (Last-Time-Delivery). We advise our customers individually and try to find the best solution together. 

Wednesday, July 10, 2024

Forbes blog on Obsidian thermal imagers

Link: https://www.forbes.com/sites/davidhambling/2024/05/22/new-us-technology-makes-more--powerful-thermal-imagers-at-lower-cost/

[some excerpts below]

New U.S. Technology Makes More Powerful Thermal Imagers At Lower Cost 

Thermal imaging has been a critical technology in the war in Ukraine, spotting warm targets like vehicles and soldiers in the darkest nights. Military-grade thermal imagers used on big Baba Yaga night bombers are far too expensive for drone makers assembling $400 FPV kamikaze drones, who have to rely on lower-cost devices. But a new technology developed by U.S. company Obsidian Sensors Inc. could transform the thermal imaging market with affordable high-resolution sensors.

...

Older digital cameras were based on CCDs (charge-coupled devices); the current generation uses more affordable CMOS imaging sensors, which produce an electrical charge in response to light. The vast majority of thermal imagers use a different technology: an array of microbolometers, miniature devices whose pixels absorb infrared energy and measure the resulting change in resistance. The conventional design neatly integrates the microbolometers and the circuits which read them on the same silicon chip.

...

John Hong, CEO of San Diego-based Obsidian Sensors, believes he has a better approach, which can scale up to high resolution at low cost and, crucially, high volume, at established foundries. The new design does not integrate everything in one unit but separates the bolometer array from the readout circuits. This is more complex but allows a different manufacturing technique to be used.

The readout circuits are still on silicon, but the sensor array is produced on a sheet of glass, leveraging technology perfected for flat-screen TVs and mobile phone displays. Large sheets of glass are far cheaper to process than small wafers of silicon and bolometers made on glass cost about a hundred times less than on silicon.

Hong says the process can easily produce multi-megapixel arrays. Obsidian is already producing test batches of VGA sensors, and plans to move to 1280x1024 this year and 1920x1080 in 2025.
Obsidian has been quietly developing its technology for six years and is now able to produce units for evaluation at a price three to four times lower than comparable models. Further evolution of the manufacturing process will bring prices even lower.

That could bring a 640x480 VGA thermal imager down to well below $200.

...

Hong says they plan to sell a thousand VGA cameras this year on a pilot production run, and are currently raising a series B to hit much larger volumes in 2025 and beyond. That should be just about right to surf the wave of demand in the next few years.

 

The thermal image from Obsidian's sensor (left) shows pedestrians who are invisible in the glare in a digital camera image (right) [Obsidian Sensors]


Friday, July 05, 2024

Videos du jour: under-display cameras, SPADs

 


Designing Phase Masks for Under-Display Cameras

Diffractive blur and low light levels are two fundamental challenges in producing high-quality photographs in under-display cameras (UDCs). In this paper, we incorporate phase masks on display panels to tackle both challenges. Our design inserts two phase masks, specifically two microlens arrays, in front of and behind a display panel. The first phase mask concentrates light on the locations where the display is transparent so that more light passes through the display, and the second phase mask reverts the effect of the first phase mask. We further optimize the folding height of each microlens to improve the quality of PSFs and suppress chromatic aberration. We evaluate our design using a physically-accurate simulator based on Fourier optics. The proposed design is able to double the light throughput while improving the invertibility of the PSFs. Lastly, we discuss the effect of our design on the display quality and show that implementation with polarization-dependent phase masks can leave the display quality uncompromised.
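The diffraction challenge described above can be illustrated with a toy Fourier-optics calculation. The sketch below is not from the paper; the grid geometry and duty cycle are invented numbers. It models the display as a periodic binary amplitude mask over the pupil and takes the far-field PSF as the squared magnitude of the mask's Fourier transform:

```python
import numpy as np

N = 512
period, opening = 16, 8          # display pixel pitch and transparent opening (toy numbers)
mask = np.zeros((N, N))
for i in range(0, N, period):
    for j in range(0, N, period):
        mask[i:i + opening, j:j + opening] = 1.0

# Far-field PSF of the masked pupil (Fraunhofer approximation):
# the periodic openings spread energy into diffraction orders, i.e. blur
psf = np.abs(np.fft.fftshift(np.fft.fft2(mask))) ** 2
psf /= psf.sum()

throughput = mask.mean()         # fraction of light the bare display transmits
print(f"light throughput: {throughput:.2f}")
```

In this toy model only 25% of the light gets through; the paper's first phase mask aims to roughly double that by steering light toward the transparent openings instead of letting it be absorbed.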

 

 


Passive Ultra-Wideband Single-Photon Imaging

We consider the problem of imaging a dynamic scene over an extreme range of timescales simultaneously—seconds to picoseconds—and doing so passively, without much light, and without any timing signals from the light source(s) emitting it. Because existing flux estimation techniques for single-photon cameras break down in this regime, we develop a flux probing theory that draws insights from stochastic calculus to enable reconstruction of a pixel’s time-varying flux from a stream of monotonically-increasing photon detection timestamps. We use this theory to (1) show that passive free-running SPAD cameras have an attainable frequency bandwidth that spans the entire DC-to-31 GHz range in low-flux conditions, (2) derive a novel Fourier-domain flux reconstruction algorithm that scans this range for frequencies with statistically-significant support in the timestamp data, and (3) ensure the algorithm’s noise model remains valid even for very low photon counts or non-negligible dead times. We show the potential of this asynchronous imaging regime by experimentally demonstrating several never-seen-before abilities: (1) imaging a scene illuminated simultaneously by sources operating at vastly different speeds without synchronization (bulbs, projectors, multiple pulsed lasers), (2) passive non-line-of-sight video acquisition, and (3) recording ultra-wideband video, which can be played back later at 30 Hz to show everyday motions—but can also be played a billion times slower to show the propagation of light itself.
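The frequency-scanning step (2) can be illustrated with a classical photon-arrival periodogram, a much simplified stand-in for the paper's algorithm (the rate, modulation depth, duration and frequency grid below are invented). Timestamps are drawn from a sinusoidally modulated Poisson process by thinning, and the power at each candidate frequency is the squared magnitude of the Fourier sum over the timestamps:

```python
import numpy as np

rng = np.random.default_rng(0)
r0, m, f0, T = 2000.0, 0.8, 50.0, 10.0   # mean rate (Hz), modulation, signal freq, duration (s)

# Thinning: draw homogeneous arrivals at the peak rate, keep each with prob flux/peak
peak = r0 * (1 + m)
n = rng.poisson(peak * T)
t = np.sort(rng.uniform(0, T, n))
keep = rng.uniform(0, 1, n) < (1 + m * np.cos(2 * np.pi * f0 * t)) / (1 + m)
t = t[keep]

# Periodogram of the photon timestamps: normalized power at each candidate frequency;
# under pure shot noise the power is O(1), a real flux component stands far above it
freqs = np.arange(1.0, 100.0, 1.0)
power = np.abs(np.exp(-2j * np.pi * freqs[:, None] * t[None, :]).sum(axis=1)) ** 2 / len(t)
print(freqs[np.argmax(power)])   # recovers the 50 Hz modulation
```

The paper's contribution goes well beyond this sketch (free-running SPAD dead times, a stochastic-calculus noise model, GHz-scale bandwidth), but the core idea of scanning frequencies for statistically significant support in the timestamps is the same.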


 
SoDaCam: Software-defined Cameras via Single-Photon Imaging

Reinterpretable cameras are defined by their post-processing capabilities that exceed traditional imaging. We present "SoDaCam" that provides reinterpretable cameras at the granularity of photons, from photon-cubes acquired by single-photon devices. Photon-cubes represent the spatio-temporal detections of photons as a sequence of binary frames, at frame-rates as high as 100 kHz. We show that simple transformations of the photon-cube, or photon-cube projections, provide the functionality of numerous imaging systems including: exposure bracketing, flutter shutter cameras, video compressive systems, event cameras, and even cameras that move during exposure. Our photon-cube projections offer the flexibility of being software-defined constructs that are only limited by what is computable, and shot-noise. We exploit this flexibility to provide new capabilities for the emulated cameras. As an added benefit, our projections provide camera-dependent compression of photon-cubes, which we demonstrate using an implementation of our projections on a novel compute architecture that is designed for single-photon imaging.
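The projections described above are simple enough to sketch directly (a minimal illustration, not the authors' SoDaCam code; the cube size, flux map and thresholds are arbitrary). Each emulated camera is just a different temporal reduction of the binary frames:

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 1000, 32, 32                      # 1000 binary frames (10 ms at 100 kHz)
flux = rng.uniform(0.01, 0.2, (H, W))       # per-frame detection probability per pixel
cube = rng.random((T, H, W)) < flux         # the photon-cube: binary photon detections

# Exposure bracketing: short and long exposures are partial and full temporal sums
short, long_ = cube[:100].sum(0), cube.sum(0)

# Flutter shutter: modulate the exposure with a pseudo-random binary code in time
code = rng.integers(0, 2, T).astype(bool)
flutter = cube[code].sum(0)

# Event-camera emulation: threshold the log-intensity change between two windows
half = T // 2
i1 = cube[:half].sum(0) + 1.0               # +1 avoids log(0)
i2 = cube[half:].sum(0) + 1.0
events = np.abs(np.log(i2) - np.log(i1)) > 0.15
```

Because every emulated camera is computed after capture from the same photon-cube, the choice of camera becomes a software decision, which is the paper's central point.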

Thursday, July 04, 2024

PetaPixel article on Samsung's 200MP sensor

Full article here: https://petapixel.com/2024/06/27/samsung-announces-worlds-first-200mp-sensor-for-telephoto-cameras/


Samsung Unveils World’s First 200MP Sensor for Smartphone Telephoto Cameras

 


Samsung has announced three new image sensors for main and sub cameras in upcoming smartphones. Among the trio of new chips, Samsung unveiled the world’s first 200-megapixel telephoto camera sensor for mobile devices.

The ISOCELL HP9, the industry’s first 200MP telephoto sensor for smartphones, features a Type 1/1.4 format and 0.56μm pixel size. Samsung explains that the sensor has a proprietary high-refractive microlens that uses a novel material and significantly improves the sensor’s light-gathering capabilities. This works by more precisely directing light to the corresponding RGB color filter. Samsung claims this results in 12% better light sensitivity (based on signal-to-noise ratio 10) and 10% improved autofocus contrast performance compared to Samsung’s prior telephoto sensor. 

“Notably, the HP9 excels in low-light conditions, addressing a common challenge for traditional telephoto cameras. Its Tetra²pixel technology merges 16 pixels (4×4) into a large, 12MP 2.24μm-sized sensor, enabling sharper portrait shots — even in dark settings — and creating dramatic out-of-focus bokeh effects,” the Korean tech giant explains.
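The Tetra²pixel merge described above is standard 4x4 pixel binning: sixteen 0.56 μm photosites are summed into one effective 2.24 μm pixel, dropping 200MP to roughly 12.5MP. A minimal sketch (illustrative only; a real remosaic pipeline must also handle the color filter array):

```python
import numpy as np

def bin_pixels(raw, k=4):
    """Merge k x k neighborhoods of photosites into single large pixels."""
    h, w = raw.shape
    return raw.reshape(h // k, k, w // k, k).sum(axis=(1, 3))

# 16 x 0.56 um photosites merge into one 2.24 um pixel; 200MP / 16 ~= 12.5MP
raw = np.arange(64, dtype=np.float64).reshape(8, 8)
binned = bin_pixels(raw)        # shape (2, 2); total signal is preserved
```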

When used alongside a new remosaic algorithm, Samsung says its new HP9 sensor offers 2x or 4x in-sensor zoom modes, achieving up to 12x total zoom when paired with a 3x optical zoom telephoto module, “all while maintaining crisp image quality.”

Next is the ISOCELL GNJ, a dual-pixel 50-megapixel image sensor in Type 1/1.57 format. This sensor sports 1.0μm pixels, and each pixel includes a pair of photodiodes, enabling “fast and accurate autofocus, similar to the way human eyes focus.” The sensor also captures complete color information, which Samsung says helps with focusing and image quality.

The sensor utilizes an in-sensor zoom function, which promises good video quality. It also offers benefits for still photography, as Samsung says the in-sensor zoom function can reduce artifacts and moiré.

Thanks to an improved high-transmittance anti-reflective layer (ARL), plus Samsung’s high-refractive microlenses, the GNJ boasts better light transmission and promises consistent image quality. It also has an upgraded pixel isolation material to minimize the crosstalk between adjacent pixels, resulting in more detailed, accurate photos.

As Samsung notes, these improvements also result in a more power-efficient design. The sensor offers a 29% improvement in live view power efficiency and a 34% reduction in power use when shooting 4K/60p video.

Rounding out the three new sensors is the ISOCELL JN5, a 50-megapixel Type 1/2.76 sensor with 0.64μm pixels. Because of its slim optical format, the new JN5 sensor can be used across primary and sub-cameras, including ultra-wide, wide, telephoto, and front-facing camera units.

The sensor includes dual vertical transfer gate (Dual VTG) technology to increase charge transfer within pixels, which reduces noise in extremely low-light conditions. It also leverages Super Quad Phase Detection (Super QPD) to rapidly adjust focus when capturing moving subjects.

Yet another fancifully named feature is dual slope gain (DSG), which Samsung says enhances the JN5’s high-dynamic range (HDR) performance. This works by amplifying analog signals (photons) into two signals, converting them into digital data, and combining them. This sounds similar to dual ISO technology, which expands dynamic range by combining low-gain and high-gain data into a single file.
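The dual-gain merge the article compares DSG to can be sketched generically (this is a textbook dual-conversion-gain HDR combine, not Samsung's actual DSG implementation; the gain ratio, saturation level and electron counts are made-up):

```python
import numpy as np

def combine_dual_gain(low, high, gain_ratio=4.0, sat=4095.0):
    """Merge high-gain and low-gain readouts of the same exposure.

    Use the high-gain sample (lower read noise) where it is unclipped,
    otherwise fall back to the low-gain sample rescaled by the gain ratio.
    """
    return np.where(high < sat, high, low * gain_ratio)

scene = np.array([100.0, 2000.0, 12000.0])   # photo-electrons (made-up)
high = np.clip(scene * 1.0, 0, 4095)          # high gain clips the bright pixel
low = np.clip(scene * 0.25, 0, 4095)          # low gain keeps headroom
hdr = combine_dual_gain(low, high)            # recovers [100, 2000, 12000]
```

The merged result keeps the low-noise high-gain data in the shadows while the low-gain path extends headroom in the highlights, which is where the dynamic-range gain comes from.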

Wednesday, July 03, 2024

onsemi acquires SWIR Vision Systems

From Businesswire: https://www.businesswire.com/news/home/20240702703913/en/onsemi-Enhances-Intelligent-Sensing-Portfolio-with-Acquisition-of-SWIR-Vision-Systems

onsemi Enhances Intelligent Sensing Portfolio with Acquisition of SWIR Vision Systems

SCOTTSDALE, Ariz.--(BUSINESS WIRE)--As part of onsemi’s continuous drive to provide the most robust, cutting-edge technologies for intelligent image sensing, the company announced today it has completed the acquisition of SWIR Vision Systems®. SWIR Vision Systems is a leading provider of CQD® (colloidal quantum-dot-based) short wavelength infrared (SWIR) technology – a technology that extends the detectable light spectrum to see through objects and capture images that were not previously possible. The integration of this patented technology within onsemi’s industry-leading CMOS sensors will significantly enhance the company’s intelligent sensing product portfolio and pave the way for further growth in key markets including industrial, automotive and defense.

CQD uses nanoparticles or crystals with unique optical and electronic properties that can be precisely tuned to absorb an extended wavelength of light. This technology extends the visibility and detection of systems beyond the range of standard CMOS sensors to SWIR wavelengths. To date, SWIR technology has been limited in adoption due to the high cost and manufacturing complexity of the traditional indium gallium arsenide (InGaAs) process. With this acquisition, onsemi will combine its silicon-based CMOS sensors and manufacturing expertise with the CQD technology to deliver highly integrated SWIR sensors at lower cost and higher volume. The result is more compact, cost-effective imaging systems that offer an extended spectrum and can be used in a wide array of commercial, industrial and defense applications.

These advanced SWIR sensors are able to see through dense materials, gases, fabrics and plastics, which is essential across many industries, particularly for industrial applications such as surveillance systems, silicon inspection, machine vision imaging and food inspection. In autonomous vehicle imaging, the extended spectral range will provide better visibility in difficult conditions such as extreme darkness, thick fog or winter glare.

SWIR Vision Systems is now a wholly owned subsidiary of onsemi, with its highly skilled team being integrated into the company’s Intelligent Sensing Group. The team will continue to operate in North Carolina. The acquisition is not expected to have any meaningful impact on onsemi’s near to midterm financial outlook.

Cambridge Mechatronics CEO interview: Capturing the smartphone camera market and more

 

In this episode of the Be Inspired series, Andy Osmant, CEO of Cambridge Mechatronics, explains the countless use cases for the company’s shape memory alloy (SMA) actuators, from smartphone cameras to insulin pumps, and how the company decided which markets to target. Andy also delves into their experience changing business models to also sell semiconductors, and how being part of the Cambridge ecosystem has supported the growth of the business.
 

0:00-3:54 About Cambridge Mechatronics
3:54-5:14 Controlling SMA
5:14-9:15 Supply chains and relationships
9:15-11:56 Other use cases
11:56-15:51 The Cambridge ecosystem
15:51-19:36 Looking ahead

Saturday, June 29, 2024

Sony announces IMX901/902 wide aspect ratio global shutter CIS

Press release: https://www.sony-semicon.com/en/info/2024/2024062701.html

Product page: https://www.sony-semicon.com/en/products/is/industry/gs/imx901-902.html

Sony Semiconductor Solutions to Release 8K Horizontal, Wide-Aspect Ratio Global Shutter Image Sensor for C-mount Lenses That Delivers High Image Quality and High-Speed Performance

Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) announced today the upcoming release of the IMX901, a wide-aspect ratio global shutter CMOS image sensor with 8K horizontal resolution and approximately 16.41 effective megapixels. The IMX901 supports C-mount lenses, which are widely used in industrial applications, and offers high image quality and high-speed performance, helping to solve a variety of industrial challenges.

The new sensor provides high resolution and a wide field of view with 8K horizontal and 2K vertical pixels. In addition, it features Pregius S™, a global shutter technology with a unique pixel structure, to deliver low-noise, high-quality, high-speed, and distortion-free imaging in a compact size.

In addition to this product, SSS will also release the IMX902, which has 6K horizontal and 2K vertical pixels and approximately 12.38 effective megapixels, to expand its product lineup of global shutter image sensors.

In today's logistics systems, where belt conveyors are seeing wider belt widths and faster speeds, there is a growing demand for image sensors that can expand the imaging area for barcode reading and improve imaging performance and efficiency. Typically, multiple cameras are required to capture the entire belt conveyor in the field of view, which can lead to concerns about increased camera system size and costs.

A single camera equipped with the new sensor announced today can capture a wide area horizontally, helping to reduce the number of cameras required and the associated cost compared to conventional methods. In addition, leveraging SSS's original back-illuminated structure, Pregius S, the new product delivers both distortion-free high-speed imaging and high image quality. The product also features a wide dynamic range exceeding 70 dB and clearly captures fast-moving objects with a high frame rate of 134 fps.
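For reference, the 70 dB figure follows the usual definition of dynamic range as the ratio of the largest to the smallest detectable signal. A quick sketch with hypothetical electron counts (not Sony's numbers):

```python
import math

def dynamic_range_db(full_well_e, noise_floor_e):
    """Dynamic range in dB: 20 * log10(full-well capacity / noise floor)."""
    return 20 * math.log10(full_well_e / noise_floor_e)

# Hypothetical values: 70 dB corresponds to a ~3160:1 signal ratio,
# e.g. a ~10,000 e- full well over a ~3.2 e- noise floor
print(round(dynamic_range_db(10000, 3.16), 1))
```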

This product, which can capture images in wide aspect ratio with high image quality and high speed, can be used for barcode reading on belt conveyors at logistics facilities, machine vision inspections and appearance inspections to detect fine defects and scratches, and other applications. 

 
Friday, June 21, 2024

Omnivision presents event camera deblurring paper at CVPR 2024

EVS-assisted Joint Deblurring Rolling-Shutter Correction and Video Frame Interpolation through Sensor Inverse Modeling

Event-based Vision Sensors (EVS) are gaining popularity for enhancing CMOS Image Sensor (CIS) video capture. Nonidealities of EVS such as pixel or readout latency can significantly influence the quality of the enhanced images and warrant dedicated consideration in the design of fusion algorithms. A novel approach is presented for jointly computing deblurred, rolling-shutter-artifact-corrected high-speed videos with frame rates up to 10000 FPS, using inherently blurry rolling-shutter CIS frames of 120 FPS to 150 FPS in conjunction with EVS data from a hybrid CIS-EVS sensor. EVS pixel latency, readout latency and the sensor's refractory period are explicitly incorporated into the measurement model. This inverse problem is solved in a per-pixel manner using an optimization-based framework. The interpolated images are subsequently processed by a novel refinement network. The proposed method is evaluated using simulated and measured datasets, under natural and controlled environments. Extensive experiments show a reduced shadowing effect, a 4 dB increase in PSNR, and a 12% improvement in LPIPS score compared to state-of-the-art methods.
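For context, a 4 dB PSNR increase corresponds to roughly a 2.5x reduction in mean squared error (10^0.4 ≈ 2.51). A minimal PSNR computation (generic, not from the paper):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 128.0)
noisy = ref + 4.0                # constant error of 4 levels -> MSE = 16
print(round(psnr(ref, noisy), 2))
```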

 
Wednesday, June 19, 2024

CEA-Leti announces three-layer CIS

CEA-Leti Reports Three-Layer Integration Breakthrough On the Path for Offering AI-Embedded CMOS Image Sensors
 
This Work Demonstrates Feasibility of Combining Hybrid Bonding and High-Density Through-Silicon Vias
 
DENVER – May 31, 2024 – CEA-Leti scientists reported a series of successes in three related projects at ECTC 2024 that are key steps to enabling a new generation of CMOS image sensors (CIS) that can exploit all the image data to perceive a scene, understand the situation and intervene in it – capabilities that require embedding AI in the sensor.
 
Demand for smart sensors is growing rapidly because of their high-performance imaging capabilities in smartphones, digital cameras, automobiles and medical devices. This demand for improved image quality and functionality enhanced by embedded AI has presented manufacturers with the challenge of improving sensor performance without increasing the device size.
 
“Stacking multiple dies to create 3D architectures, such as three-layer imagers, has led to a paradigm shift in sensor design,” said Renan Bouis, lead author of the paper, “Backside Thinning Process Development for High-Density TSV in a 3-Layer Integration”.
 
“The communication between the different tiers requires advanced interconnection technologies, a requirement that hybrid bonding meets because of its very fine pitch in the micrometer and even sub-micrometer range,” he said. “High-density through-silicon via (HD TSV) has a similar density that enables signal transmission through the middle tiers. Both technologies contribute to the reduction of wire length, a critical factor in enhancing the performance of 3D-stacked architectures.”
 
‘Unparalleled Precision and Compactness’
 
The three projects applied the institute’s previous work on stacking three 300 mm silicon wafers using those technology bricks. “The papers present the key technological bricks that are mandatory for manufacturing 3D, multilayer smart imagers capable of addressing new applications that require embedded AI,” said Eric Ollier, project manager at CEA-Leti and director of IRT Nanoelec’s Smart Imager program. The CEA-Leti institute is a major partner of IRT Nanoelec.
 
“Combining hybrid bonding with HD TSVs in CMOS image sensors could facilitate the integration of various components, such as image sensor arrays, signal processing circuits and memory elements, with unparalleled precision and compactness,” said Stéphane Nicolas, lead author of the paper, “3-Layer Fine Pitch Cu-Cu Hybrid Bonding Demonstrator With High Density TSV For Advanced CMOS Image Sensor Applications,” which was chosen as one of the conference’s highlighted papers.
 
The project developed a three-layer test vehicle that featured two embedded Cu-Cu hybrid-bonding interfaces, face-to-face (F2F) and face-to-back (F2B), and with one wafer containing high-density TSVs.
 
Ollier said the test vehicle is a key milestone because it demonstrates both feasibility of each technological brick and also the feasibility of the integration process flow. “This project sets the stage to work on demonstrating a fully functional three-layer, smart CMOS image sensor, with edge AI capable of addressing high performance semantic segmentation and object-detection applications,” he said.
 
At ECTC 2023, CEA-Leti scientists reported a two-layer test vehicle combining a 10-micron high, 1-micron diameter HD TSV and highly controlled hybrid bonding technology, both assembled in F2B configuration. The recent work then shortened the HD TSV to six microns high, which led to development of a two-layer test vehicle exhibiting low dispersion electrical performances and enabling simpler manufacturing.
 
’40 Percent Decrease in Electrical Resistance’
 
“Our 1-by-6-micron copper HD TSV offers improved electrical resistance and isolation performance compared to our 1-by-10-micron HD TSV, thanks to an optimized thinning process that enabled us to reduce the substrate thickness with good uniformity,” said Stéphan Borel, lead author of the paper, “Low Resistance and High Isolation HD TSV for 3-Layer CMOS Image Sensors”.
 
“This reduced height led to a 40 percent decrease in electrical resistance, in proportion with the length reduction. Simultaneous lowering of the aspect ratio increased the step coverage of the isolation liner, leading to a better voltage withstand,” he added.
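The 40 percent figure is simply the linear scaling of resistance with via length, R = ρL/A: shortening the TSV from 10 μm to 6 μm is a (10 − 6)/10 = 40% reduction. A quick sanity check with nominal copper resistivity (the TSV dimensions are from the article; the resistivity is a textbook value, not CEA-Leti's measured data):

```python
import math

def tsv_resistance(height_um, diameter_um, rho=0.0172):
    """Resistance of a cylindrical copper via, R = rho * L / A (rho in ohm*um)."""
    area = math.pi * (diameter_um / 2) ** 2
    return rho * height_um / area

r10 = tsv_resistance(10, 1)      # the earlier 1 x 10 um HD TSV
r6 = tsv_resistance(6, 1)        # the shortened 1 x 6 um HD TSV
print(round(1 - r6 / r10, 2))    # the reported 40 percent decrease
```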
 
“With these results, CEA-Leti is now clearly identified as a global leader in this new field dedicated to preparing the next generation of smart imagers,” Ollier explained. “These new 3D multi-layer smart imagers with edge AI implemented in the sensor itself will really be a breakthrough in the imaging field, because edge AI will increase imager performance and enable many new applications.”