Thursday, July 28, 2022

Sigma Foveon sensor will be ready in 2022

From PetaPixel:

Sigma’s CEO Kazuto Yamaki has revealed that the company’s efforts in making a full-frame Foveon sensor are on track to be finished by the end of the year. 


Sigma’s Foveon sensors use a proprietary three-layer structure in which red, green, and blue pixels each have their own full layer. In traditional sensors, the three pixels share a single layer in a mosaic arrangement and the camera “fills in” missing colors by examining neighboring pixels.

Since each pixel of a photo is recorded in three colors, the resulting photo should be sharper with better color accuracy and fewer artifacts.
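
To make the "filling in" step concrete, here is a minimal, hypothetical sketch of bilinear demosaicing for an RGGB mosaic. It is illustrative only (not any vendor's pipeline); the kernels and function names are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (illustrative only).

    raw: 2D array in which each photosite recorded only one of R, G, or B.
    The missing colors at each pixel are "filled in" by averaging the
    neighboring photosites of that color.
    """
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1   # R at even rows/cols
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1   # B at odd rows/cols
    g_mask = 1.0 - r_mask - b_mask                      # G everywhere else

    # Bilinear interpolation kernels (normalized by the available neighbors)
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0

    def interp(mask, kernel):
        num = convolve(raw * mask, kernel, mode="mirror")
        den = convolve(mask, kernel, mode="mirror")
        return num / np.maximum(den, 1e-8)

    # A Foveon-style stacked sensor records all three values at every pixel
    # and would not need this interpolation step at all.
    return np.dstack([interp(r_mask, k_rb),
                      interp(g_mask, k_g),
                      interp(b_mask, k_rb)])
```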


The release had been delayed on at least two occasions in the past due to technical challenges, once in 2020 and again in 2021. The initial announcement of this sensor was made back in 2018. In February 2022, Yamaki indicated that the company was in the second of three testing stages; the final stage will involve mass-production testing.

Friday, July 22, 2022

Prophesee interview in EETimes

EETimes has published an interview with the CEO of Prophesee about the company's event sensor technology. Some excerpts are below.

Prophesee collaborated with Sony on creating the IMX636 event sensor chip.

Meaning of "neuromorphic"

Most companies doing neuromorphic sensing and computing have a similar vision in mind, but implementations and strategies will be different based on varying product, market, and investment constraints. ...

... there is a fundamental belief that the biological model has superior characteristics compared to the conventional ...

Markets targeted

... the sector closest to commercial adoption of this technology is industrial machine vision. ...

The second key market for the IMX636 is consumer technologies, ... the event-based camera is used alongside a full-frame camera, detecting motion ... correct any blur.

Prophesee is also working with a customer on automotive driver monitoring solutions... Applications here include eye blinking detection, eye tracking or face tracking, and micro-expression detection.

Commercialization strategy

The company recently released a new evaluation kit (EVK4) for the IMX636. The Metavision SDK for event-based vision, which includes a simulator, has also recently been open-sourced ...
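
For readers new to event cameras: sensors such as the IMX636 output a sparse stream of (x, y, timestamp, polarity) events instead of full frames. The sketch below is a generic, hypothetical illustration of accumulating such a stream into a frame for visualization; it deliberately does not use the Metavision SDK API, and all names are made up.

```python
import numpy as np

def accumulate_events(events, width, height, t_start, t_end):
    """Accumulate an event stream into a signed 2D histogram (toy example).

    events: iterable of (x, y, t, polarity) tuples with polarity in {+1, -1}.
    Each event marks a per-pixel brightness change; unlike a conventional
    frame, only changing pixels produce data, with fine-grained timestamps.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, p in events:
        if t_start <= t < t_end:
            frame[y, x] += p          # +1 for ON events, -1 for OFF events
    return frame

# Example: three synthetic events within a 10 ms window (timestamps in microseconds)
events = [(10, 5, 1_000, +1), (10, 5, 2_500, +1), (11, 5, 4_000, -1)]
frame = accumulate_events(events, width=64, height=48, t_start=0, t_end=10_000)
print(frame[5, 10], frame[5, 11])   # 2, -1
```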

Future Directions

Prophesee plans to continue development of both hardware and software, alongside new evaluation kits, development kits, and reference designs.

Two future directions:

  • further reduction of pixel size (pixel pitch) and of the overall sensor size to make it suitable for compact consumer applications such as wearables; and

  • facilitating the integration of event-based sensing with conventional SoC platforms.

“The closer you get to the acquisition of the information, the better off you are in terms of efficiency and low latency. You also avoid the need to encode and transmit the data. So this is something that we are pursuing.”

“The ultimate goal of neuromorphic technology is to have both the sensing and processing neuromorphic or event-based, but we are not yet there in terms of maturity of this type of solution.”

Full article here: https://www.eetimes.com/neuromorphic-sensing-coming-soon-to-consumer-products/

Thursday, July 21, 2022

3D cameras for the metaverse

A press release from II-VI Inc. announces a joint effort with Artilux on a SWIR 3D camera for the "metaverse".

https://ii-vi.com/news/ii-vi-incorporated-and-artilux-demonstrate-a-3d-camera-for-enhanced-user-experience-in-the-metaverse/

PITTSBURGH and HSINCHU, TAIWAN, July 18, 2022 (GLOBE NEWSWIRE) – II‐VI Incorporated (Nasdaq: IIVI), a leader in semiconductor lasers, and Artilux, a leader in germanium silicon (GeSi) photonics and CMOS SWIR sensing technology, today announced a joint demonstration of a next-generation 3D camera with much longer range and higher image resolution to greatly enhance user experience in the metaverse.


Investments in the metaverse infrastructure are accelerating and driving the demand for sensors that enable more realistic and immersive virtual experiences. II-VI and Artilux combined their proprietary technologies in indium phosphide (InP) semiconductor lasers and GeSi sensor arrays, respectively, to demonstrate a miniature 3D camera that operates in the short-wavelength infrared (SWIR), at 1380 nm, resulting in significantly higher performance than existing cameras operating at 940 nm.


“The longer infrared wavelength provides better contrasts and reveals material details that are otherwise not visible with shorter-wavelength illumination, especially in outdoor environments,” said Dr. Julie Sheridan Eng, Sr. Vice President, Optoelectronic Devices & Modules Business Unit, II-VI. “By designing a camera that operates at 1380 nm instead of 940 nm, we can illuminate the scene with greater brightness and still remain well within the margins of eye safety requirements. In addition, the atmosphere absorbs more sunlight at 1380 nm than at 940 nm, which reduces background light interference, greatly improving the signal-to-noise ratio and enabling cameras with longer range and better image resolution.”


“The miniature SWIR 3D camera can be seamlessly integrated into next-generation consumer devices, many of which are under development for augmented-, mixed-, and virtual-reality applications,” said Dr. Neil Na, co-founder and CTO of Artilux. “II‑VI and Artilux demonstrated a key capability that will enable the metaverse to become a popular venue for entertainment, work, and play. The SWIR camera demonstration provides a glimpse of the future of 3D sensing in the metaverse, with displays that can identify, delineate, classify, and render image content, or with avatars that can experience real-time eye contact and facial expressions.” 


II-VI provided the highly integrated SWIR illumination module comprising InP edge-emitting lasers that deliver up to 2 W of output power and optical diffusers, in surface-mount technology (SMT) packages for low-cost and high-quality assembly. Artilux’s camera features a high-bandwidth and high-quantum-efficiency GeSi SWIR sensor array based on a scalable CMOS technology platform. Combined, the products enable a broad range of depth-sensing applications in consumer and automotive markets. 


About II-VI Incorporated
II-VI Incorporated, a global leader in engineered materials and optoelectronic components, is a vertically integrated manufacturing company that develops innovative products for diversified applications in communications, industrial, aerospace & defense, semiconductor capital equipment, life sciences, consumer electronics, and automotive markets. Headquartered in Saxonburg, Pennsylvania, the Company has research and development, manufacturing, sales, service, and distribution facilities worldwide. The Company produces a wide variety of application-specific photonic and electronic materials and components, and deploys them in various forms, including integrated with advanced software to support our customers. For more information, please visit us at www.ii-vi.com.


About Artilux
Artilux, renowned for being the world leader of GeSi photonic technology, has been at the forefront of wide-spectrum 3D sensing and consumer optical connectivity since 2014. Established on fundamental technology breakthroughs, Artilux has been making multidisciplinary innovations covering integrated optics, system architecture to computing algorithm, and emerged as an innovation enabler for smartphones, autonomous driving, augmented reality, and beyond. Our vision is to keep pioneering the frontier of photonic technologies and transform them into enrichment for real life experience. We enlighten the path from information to intelligence. Find out more at www.artiluxtech.com.


Wednesday, July 20, 2022

Review of indirect time-of-flight 3D cameras (IEEE TED June 2022)

C. Bamji et al. from Microsoft published a paper titled "A Review of Indirect Time-of-Flight Technologies" in IEEE Trans. Electron Devices (June 2022).

Abstract: Indirect time-of-flight (iToF) cameras operate by illuminating a scene with modulated light and inferring depth at each pixel by combining the back-reflected light with different gating signals. This article focuses on amplitude-modulated continuous-wave (AMCW) time-of-flight (ToF), which, because of its robustness and stability properties, is the most common form of iToF. The figures of merit that drive iToF performance are explained and plotted, and system parameters that drive a camera’s final performance are summarized. Different iToF pixel and chip architectures are compared and the basic phasor methods for extracting depth from the pixel output values are explained. The evolution of pixel size is discussed, showing performance improvement over time. Depth pipelines, which play a key role in filtering and enhancing data, have also greatly improved over time with sophisticated denoising methods now available. Key remaining challenges, such as ambient light resilience and multipath invariance, are explained, and state-of-the-art mitigation techniques are referenced. Finally, applications, use cases, and benefits of iToF are listed.
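
As a concrete example of the phasor method mentioned in the abstract, the sketch below shows the standard four-phase AMCW depth calculation in simplified textbook form. It is not code from the paper, and sign conventions vary between implementations.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def amcw_depth(a0, a90, a180, a270, f_mod):
    """Depth from four-phase AMCW iToF correlation samples (simplified).

    a0..a270: correlation samples taken with gate offsets of 0, 90, 180,
    and 270 degrees. f_mod: modulation frequency in Hz.
    """
    i = a0 - a180                            # in-phase component
    q = a270 - a90                           # quadrature component
    phase = np.arctan2(q, i) % (2 * np.pi)   # phase delay in [0, 2*pi)
    amplitude = 0.5 * np.hypot(i, q)         # modulation amplitude
    depth = C * phase / (4 * np.pi * f_mod)  # wraps at C / (2 * f_mod)
    return depth, amplitude

# Example: 100 MHz modulation gives an unambiguous range of about 1.5 m
d, a = amcw_depth(0.8, 0.3, 0.2, 0.7, f_mod=100e6)
print(f"depth = {d:.3f} m, amplitude = {a:.2f}")
```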

Figures in the paper illustrate: the use of time gates to integrate returning light; iToF camera measurement; modulation contrast vs. modulation frequency in iToF cameras; trends in pixel size and pixel array size since 2012; the trend in near-infrared pixel quantum efficiency since 2010; multigain column readout; and multipath mitigation.

DOI link: 10.1109/TED.2022.3145762

Monday, July 18, 2022

Amphibious panoramic bio-inspired camera in Nature Electronics

M. Lee et al. have published a paper titled "An amphibious artificial vision system with a panoramic visual field" in Nature Electronics. The paper is joint work between researchers in Korea (Institute of Basic Science, Seoul National University, Pusan National University) and the USA (UT Austin and MIT).

Abstract: Biological visual systems have inspired the development of various artificial visual systems including those based on human eyes (terrestrial environment), insect eyes (terrestrial environment) and fish eyes (aquatic environment). However, attempts to develop systems for both terrestrial and aquatic environments remain limited, and bioinspired electronic eyes are restricted in their maximum field of view to a hemispherical field of view (around 180°). Here we report the development of an amphibious artificial vision system with a panoramic visual field inspired by the functional and anatomical structure of the compound eyes of a fiddler crab. We integrate a microlens array with a graded refractive index and a flexible comb-shaped silicon photodiode array on a spherical structure. The microlenses have a flat surface and maintain their focal length regardless of changes in the external refractive index between air and water. The comb-shaped image sensor arrays on the spherical substrate exhibit an extremely wide field of view covering almost the entire spherical geometry. We illustrate the capabilities of our system via optical simulations and imaging demonstrations in both air and water.

Full paper text is behind a paywall. I could not find a preprint or author copy. However, the supplementary document and figures are freely accessible.
https://www.nature.com/articles/s41928-022-00789-9

Wednesday, July 13, 2022

IEEE International Conference on Computational Photography 2022 in Pasadena (Aug 1-3)


[Jul 16, 2022] Update from program chair Prof. Ioannis Gkioulekas: All paper presentations will be live-streamed on the ICCP YouTube channel: https://www.youtube.com/channel/UClptqae8N3up_bdSMzlY7eA

You can watch them for free, no registration required. You can also use the live stream to ask the presenting author questions.

ICCP will take place in person at Caltech (Pasadena, CA) from August 1 to 3, 2022. The final program is now available here: https://iccp2022.iccp-conference.org/program/

There will be an exciting lineup of:
  • three keynote speakers: Shree Nayar, Changhuei Yang, and Joyce Farrell;
  • ten invited speakers, spanning areas from acousto-optics and optical computing to space exploration and environmental conservation; and
  • 24 paper presentations and more than 80 poster and demo presentations.


List of accepted papers with oral presentations:

#16: Learning Spatially Varying Pixel Exposures for Motion Deblurring
Cindy Nguyen (Stanford University); Julien N. P. Martel (Stanford University); Gordon Wetzstein (Stanford University)

#43: MantissaCam: Learning Snapshot High-dynamic-range Imaging with Perceptually-based In-pixel Irradiance Encoding
Haley M So (Stanford University); Julien N. P. Martel (Stanford University); Piotr Dudek (School of Electrical and Electronic Engineering, The University of Manchester, UK); Gordon Wetzstein (Stanford University)

#47: Rethinking Learning-based Demosaicing, Denoising, and Super-Resolution Pipeline
Guocheng Qian (KAUST); Yuanhao Wang (KAUST); Jinjin Gu (The University of Sydney); Chao Dong (SIAT); Wolfgang Heidrich (KAUST); Bernard Ghanem (KAUST); Jimmy Ren (SenseTime Research; Qing Yuan Research Institute, Shanghai Jiao Tong University)

#54: Physics vs. Learned Priors: Rethinking Camera and Algorithm Design for Task-Specific Imaging
Tzofi M Klinghoffer (Massachusetts Institute of Technology); Siddharth Somasundaram (Massachusetts Institute of Technology); Kushagra Tiwary (Massachusetts Institute of Technology); Ramesh Raskar (Massachusetts Institute of Technology)

#6: Analyzing phase masks for wide etendue holographic displays
Sagi Monin (Technion – Israel Institute of Technology); Aswin Sankaranarayanan (Carnegie Mellon University); Anat Levin (Technion)

#7: Wide etendue displays with a logarithmic tilting cascade
Sagi Monin (Technion – Israel Institute of Technology); Aswin Sankaranarayanan (Carnegie Mellon University); Anat Levin (Technion)

#65: Towards Mixed-State Coded Diffraction Imaging
Benjamin Attal (Carnegie Mellon University); Matthew O’Toole (Carnegie Mellon University)

#19: A Two-Level Auto-Encoder for Distributed Stereo Coding
Yuval Harel (Tel Aviv University); Shai Avidan (Tel Aviv University)

#35: First Arrival Differential LiDAR
Tianyi Zhang (Rice University); Akshat Dave (Rice University); Ashok Veeraraghavan (Rice University); Mel J White (Cornell); Shahaboddin Ghajari (Cornell University); Alyosha C Molnar (Cornell University); Ankit Raghuram (Rice University)

#46: PS2F: Polarized Spiral PSF for single-shot 3D sensing
Bhargav Ghanekar (Rice University); Vishwanath Saragadam (Rice University); Dushyant Mehra (Rice University); Anna-Karin Gustavsson (Rice University); Aswin Sankaranarayanan (Carnegie Mellon University); Ashok Veeraraghavan (Rice University)

#56: Double Your Corners, Double Your Fun: The Doorway Camera
William Krska (Boston University); Sheila Seidel (Boston University); Charles Saunders (Boston University); Robinson Czajkowski (University of South Florida); Christopher Yu (Charles Stark Draper Laboratory); John Murray-Bruce (University of South Florida); Vivek K Goyal (Boston University)

#8: Variable Imaging Projection Cloud Scattering Tomography
Roi Ronen (Technion); Schechner Yoav (Technion); Vadim Holodovsky (Technion)

#31: DIY hyperspectral imaging via polarization-induced spectral filters
Katherine Salesin (Dartmouth College); Dario R Seyb (Dartmouth College); Sarah Friday (Dartmouth College); Wojciech Jarosz (Dartmouth College)

#57: Wide-Angle Light Fields
Michael De Zeeuw (Carnegie Mellon University); Aswin Sankaranarayanan (Carnegie Mellon University)

#55: Computational Imaging using Ultrasonically-Sculpted Virtual Lenses
Hossein Baktash (Carnegie Mellon University); Yash Belhe (University of California, San Diego); Matteo Scopelliti (Carnegie Mellon University); Yi Hua (Carnegie Mellon University); Aswin Sankaranarayanan (Carnegie Mellon University); Maysamreza Chamanzar (Carnegie Mellon University)

#38: Dynamic structured illumination microscopy with a neural space-time model
Ruiming Cao (UC Berkeley); Fanglin Linda Liu (UC Berkeley); Li-Hao Yeh (Chan Zuckerberg Biohub); Laura Waller (UC Berkeley)

#39: Tensorial tomographic differential phase-contrast microscopy
Shiqi Xu (Duke University); Xiang Dai (University of California San Diego); Xi Yang (Duke University); Kevin Zhou (Duke University); Kanghyun Kim (Duke University); Vinayak Pathak (Duke University); Carolyn Glass (Duke University); Roarke Horstmeyer (Duke University)

#42: Style Transfer with Bio-realistic Appearance Manipulation for Skin-tone Inclusive rPPG
Yunhao Ba (UCLA); Zhen Wang (UCLA); Doruk Karinca (University of California, Los Angeles); Oyku Deniz Bozkurt (UCLA); Achuta Kadambi (UCLA)

#4: Robust Scene Inference under Dual Image Corruptions
Bhavya Goyal (University of Wisconsin-Madison); Jean-Francois Lalonde (Université Laval); Yin Li (University of Wisconsin-Madison); Mohit Gupta (University of Wisconsin-Madison)

#9: Time-of-Day Neural Style Transfer for Architectural Photographs
Yingshu Chen (The Hong Kong University of Science and Technology); Tuan-Anh Vu (The Hong Kong University of Science and Technology); Ka-Chun Shum (The Hong Kong University of Science and Technology); Binh-Son Hua (VinAI Research); Sai-Kit Yeung (Hong Kong University of Science and Technology)

#25: MPS-NeRF: Generalizable 3D Human Rendering from Multiview Images
Xiangjun Gao (Beijing institute of technology); Jiaolong Yang (Microsoft Research); Jongyoo Kim (Microsoft Research Asia); Sida Peng (Zhejiang University); Zicheng Liu (Microsoft); Xin Tong (Microsoft)

#26: Differentiable Appearance Acquisition from a Flash/No-flash RGB-D Pair
Hyun Jin Ku (KAIST); Hyunho Ha (KAIST); Joo-Ho Lee (Sogang University); Dahyun Kang (KAIST); James Tompkin (Brown University); Min H. Kim (KAIST)

#17: HiddenPose: Non-line-of-sight 3D Human Pose Estimation
Ping Liu (ShanghaiTech University); Yanhua Yu (ShanghaiTech University); Zhengqing Pan (ShanghaiTech University); Xingyue Peng (ShanghaiTech University); Ruiqian Li (ShanghaiTech University); wang yh (ShanghaiTech University); Shiying Li (ShanghaiTech University); Jingyi Yu (ShanghaiTech University)

#61: Physics to the Rescue: A Physically Inspired Deep Model for Rapid Non-line-of-sight Imaging
Fangzhou Mu (University of Wisconsin-Madison); Sicheng Mo (University of Wisconsin-Madison); Jiayong Peng (University of Science and Technology of China); Xiaochun Liu (University of Wisconsin-Madison); Ji Hyun Nam (University of Wisconsin-Madison); Siddeshwar Raghavan (Purdue University); Andreas Velten (University of Wisconsin-Madison); Yin Li (University of Wisconsin-Madison)

Detailed depth maps from gated cameras

Recent work from Princeton University's computational imaging lab shows a new method for generating highly detailed depth maps from a gated camera. 

This work was presented at the recent IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022) in New Orleans.

Abstract: Gated cameras hold promise as an alternative to scanning LiDAR sensors with high-resolution 3D depth that is robust to back-scatter in fog, snow, and rain. Instead of sequentially scanning a scene and directly recording depth via the photon time-of-flight, as in pulsed LiDAR sensors, gated imagers encode depth in the relative intensity of a handful of gated slices, captured at megapixel resolution. Although existing methods have shown that it is possible to decode high-resolution depth from such measurements, these methods require synchronized and calibrated LiDAR to supervise the gated depth decoder – prohibiting fast adoption across geographies, training on large unpaired datasets, and exploring alternative applications outside of automotive use cases. In this work, we propose an entirely self-supervised depth estimation method that uses gated intensity profiles and temporal consistency as a training signal. The proposed model is trained end-to-end from gated video sequences, does not require LiDAR or RGB data, and learns to estimate absolute depth values. We take gated slices as input and disentangle the estimation of the scene albedo, depth, and ambient light, which are then used to learn to reconstruct the input slices through a cyclic loss. We rely on temporal consistency between a given frame and neighboring gated slices to estimate depth in regions with shadows and reflections. We experimentally validate that the proposed approach outperforms existing supervised and self-supervised depth estimation methods based on monocular RGB and stereo images, as well as supervised methods based on gated images. Code is available at https://github.com/princeton-computationalimaging/Gated2Gated.
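
To build intuition for how depth is "encoded in the relative intensity of a handful of gated slices", here is a toy calculation, not the paper's Gated2Gated method: assuming two idealized, overlapping triangular range-intensity profiles, the intensity ratio of the two slices becomes a depth cue that is independent of the (unknown) albedo.

```python
import numpy as np

def gate_profile(depth_m, center_m, half_width_m):
    """Toy triangular range-intensity profile of a single gated slice."""
    return np.clip(1.0 - np.abs(depth_m - center_m) / half_width_m, 0.0, None)

def depth_from_two_gates(i1, i2, c1, c2, half_width_m):
    """Recover depth between gate centers c1 < c2 from the intensity ratio.

    Both slices are scaled by the same unknown albedo and illumination, so
    the ratio i1 / (i1 + i2) depends only on depth in this toy model.
    """
    r = i1 / (i1 + i2)
    denom = 2.0 - (c2 - c1) / half_width_m   # constant for this gate pair
    return c1 + half_width_m * (1.0 - r * denom)

# Example: a target at 37.5 m seen by gates centered at 30 m and 45 m
albedo = 0.4
i1 = albedo * gate_profile(37.5, 30.0, 20.0)
i2 = albedo * gate_profile(37.5, 45.0, 20.0)
print(depth_from_two_gates(i1, i2, 30.0, 45.0, 20.0))   # ~37.5
```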



An example gated imaging system is pictured in the bottom left; it consists of a synchronized camera and a VCSEL flash illumination source (not shown). The system integrates the scene response over narrow depth ranges, as illustrated in the bottom row, so the overlapping gated slices contain implicit depth information, following the time-of-flight principle, at full image resolution. In comparison, the LiDAR sensor illustrated in the top left sends out point-wise illumination pulses, yielding the sparse depth representation depicted in the top row. The proposed self-supervised Gated2Gated learning technique recovers dense depth (middle row) from the set of three gated images shown, by learning from temporal and gated illumination cues.

The paper shows results in a variety of challenging driving conditions such as nighttime, fog, rain and snow.

A. Walia et al., "Gated2Gated: Self-Supervised Depth Estimation from Gated Images", CVPR 2022.

Monday, July 11, 2022

3D Wafer Stacking: Review paper in IEEE TED June 2022 Issue

In the June 2022 issue of IEEE Trans. Electron Devices, in a paper titled "A Review of 3-Dimensional Wafer Level Stacked Backside Illuminated CMOS Image Sensor Process Technologies," Wuu et al. write:

Over the past 10 years, 3-dimensional (3-D) wafer-level stacked backside illuminated (BSI) CMOS image sensors (CISs) have undergone rapid progress in development and performance and are now in mass production. This review paper covers the key processes and technology components of 3-D integrated BSI devices, as well as results from early devices fabricated and tested in 2007 and 2008. This article is divided into three main sections. Section II covers wafer-level bonding technology. Section III covers the key wafer fabrication process modules for BSI 3-D wafer-level stacking. Section IV presents the device results.

This paper has quite a long list of acronyms. Here is a quick reference:
BDTI = backside deep trench isolation
BSI = backside illumination
BEOL = back end of line
HB = hybrid bonding
TSV = through silicon via
HAST = highly accelerated (temperature and humidity) stress test
SOI = silicon on insulator
BOX = buried oxide

Section II goes over wafer level direct bonding methods.

Section III discusses important aspects of stacked design development for BSI (wafer thinning, hybrid bonding, backside deep trench isolation, pyramid structure to improve quantum efficiency, use of high-k dielectric film to deal with crystal defects, and pixel performance analyses).

Section IV shows some results of early stacked designs.

Full article: https://doi.org/10.1109/TED.2022.3152977

Friday, July 08, 2022

Xiaomi 12S Ultra will have a 1-inch sensor

From PetaPixel:

Xiaomi has announced that its upcoming 12S Ultra will use the full size of Sony’s IMX989 1-inch sensor. The phone, which is also co-developed with Leica, will be announced on July 4.

Xiaomi’s Lei Jun says that the 1-inch sensor that is coming to the 12S Ultra, crucially, won’t be cropped. How the company plans to deal with physical issues Sony came up against in its phone isn’t clear. Jun also says that Xiaomi didn’t just buy the sensor, but that it was co-developed between the two companies with a total investment cost of $15 million split evenly between them. The fruits of this development will first come to the 12S Ultra before being made available to other smartphone manufacturers, so it’s not exclusive to Xiaomi forever.

... only the 12S Ultra will feature a 1-inch sensor while the 12S and 12S Pro will feature the Sony IMX707 instead.

Thursday, July 07, 2022

High resolution ToF module from Analog Devices

Analog Devices has released the ADTF3175, an industrial-grade megapixel ToF module, and the ADSD3030, a VGA-resolution sensor that aims to bring high-accuracy ToF technology to a compact VGA footprint.


The ADTF3175 is a complete Time-of-Flight (ToF) module for high resolution 3D depth sensing and vision systems. Based on the ADSD3100, a 1 Megapixel CMOS indirect Time-of-Flight (iToF) imager, the ADTF3175 also integrates the lens and optical bandpass filter for the imager, an infrared illumination source containing optics, laser diode, laser diode driver and photodetector, a flash memory, and power regulators to generate local supply voltages. The module is fully calibrated at multiple range and resolution modes. To complete the depth sensing system, the raw image data from the ADTF3175 is processed externally by the host system processor or depth ISP.

The ADTF3175 image data output interfaces electrically to the host system over a 4-lane mobile industry processor interface (MIPI), Camera Serial Interface 2 (CSI-2) Tx interface. The module programming and operation are controlled through 4-wire SPI and I2C serial interfaces.

The ADTF3175 has module dimensions of 42mm × 31mm × 15.1mm, and is specified over an operating temperature range of -20°C to 65°C.

Applications:
Machine vision systems
Robotics
Building automation
Augmented reality (AR) systems

Price:
$197 in 1,000 Unit Quantities

The ADSD3030 is a CMOS 3D Time of Flight (ToF)-based 3D depth and 2D visible light imager that is available for integration into 3D sensor systems. The functional blocks required for readout, which include analog-to-digital converters (ADCs), amplifiers, pixel biasing circuitry, and sensor control logic, are built into the chip to enable a cost-effective and simple implementation into systems.

The ADSD3030 interfaces electrically to a host system over a mobile industry processor interface (MIPI), Camera Serial Interface 2 (CSI-2) interface. A lens plus optical band-pass filter for the imager and an infrared light source plus an associated driver are required to complete the working subsystem.

Applications:
Smartphones
Augmented reality (AR) and virtual reality (VR)
Machine vision systems (logistics and inventory)
Robotics (consumer and industrial)


Wednesday, July 06, 2022

Labforge releases new 20.5T ops/s AI machine vision camera

Labforge has designed and developed a smart camera called Bottlenose, which delivers 20.5 trillion operations per second of processing power and provides on-board AI, depth, feature points and matching, and a powerful ISP. The target audience is robotics and automation. The camera is built around a Toshiba Visconti-5 processor. Current models are available in stereo and monocular versions with Sony IMX577 image sensors; future models will offer a range of resolutions and shutter options.

Tuesday, July 05, 2022

Sony releases new sensors IMX487, IMX661

IMX487 UV 8.13MP

[Advertised as "new product launch" but this has been around for a while.]

Global shutter CMOS image sensor specialized for the UV spectrum

With the structure specially designed for the properties of the UV wavelengths coupled with Pregius S technology, the image sensor can capture undistorted images of moving objects within a UV range of 200–400 nm and at a high frame rate of 193 fps (operated in the 10-bit mode). This image sensor has a potential to expand the scope of application from the conventional use of UV cameras in the inspection of semiconductors, etc. to areas that require high-speed capability, such as sorting of recycled materials.

Low noise

This image sensor has adopted the component materials dedicated for UV range imaging, and a special structure has been developed for its light receiving area. These make it possible to maintain high UV sensitivity while significantly minimizing noises to produce high quality images.

Smaller pixels

The pixels are miniaturized down to 2.74 um while maintaining high UV sensitivity, realizing a small multi-pixel sensor of the 2/3 type with approximately 8.13 megapixels. It serves well with factory automation, but also for many other purposes, notably for outdoor use for infrastructure inspections, by virtue of its portability and high resolution.

IMX661 127MP

The IMX661 is a diagonal 56.73 mm (Type 3.6) CMOS active pixel type solid-state image sensor with a square pixel array and 127 M effective pixels. This chip features a global shutter with variable charge-integration time. This chip operates with analog 3.3 V, digital 1.2 V, and interface 1.8 V power supplies. (Applications: FA cameras)

Monday, July 04, 2022

Samsung's ISOCELL HP3 sensor

Samsung has published details about its new 200MP sensor, the ISOCELL HP3.

https://semiconductor.samsung.com/image-sensor/mobile-image-sensor/isocell-hp3/

Press release: https://news.samsung.com/global/samsung-unveils-isocell-image-sensor-with-industrys-smallest-0-56%CE%BCm-pixel


Samsung Electronics, a world leader in advanced semiconductor technology, today introduced the 200MP ISOCELL HP3, the image sensor with the industry’s smallest 0.56-micrometer (μm)-pixels.

“Samsung has continuously led the image sensor market trend through its technology leadership in high resolution sensors with the smallest pixels,” said JoonSeo Yim, Executive Vice President of Sensor Business Team at Samsung Electronics. “With our latest and upgraded 0.56μm 200MP ISOCELL HP3, Samsung will push on to deliver epic resolutions beyond professional levels for smartphone camera users.”

Epic Resolution Beyond Pro Levels

Since its first 108MP image sensor roll-out in 2019, Samsung has been leading the trend of next-generation, ultra-high-resolution camera development. Through the steady launch of new image sensors and advancements in performance, the company is once again forging ahead with the 0.56μm 200MP ISOCELL HP3.

The ISOCELL HP3, with a 12 percent smaller pixel size than the predecessor’s 0.64μm, packs 200 million pixels in a 1/1.4” optical format, which is the diameter of the area that is captured through the camera lens. This means that the ISOCELL HP3 can enable an approximately 20 percent reduction in camera module surface area, allowing smartphone manufacturers to keep their premium devices slim.

The ISOCELL HP3 comes with a Super QPD auto-focusing solution, meaning that all of the sensor’s pixels are equipped with auto-focusing capabilities. In addition, Super QPD uses a single lens over four-adjacent pixels to detect the phase differences in both horizontal and vertical directions. This paves way for a more accurate and quicker auto focusing for smartphone camera users.
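
As a generic illustration of phase detection on quad-pixel data (not Samsung's Super QPD implementation; the layout and names below are assumptions), the following sketch splits each 2x2 group into left/right or top/bottom half-images and estimates the shift between them, which indicates defocus.

```python
import numpy as np

def qpd_defocus_shift(quad_raw, axis="horizontal", max_shift=8):
    """Estimate a defocus shift from quad-pixel (2x2 per microlens) data.

    Generic phase-detection illustration only: split each 2x2 block into
    left/right (or top/bottom) halves and find the integer shift that best
    aligns the two resulting half-image profiles.
    """
    tl = quad_raw[0::2, 0::2].astype(float)   # top-left pixel of each block
    tr = quad_raw[0::2, 1::2].astype(float)   # top-right
    bl = quad_raw[1::2, 0::2].astype(float)   # bottom-left
    br = quad_raw[1::2, 1::2].astype(float)   # bottom-right

    if axis == "horizontal":
        a, b = (tl + bl).mean(axis=0), (tr + br).mean(axis=0)  # left vs. right view
    else:
        a, b = (tl + tr).mean(axis=1), (bl + br).mean(axis=1)  # top vs. bottom view

    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        aa = a[max(0, s): len(a) + min(0, s)]
        bb = b[max(0, -s): len(b) + min(0, -s)]
        err = np.mean((aa - bb) ** 2)          # sum-of-squared-differences score
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift   # sign and magnitude indicate defocus direction and amount
```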

The sensor also allows users to take videos in 8K at 30 frames-per-second (fps) or 4K at 120fps, with minimal loss in the field of view when taking 8K videos. Combined with the Super QPD solution, users can take movie-like cinematic footage with their mobile devices.

Ultimate Low Light Experience Through ‘Tetra2pixel’

The ISOCELL HP3 also provides an ultimate low-light experience, with the Tetra2pixel technology that combines four pixels into one to transform the 0.56μm 200MP sensor into a 1.12μm 50MP sensor, or a 12.5MP sensor with 2.24μm-pixels by combining 16 pixels into one. The technology enables the sensor to simulate a large-sized pixel sensor to take brighter and more vibrant shots even in dimmed environments, like in-doors or during nighttime.
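
The pixel-binning idea behind Tetra2pixel can be sketched as follows. This is a toy example that sums plain 2x2 or 4x4 blocks; the real sensor bins same-color pixels under its quad Bayer color filter, which this sketch ignores.

```python
import numpy as np

def bin_pixels(raw, factor):
    """Sum factor x factor blocks of a raw image (toy binning example).

    factor=2 turns a 0.56 um, 200 MP readout into a 1.12 um-equivalent 50 MP
    image; factor=4 gives a 2.24 um-equivalent 12.5 MP image. Resolution is
    traded for more collected signal per output pixel.
    """
    h, w = raw.shape
    h, w = h - h % factor, w - w % factor        # crop to a multiple of factor
    blocks = raw[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))

# Example on a random 8x8 "sensor"
raw = np.random.randint(0, 1024, size=(8, 8))
print(bin_pixels(raw, 2).shape, bin_pixels(raw, 4).shape)   # (4, 4) (2, 2)
```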

To maximize the dynamic range of the mobile image sensor, the ISOCELL HP3 adopts an improved Smart-ISO Pro feature. The technology merges image information made from the two conversion gains of Low and High ISO mode to create HDR images. The upgraded version of the technology comes with a triple ISO mode — Low, Mid and High — that further widens the sensor’s dynamic range. In addition, the improved Smart-ISO Pro enables the sensor to express images in over 4 trillion colors (14-bit color depth), 64 times more colors than the predecessor’s 68 billion (12-bit). Furthermore, by supporting staggered HDR along with Smart-ISO Pro, the ISOCELL HP3 can switch between the two solutions depending on the filming environment to produce high-quality HDR images.
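
Below is a minimal sketch of the general dual-conversion-gain HDR merge described above. It is not Samsung's Smart-ISO Pro algorithm; the clipping threshold and gain handling are assumptions made for illustration.

```python
import numpy as np

def merge_dual_gain(low_gain, high_gain, gain_ratio, clip_level=4095):
    """Merge low- and high-conversion-gain readouts into one HDR image (toy).

    low_gain, high_gain: same-scene captures in raw digital numbers.
    gain_ratio: high-gain DN per low-gain DN for the same exposure.
    Where the high-gain readout clips, fall back to the (rescaled) low-gain
    readout, extending dynamic range beyond a single readout's bit depth.
    """
    low = low_gain.astype(np.float32) * gain_ratio    # bring to high-gain scale
    high = high_gain.astype(np.float32)
    use_high = high < 0.95 * clip_level               # trust high gain when unclipped
    return np.where(use_high, high, low)

# Example: a 12-bit pair with a 4x conversion-gain ratio
low = np.array([[100, 4000], [2500, 50]], dtype=np.uint16)
high = np.array([[400, 4095], [4095, 200]], dtype=np.uint16)
print(merge_dual_gain(low, high, gain_ratio=4.0))
```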

Samples of the Samsung ISOCELL HP3 are currently available, and mass production is set to begin this year.


Effective Resolution: 16,320 x 12,288 (200 MP)

Pixel Size: 0.56 μm

Optical Format: 1/1.4"

Color Filter: Super QPD, Tetra2pixel, RGB Bayer Pattern

Normal Frame Rate: 7.5 fps @ full 200 MP, 27 fps @ 50 MP, 120 fps @ 12.5 MP

Video Frame Rate: 30 fps @ 8K, 120 fps @ 4K, 480 fps @ FHD

Shutter Type: Electronic rolling shutter

ADC Accuracy: 10-bit

Supply Voltage: 2.2 V analog, 1.8 V I/O, 0.9 V digital core

Operating Temperature: -20°C to +85°C

Interface: 4-lane D-PHY (2.5 Gbps per lane) / 3-lane C-PHY (4.0 Gsps per lane)

Chroma: Tetra2pixel

Auto Focus: Super QPD

HDR: Smart-ISO Pro (iDCG), Staggered HDR

Output Formats: RAW10/12/14

Analog Gain: x128 with high conversion gain

Product Status: Samples available
Product Status Samples Available