Lists

Friday, April 30, 2021

1/f and RTS Noise Theories

A Nature paper "Noise suppression beyond the thermal limit with nanotransistor biosensors" by Yurii Kutovyi, Ignacio Madrid, Ihor Zadorozhnyi, Nazarii Boichuk, Soo Hyeon Kim, Teruo Fujii, Laurent Jalabert, Andreas Offenhaeusser, Svetlana Vitusevich, and Nicolas Clément from Forschungszentrum Jülich (Germany) and the University of Tokyo (Japan) reviews recent theories of 1/f noise in MOS transistors and some ideas to reduce it.

The reduction ideas are not very practical in the image sensor usage context, but the authors predict that carrier-trap-based RTN and 1/f noise will become less common as transistor dimensions scale down to tens of nanometers. However, 1/f noise persists in the form of gate dielectric polarization noise even in very small MOSFETs. High-K dielectrics reduce this noise to a degree.

"While the actual transistor gates in processors reach the sub-10 nm range for optimum integration and power consumption, studies on design rules for the signal-to-noise ratio (S/N) optimization in transistor-based biosensors have been so far restricted to 1 µm2 device gate area, a range where the discrete nature of the defects can be neglected. In this study, which combines experiments and theoretical analysis at both numerical and analytical levels, we extend such investigation to the nanometer range and highlight the effect of doping type as well as the noise suppression opportunities offered at this scale. In particular, we show that, when a single trap is active near the conductive channel, the noise can be suppressed even beyond the thermal limit by monitoring the trap occupancy probability in an approach analog to the stochastic resonance effect used in biological systems."
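The classic picture behind this scaling argument is that 1/f noise in a large device is the superposition of many Lorentzian RTS spectra, while a deep-submicron device may contain only one or two active traps. A minimal Python sketch (the log-uniform spread of trap time constants is an illustrative assumption, not taken from the paper) shows how summing Lorentzians produces a near-1/f slope:

```python
import numpy as np

def lorentzian_psd(f, tau):
    """Power spectrum of a single two-level RTS fluctuator (time constant tau)."""
    return 4.0 * tau / (1.0 + (2.0 * np.pi * f * tau) ** 2)

f = np.logspace(0, 5, 200)              # 1 Hz .. 100 kHz
taus = np.logspace(-5, 0, 60)           # assumed log-uniform trap time constants
total = sum(lorentzian_psd(f, t) for t in taus)

# Fit the log-log slope inside the band covered by the trap corner
# frequencies: it should come out close to -1, i.e. a 1/f spectrum.
mask = (f > 10) & (f < 1e4)
slope = np.polyfit(np.log10(f[mask]), np.log10(total[mask]), 1)[0]
print(f"fitted spectral slope: {slope:.2f}")
```

With only a handful of traps the sum degenerates into a few visible Lorentzian plateaus, which is the single-trap RTN regime the paper exploits.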

Thursday, April 29, 2021

Xiaomi Phone Features Liquid Lens from Nextlens

IMVE reports that "Xiaomi Mix uses a liquid lens from Swiss firm Nextlens, sister company to Optotune, which makes liquid lenses for industrial use. Nextlens said the camera system has focusing speeds measured in thousandths of a second, and close focusing down to 3 cm."


ToF Flying Pixel Elimination

Princeton University and King Abdullah University of Science and Technology publish an arxiv.org paper "Mask-ToF: Learning Microlens Masks for Flying Pixel Correction in Time-of-Flight Imaging" by Ilya Chugunov, Seung-Hwan Baek, Qiang Fu, Wolfgang Heidrich, and Felix Heide.

"We introduce Mask-ToF, a method to reduce flying pixels (FP) in time-of-flight (ToF) depth captures. FPs are pervasive artifacts which occur around depth edges, where light paths from both an object and its background are integrated over the aperture. This light mixes at a sensor pixel to produce erroneous depth estimates, which can adversely affect downstream 3D vision tasks. Mask-ToF starts at the source of these FPs, learning a microlens-level occlusion mask which effectively creates a custom-shaped sub-aperture for each sensor pixel. This modulates the selection of foreground and background light mixtures on a per-pixel basis and thereby encodes scene geometric information directly into the ToF measurements. We develop a differentiable ToF simulator to jointly train a convolutional neural network to decode this information and produce high-fidelity, low-FP depth reconstructions. We test the effectiveness of Mask-ToF on a simulated light field dataset and validate the method with an experimental prototype. To this end, we manufacture the learned amplitude mask and design an optical relay system to virtually place it on a high-resolution ToF sensor. We find that Mask-ToF generalizes well to real data without retraining, cutting FP counts in half."
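The flying-pixel mechanism itself is easy to reproduce: in a continuous-wave ToF pixel the foreground and background returns add as phasors, and the decoded phase lands between the two true depths. A toy Python model (the modulation frequency and amplitudes here are illustrative, not from the paper):

```python
import numpy as np

C = 3e8               # speed of light, m/s
F_MOD = 20e6          # assumed CW modulation frequency, Hz

def cw_tof_depth(depths, amplitudes):
    """Depth decoded by a CW-ToF pixel that integrates several light paths.
    Each path contributes a phasor A*exp(j*phi) with phi = 4*pi*f*d/c;
    the pixel only sees the phase of the summed phasor."""
    phis = 4 * np.pi * F_MOD * np.asarray(depths) / C
    mixed = np.sum(np.asarray(amplitudes) * np.exp(1j * phis))
    return C * np.angle(mixed) / (4 * np.pi * F_MOD)

# An edge pixel mixing 50/50 foreground (1 m) and background (3 m) reports
# a depth where no surface exists -- a flying pixel.
print(cw_tof_depth([1.0], [1.0]))            # pure foreground: 1.0 m
print(cw_tof_depth([1.0, 3.0], [0.5, 0.5]))  # mixed: ~2.0 m
```

Mask-ToF attacks this at the aperture: by shaping which sub-aperture rays each pixel integrates, the foreground/background mixture becomes a learned, decodable code rather than an uncontrolled average.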

Wednesday, April 28, 2021

Sony Forecasts Higher Revenue but Lower Profit

Sony reports its annual results and forecasts for the next fiscal year. The company expects to have higher image sensor sales but lower profits in its next year:
  • FY20 sales decreased 5% year-on-year to 1 trillion 12.5 billion yen primarily due to lower sales of image sensors for mobile.
  • Operating income decreased a significant 89.7 billion yen year-on-year to 145.9 billion yen primarily due to an increase in research and development expenses and depreciation, as well as the impact of the decrease in sales.
  • FY21 sales are expected to increase 12% year-on-year to 1 trillion 130 billion yen and operating income is expected to decrease 5.9 billion yen to 140 billion yen.
  • In FY21, we expect that our market share on a volume basis will return to a similar level as it was in the fiscal year ended March 31, 2020, thanks to our efforts to expand our customer base in the mobile sensor business, and we will manage the business in a more proactive manner while keeping an eye on risk.
  • We plan to increase research expenses in FY21 by approximately 15%, or 25 billion yen, year-on-year to expand the type of products we sell and to shift to higher value-added models from the fiscal year ending March 31, 2023 (“FY22”).
  • We expect image sensor capital expenditures to be 285 billion yen, part of which was postponed from the previous fiscal year.
  • We plan to shift to higher value-added products that leverage Sony’s stacked technology in preparation for an improvement in the product mix from FY22, and we will concentrate our investment on production capacity necessary to produce them.
  • The other day, we held a completion ceremony for our new Fab 5 building at our Nagasaki Factory. Expansion of production capacity is progressing according to plan and we will build, expand and equip facilities in-line with the pace of expansion of our business going forward.
  • Shortages of semiconductors have become an issue recently, but, with the cooperation of our partners, we have already secured enough supply of logic semiconductors used in our image sensors to cover our production plan for this fiscal year.
  • However, there is a possibility that the semiconductor shortage will be prolonged, so we are accelerating the shift to higher value-added products that we have been advancing heretofore.
  • We are also continuing to proactively pursue mid- to long-term initiatives in the automotive and 3D sensing areas and will explain more details at the IR Day scheduled for next month.

THz Imager Uses Photodiode for Scene Illumination (Not a Mistake, for Illumination!)

OSA Optics Express publishes a paper "Ultra-low phase-noise photonic terahertz imaging system based on two-tone square-law detection" by Sebastian Dülme, Matthias Steeg, Israa Mohammad, Nils Schrinski, Jonas Tebart, and Andreas Stöhr from University of Duisburg-Essen, Germany.

"In this paper, we demonstrate a phase-sensitive photonic terahertz imaging system, based on two-tone square-law detection with a record-low phase noise. The system comprises a high-frequency photodiode (PD) for THz generation and a square-law detector (SLD) for THz detection. Two terahertz tones of approximately 300 GHz, separated by an intermediate frequency (IF) (7 GHz–15 GHz), are generated in the PD by optical heterodyning and radiated into free-space. After transmission through a device-under-test, the two-tones are self-mixed inside the SLD. The mixing results in an IF-signal, which still contains the phase information of the terahertz tones. To achieve ultra-low phase-noise, we developed a new mixing scheme using a reference PD and a low-frequency electrical local oscillator (LO) to get rid of additional phase-noise terms. In combination with a second reference PD, the output signal of the SLD can be down-converted to the kHz region to realize lock-in detection with ultra-low phase noise. The evaluation of the phase-noise shows the to-date lowest reported value of phase deviation in a frequency domain photonic terahertz imaging and spectroscopy system of 0.034°. Consequently, we also attain a low minimum detectable path difference of 2 µm for a terahertz difference frequency of 15 GHz. This is in the same range as in coherent single-tone THz systems. At the same time, it lacks their complexity and restrictions caused by the necessary optical LOs, photoconductive antennas, temperature control and delay lines."
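The self-mixing step can be illustrated numerically: squaring the sum of two tones produces an IF component whose phase tracks the THz path delay. The sketch below (all parameters chosen for illustration) recovers the ~0.036° IF phase shift that a 2 µm path difference produces at a 15 GHz IF, the same order as the paper's reported 0.034° phase deviation:

```python
import numpy as np

C = 3e8                      # speed of light, m/s
FS = 1e12                    # simulation sample rate, 1 THz
F1, F2 = 300e9, 315e9        # two THz tones; IF = F2 - F1 = 15 GHz
TAU = 2e-6 / C               # delay from a 2 um path difference

t = np.arange(2000) / FS     # 2 ns of signal: exactly 30 IF cycles
field = np.cos(2 * np.pi * F1 * (t - TAU)) + np.cos(2 * np.pi * F2 * (t - TAU))

# Square-law detection: the output follows the instantaneous power, and
# the cross term oscillates at the IF, keeping the THz phase difference.
i_det = field ** 2

spec = np.fft.rfft(i_det)
freqs = np.fft.rfftfreq(len(i_det), 1 / FS)
k = np.argmin(np.abs(freqs - (F2 - F1)))
phase_deg = np.degrees(np.angle(spec[k]))
print(f"IF phase shift from a 2 um path change: {abs(phase_deg):.3f} deg")
```

Note that the IF phase scales with the tone separation, not the THz carrier, which is why the path sensitivity is set by the 15 GHz difference frequency.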

Noise in Polarization Sensors

OSA publishes a paper "Analysis of signal-to-noise ratio of angle of polarization and degree of polarization" by Yingkai Chen, Zhongmin Zhu, Zuodong Liang, Leanne E. Iannucci, Spencer P. Lake, and Viktor Gruev from University of Illinois at Urbana-Champaign and Washington University in St. Louis.

"Recent advancements in nanofabrication technology have led to commercialization of single-chip polarization and color-polarization imaging sensors in the visible spectrum. Novel applications have arisen with the emergence of these sensors leading to questions about noise in the reconstructed polarization images. In this paper, we provide theoretical analysis for the input and output referred noise for the angle and degree of linear polarization information. We validated our theoretical model with experimental data collected from a division of focal plane polarization sensor. Our data indicates that the noise in the angle of polarization images depends on both incident light intensity and degree of linear polarization and is independent of the incident angle of polarization. However, noise in degree of linear polarization images depends on all three parameters: incident light intensity, angle and degree of linear polarization. This theoretical model can help guide the development of imaging setups to record optimal polarization information."
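For a division-of-focal-plane sensor with 0°/45°/90°/135° pixels, the standard Stokes reconstruction makes the paper's qualitative finding easy to check with a quick Monte Carlo (shot noise only, illustrative intensity and DoLP values):

```python
import numpy as np

rng = np.random.default_rng(0)

def dofp_measure(s0, dolp, aop_rad, trials=20000):
    """Simulate a 0/45/90/135-degree division-of-focal-plane super-pixel
    with Poisson shot noise and reconstruct AoP and DoLP."""
    angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
    # Malus-law mean photon counts behind each polarizer orientation
    mean = 0.5 * s0 * (1 + dolp * np.cos(2 * (aop_rad - angles)))
    i = rng.poisson(mean, size=(trials, 4)).astype(float)
    s1 = i[:, 0] - i[:, 2]
    s2 = i[:, 1] - i[:, 3]
    aop = 0.5 * np.arctan2(s2, s1)                 # angle of polarization
    dolp_est = np.hypot(s1, s2) / (0.5 * i.sum(axis=1))
    return aop, dolp_est

# AoP noise should be roughly constant across incident AoP values,
# depending only on intensity and DoLP.
stds = []
for aop_true in (0.2, 0.6, 1.0):
    aop, _ = dofp_measure(s0=10000, dolp=0.5, aop_rad=aop_true)
    stds.append(np.std(aop))
    print(f"AoP={aop_true:.1f} rad -> AoP std {stds[-1]:.4f} rad")
```

Re-running with a different `s0` or `dolp` shifts the AoP noise floor, matching the dependence reported in the abstract.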

Tuesday, April 27, 2021

iPhone 12 Pro Max Rear Camera Reverse Engineering

SystemPlus publishes a reverse engineering of Apple iPhone 12 Pro Max rear camera. One can see a high density of PDAF pixels on the SEM picture below:

BBC on Smartphone Optics Innovations

BBC publishes an article "Smaller and better smartphone cameras are on the way" reviewing startups and academic work on lens size reduction, zoom, and AF:

Ambarella Presentation

A March 2021 Ambarella presentation quotes interesting market data on automotive and security cameras:

Monday, April 26, 2021

Counterpoint on Smartphone Camera Trends

Counterpoint Research lists smartphone camera trends:
  • Higher resolution sensors
  • Large pixels and sensors
  • Multiple cameras
  • AI processing integration
Senior Analyst Ethan Qi said, “AI has enabled a series of exciting camera features such as dynamic object and facial recognition, light recognition and lighting effects, night mode enhancement, AI-based anti-distortion, AI-based stabilization, as well as an intelligent combination of multi-camera lenses.”

Always-On News: Qualcomm, Microsoft, Himax, Cadence

TinyML Summit 2021 publishes a panel discussion "Always-on AI vision: The path to disruptive, high-scale applications." The discussion participants are:
  • Moderator: Jeff HENCKELS, Director, Product Management & Business Development, Qualcomm
  • Peter BERNARD, Sr. Director, Silicon and Telecom, Azure Edge Devices, Platform & Services, Microsoft
  • Lian Jye SU, Principal Analyst, ABI Research
  • Edwin PARK, Principal Engineer, QUALCOMM Inc
  • Evan PETRIDIS, Chief Product Officer, EVP of Systems Engineering, Eta Compute
  • Tony CHIANG, Sr. Director of Marketing, Himax Imaging

Cadence announces two new vision DSP IP cores. The Tensilica Vision Q8 DSP delivers 2X the performance and memory bandwidth of the previous-generation core, with improved energy efficiency, for high-end vision and imaging applications in the automotive and mobile markets. The Tensilica Vision P1 DSP is optimized for always-on and smart sensor applications in the consumer market, providing an energy-efficient solution.

Tensilica Vision P1 DSP features and capabilities include:
  • Optimized for always-on applications including smart sensors, AR/VR glasses and IoT/smart home devices
  • 128-bit SIMD with 400 giga operations per second (GOPS) offers one-third the power and area plus 20 percent higher frequency compared to the widely deployed Vision P6 DSP
  • Architecture optimized for small memory footprint and operation in low-power mode

Sunday, April 25, 2021

Column-Parallel Sigma-Delta ADC Thesis

Université Paris-Saclay, France, publishes a PhD Thesis "Calibration of a two-step Incremental Sigma-Delta Analog-to-Digital Converter" by Li Huang.

"In the context of High Definition imagers, a trend is to integrate a bank of analog-to-digital converters adjacent to the pixel matrix. The disadvantage is a constraint on the form factor of the converter. An incremental inverter-based Sigma-Delta converter was designed during previous work while respecting these constraints. But the post-layout of the circuit resulted in a performance degradation, namely a resolution of 9 bits instead of the expected 14 bits. A calibration method was therefore necessary. This thesis proposes several correction methods implemented by digital filters applied on the output bits and on combinations of the output bits to take account of non-linear phenomena observed in post-layout simulation. The methods have been validated from the post-layout simulation results and achieve 14-bit resolution. To go further, the thesis also proposes a model of the circuit defects at the level of the integrators which are the most critical part of the circuit. This model, which implements parasitic capacitances, joins the post-layout simulation results with a very high precision, which makes it possible to consider ways of improvement for a future design."
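For context, a plain first-order incremental sigma-delta needs on the order of 2^B cycles per conversion for B bits, which is why two-step (and calibrated) architectures are attractive for column-parallel readout. A behavioral sketch of the first-order case (a generic textbook model, not the converter from the thesis):

```python
def inc_sd1(vin, n, vref=1.0):
    """One conversion of a first-order incremental sigma-delta ADC model.
    The integrator is reset, the loop runs for n cycles, and the bitstream
    average estimates vin with error bounded by ~2*vref/n -- so the cycle
    count must double for every extra bit of resolution."""
    acc, ones = 0.0, 0
    for _ in range(n):
        d = 1 if acc >= 0 else -1         # 1-bit quantizer decision
        ones += d == 1
        acc += vin - d * vref             # integrate input minus feedback
    return vref * (2 * ones - n) / n      # decode the bitstream

vin = 0.3721
for n in (64, 1024, 16384):
    print(f"n={n:5d}  |error|={abs(inc_sd1(vin, n) - vin):.6f}")
```

A two-step scheme instead digitizes the integrator residue in a second phase, trading the exponential cycle count for sensitivity to analog non-idealities, which is exactly what the thesis's digital calibration corrects.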

Saturday, April 24, 2021

Time-Based SPAD HDR Imaging Claimed to be Better than Dull Photon Counting

University of Wisconsin-Madison, USA, and Politecnico di Milano, Italy, publish an arxiv.org paper "Passive Inter-Photon Imaging" by Atul Ingle, Trevor Seets, Mauro Buttafava, Shantanu Gupta, Alberto Tosi, Mohit Gupta, and Andreas Velten.

"Digital camera pixels measure image intensities by converting incident light energy into an analog electrical current, and then digitizing it into a fixed-width binary representation. This direct measurement method, while conceptually simple, suffers from limited dynamic range and poor performance under extreme illumination -- electronic noise dominates under low illumination, and pixel full-well capacity results in saturation under bright illumination. We propose a novel intensity cue based on measuring inter-photon timing, defined as the time delay between detection of successive photons. Based on the statistics of inter-photon times measured by a time-resolved single-photon sensor, we develop theory and algorithms for a scene brightness estimator which works over extreme dynamic range; we experimentally demonstrate imaging scenes with a dynamic range of over ten million to one. The proposed techniques, aided by the emergence of single-photon sensors such as single-photon avalanche diodes (SPADs) with picosecond timing resolution, will have implications for a wide range of imaging applications: robotics, consumer photography, astronomy, microscopy and biomedical imaging."
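The core estimator can be sketched in a few lines: for an idealized SPAD, each inter-photon delay is a fixed dead time plus an exponentially distributed interval, so the flux follows from the mean delay. The model below ignores afterpulsing and timing jitter, and the dead-time value is an assumed typical figure, not one from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
DEAD_TIME = 150e-9        # assumed SPAD dead time, s

def estimate_flux(inter_photon_times, dead_time=DEAD_TIME):
    """Brightness from inter-photon timing: after each detection the SPAD
    is blind for the dead time, then waits an Exp(1/flux) interval, so
    flux ~ 1 / (mean interval - dead time)."""
    return 1.0 / (np.mean(inter_photon_times) - dead_time)

results = {}
for true_flux in (1e3, 1e6, 1e9):       # photons/s, spanning six decades
    dt = DEAD_TIME + rng.exponential(1.0 / true_flux, size=5000)
    results[true_flux] = estimate_flux(dt)
    print(f"true {true_flux:.0e}/s -> estimated {results[true_flux]:.3g}/s")
```

Unlike photon counting, which saturates once the count rate approaches 1/dead-time, the timing estimator keeps resolving brightness at high flux because the residual mean interval above the dead time keeps shrinking, which is the source of the extreme dynamic range.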


Omron and Politecnico di Milano Develop SPAD-based Rangefinder

Politecnico di Milano, Italy, and Omron, Japan, publish an MDPI paper "Spot Tracking and TDC Sharing in SPAD Arrays for TOF LiDAR" by Vincenzo Sesta, Fabio Severini, Federica Villa, Rudi Lussana, Franco Zappa, Ken Nakamuro, and Yuki Matsui.

"In this work, we study a novel architecture for Single Photon Avalanche Diode (SPAD) arrays suitable for handheld single point rangefinders, which is aimed at the identification of the objects’ position in the presence of strong ambient background illumination. The system will be developed for an industrial environment, and the array targets a distance range of about 1 m and a precision of a few centimeters. Since the laser spot illuminates only a small portion of the array, while all pixels are exposed to background illumination, we propose and validate through Monte Carlo simulations a novel architecture for the identification of the pixels illuminated by the laser spot to perform an adaptive laser spot tracking and a smart sharing of the timing electronics, thus significantly improving the accuracy of the distance measurement. Such a novel architecture represents a robust and effective approach to develop SPAD arrays for industrial applications with extremely high background illumination."
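The spot-identification step can be approximated with simple count statistics: pixels under the laser spot accumulate counts that are implausible under the background rate alone, and only those pixels need TDC resources. A toy Monte Carlo in the spirit of (but far simpler than) the paper's simulations, with all rates and array sizes invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 16x16 SPAD array: uniform background counts plus a laser spot
# covering four pixels (rates are illustrative, in counts per frame)
BG_RATE, SIG_RATE, FRAMES = 5.0, 20.0, 50
counts = rng.poisson(BG_RATE * FRAMES, size=(16, 16))
spot = {(7, 7), (7, 8), (8, 7), (8, 8)}
for r, c in spot:
    counts[r, c] += rng.poisson(SIG_RATE * FRAMES)

# Flag pixels whose accumulated counts are implausible under background
# alone: threshold = bg mean + 5 sigma (Poisson: sigma = sqrt(mean))
thresh = BG_RATE * FRAMES + 5 * np.sqrt(BG_RATE * FRAMES)
detected = {tuple(idx) for idx in np.argwhere(counts > thresh)}
print(sorted(detected))
```

Once the illuminated pixels are found, the shared timing electronics can be steered to them, which is the TDC-sharing idea in the title.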

Friday, April 23, 2021

Characterization and Modeling of Image Sensor Helps to Achieve Lossless Image Compression

EETimes-Europe: Swiss startup Dotphoton claims to achieve 10x lossless image compression:

"Dotphoton’s Jetraw software starts before the image is created and uses the information of the image sensor’s noise performance to efficiently compress the image data. The roots of the image compression date back to research questions in quantum physics, for example, whether effects such as quantum entanglement can be made visible to the human eye.

Bruno Sanguinetti, CTO and co-founder of Dotphoton, explained, “Experimental setups with CCD/CMOS sensors for the quantification of the entropy and the relation between signal and noise showed that even with excellent sensors, the largest part of the entropy consists of noise. With a 16-bit sensor, we typically detected 9-bit entropy, which could be referred back solely to noise, and only 1 bit that came from the signal. It is a finding from our observations that good sensors virtually ‘zoom’ into the noise.”

Dotphoton showed that, with their compression method, image files are not affected by loss of information even with compression by a factor of ten. In concrete terms, Dotphoton uses information about the sensor’s own temporal and spatial noise."


The company's Dropbox comparison document dated by January 2020 benchmarks its DPCV algorithm vs other approaches:


"We rely on the calibration data present in the Dotphoton files to improve SNR without introducing artefacts or affecting delicate signals.

— per-pixel calibration and linearization. Even for high-end cameras, each pixel may have a different efficiency, offset and noise structure. Our advanced calibration method perfectly captures this information, which then allows both to correct sensor defects and to better evaluate whether an observed feature arises from signal or from noise.

— quantitatively-accurate amplitude noise reduction. Many de-noising techniques produce visually stunning results but affect the quantitative properties of an image. Our noise reduction methods, on the other hand, are targeted at scientific applications, where the quantitative properties of an image are important and where producing no artefacts is critical.

— color noise reduction using amplitude data and spectral calibration data

Dotphoton CV is a lossless image compression algorithm; however, it relies on data having been pre-processed in-camera or in the driver. This pre-processing does modify the original raw data, and therefore introduces a small amount of loss. Pre-processing is adapted to the specific camera model: noise sources that can be corrected are corrected, and noise from sources that cannot be corrected is replaced with noise that has the same statistical distribution, so the output data presents no artefacts or bias. The maximum loss introduced by pre-processing is equivalent to having taken an image with an ’ISO’ setting 20% higher. In some situations (e.g. inhomogeneous sensors) correcting systematic errors may result in a Dotphoton-compressed image that is of higher quality than the original raw data."
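The underlying idea, replacing incompressible noise bits with a statistically equivalent but compact representation, can be illustrated with a variance-stabilizing (Anscombe) transform. This is a generic sketch of noise-aware quantization for shot-noise-limited data, not Dotphoton's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Shot-noise-limited raw data: at signal level S the noise std is sqrt(S),
# so most of the low-order bits of a 16-bit sample are pure noise entropy.
signal = rng.uniform(100, 40000, size=100_000)
raw = rng.poisson(signal).astype(np.float64)

# Anscombe transform: the noise std becomes ~0.5 everywhere, so rounding
# the transformed values to unit steps discards mostly noise while the
# signal stays well within its own noise floor.
transformed = 2.0 * np.sqrt(raw + 3.0 / 8.0)
quantized = np.round(transformed)
recovered = (quantized / 2.0) ** 2 - 3.0 / 8.0

extra = np.std(recovered - raw)          # error added by quantization
shot = np.mean(np.sqrt(signal))          # typical shot-noise level
codes = len(np.unique(quantized))
print(f"{codes} codes instead of 65536; added error {extra:.1f} DN "
      f"vs ~{shot:.0f} DN shot noise")
```

The quantized values occupy a few hundred codes instead of 65536, so a standard entropy coder can then compress them heavily, while the added error stays a fraction of the shot noise, consistent in spirit with the "ISO 20% higher" loss bound quoted above.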

Xilinx Releases AI Vision Starter Kit, Pinnacle Adds HDR ISP

BusinessWire: Xilinx  introduces the Kria portfolio of adaptive system-on-modules (SOMs), production-ready small form factor embedded boards that enable rapid deployment in edge-based vision AI applications. Coupled with a complete software stack and pre-built, production-grade accelerated applications, Kria adaptive SOMs are expected to become a new method of bringing adaptive computing to vision AI and software developers.

The Kria KV260 Vision AI Starter Kit is priced at $199. When ready to move to deployment, customers can seamlessly transition to the Kria K26 production SOM, available in commercial and industrial variants priced at $250 and $350, respectively.


PRWeb:  Pinnacle Imaging Systems, a developer of ISP and HDR video solutions, offers its Denali 3.0 HDR ISP along with a new HDR sensor module for the just-launched Xilinx Kria K26 SOM and KV260 Vision AI Starter Kit.

Demosaicing for Quad-Bayer Sensors

An arxiv.org paper "Beyond Joint Demosaicking and Denoising: An Image Processing Pipeline for a Pixel-bin Image Sensor" by SMA Sharif, Rizwan Ali Naqvi, and Mithun Biswas from Rigel-IT, Bangladesh, and Sejong University, South Korea, applies a CNN to demosaic quad-Bayer images:

"Pixel binning is considered one of the most prominent solutions to tackle the hardware limitation of smartphone cameras. Despite numerous advantages, such an image sensor has to appropriate an artefact-prone non-Bayer colour filter array (CFA) to enable the binning capability. Contrarily, performing essential image signal processing (ISP) tasks like demosaicking and denoising, explicitly with such CFA patterns, makes the reconstruction process notably complicated. In this paper, we tackle the challenges of joint demosaicing and denoising (JDD) on such an image sensor by introducing a novel learning-based method. The proposed method leverages the depth and spatial attention in a deep network. The proposed network is guided by a multi-term objective function, including two novel perceptual losses to produce visually plausible images. On top of that, we stretch the proposed image processing pipeline to comprehensively reconstruct and enhance the images captured with a smartphone camera, which uses pixel binning techniques. The experimental results illustrate that the proposed method can outperform the existing methods by a noticeable margin in qualitative and quantitative comparisons. Code available: https://github.com/sharif-apu/BJDD_CVPR21."