Monday, December 10, 2018

Gigajot Raises $4m

Gigajot has filed a form disclosing the completion of a $4m fundraising round. Not much more information has been revealed so far.

CEA-Leti Curved Sensor Technology Flyer

CEA-Leti's Pixcurve flyer promotes the technology:

"Pixcurve is a proof of concept, introducing Leti’s latest curving technology for various optical components, such as visible imagers, µdisplays, bolometers and IR detectors. This technology addresses companies’ growing interest in a range of curved optical components that will help them achieve higher levels of performance and compensation for optical aberrations, while minimizing the vignetting effect and enhancing field of view. It makes cameras, imagers or microdisplays even more compact and easy to assemble."

Facial Recognition Controversy

The Verge, Independent, Seattle Times: The AI Now Institute, whose members include Microsoft, Google, and New York University employees, publishes its "AI Now Report 2018" discussing the dangers facial recognition poses for society. The group calls on governments to regulate the use of AI and facial recognition technologies before they can undermine basic civil liberties.

Microsoft President Brad Smith posted a similar message in the company's blog:

"We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

After substantial discussion and review, we have decided to adopt six principles to manage these issues at Microsoft. We are sharing these principles now, with a commitment and plans to implement them by the end of the first quarter in 2019.
"


ACLU: The Department of Homeland Security has published details of a U.S. Secret Service plan to test the use of facial recognition in and around the White House. The ultimate goal seems to be to give the Secret Service the ability to track “subjects of interest” in public spaces.

Voice of America, KCRA: Atlanta International Airport, the Delta Air Lines hub, has become the first in the US to let passengers use facial recognition technology to board flights. After the first check-in, passengers can also use face recognition to pass through security and to board the plane. Delta says the system eliminates the need for travelers to present their passport up to four times during the usual check-in process.


Singapore's Changi Airport, Amsterdam's Schiphol, and Aruba International Airport already offer biometric check-in and boarding at some gates and terminals. Japan is rolling out facial recognition boarding at several of its airports this year. China's Hongqiao International Airport is also using facial recognition for security screening. London's Heathrow plans to start testing an end-to-end facial recognition program next year.

FedScoop: A recent NIST study says that facial recognition accuracy has improved dramatically over the last 5 years:

"The technology has undergone an “industrial revolution” that’s made certain algorithms about 20 times better at searching databases and finding matches."

Sunday, December 09, 2018

CCD Dark Current Might Have Traces of Dark Matter

In the past, pixel dark current has been used for various purposes: identifying traps and defects (dark current spectroscopy), generating random numbers, measuring temperature, forensic picture analysis, random telegraph noise analysis, etc. One might think there is nothing else left to find in it. However, there appears to be one more thing: a recent Fermilab paper examines CCD dark current for traces of Dark Matter.

Arxiv.org paper "SENSEI: First Direct-Detection Constraints on sub-GeV Dark Matter from a Surface Run" by Michael Crisler, Rouven Essig, Juan Estrada, Guillermo Fernandez, Javier Tiffenberg, Miguel Sofo Haro, Tomer Volansky, and Tien-Tien Yu:

"The Sub-Electron-Noise Skipper CCD Experimental Instrument (SENSEI) uses the recently developed Skipper-CCD technology to search for electron recoils from the interaction of sub-GeV dark matter particles with electrons in silicon. We report first results from a prototype SENSEI detector, which collected 0.019 gram-days of commissioning data above ground at Fermi National Accelerator Laboratory. These commissioning data are sufficient to set new direct-detection constraints for dark matter particles with masses between ~500 keV and 4 MeV."

Yonit Hochberg's (Hebrew University of Jerusalem) review "Direct Detection of Dark Matter" explains the detection principle (DM stands for Dark Matter in the slides):


The Skipper CCD used in this experiment was presented in the 2017 Arxiv.org paper "Single-electron and single-photon sensitivity with a silicon Skipper CCD" by Javier Tiffenberg, Miguel Sofo-Haro, Alex Drlica-Wagner, Rouven Essig, Yann Guardincerri, Steve Holland, Tomer Volansky, and Tien-Tien Yu. The group achieved impressive performance, such as a pixel dark current of 1 electron in 3 years:

"We have developed a non-destructive readout system that uses a floating-gate amplifier on a thick, fully depleted charge coupled device (CCD) to achieve ultra-low readout noise of 0.068 e- rms/pix. This is the first time that discrete sub-electron readout noise has been achieved reproducibly over millions of pixels on a stable, large-area detector. This allows the precise counting of the number of electrons in each pixel, ranging from pixels with 0 electrons to more than 1500 electrons. The resulting CCD detector is thus an ultra-sensitive calorimeter. It is also capable of counting single photons in the optical and near-infrared regime. Implementing this innovative non-destructive readout system has a negligible impact on CCD design and fabrication, and there are nearly immediate scientific applications. As a particle detector, this CCD will have unprecedented sensitivity to low-mass dark matter particles and coherent neutrino-nucleus scattering, while astronomical applications include future direct imaging and spectroscopy of exoplanets."

Saturday, December 08, 2018

Ambient Light Rejection in SPAD-based LiDAR

MDPI Special Issue The International SPAD Sensor Workshop publishes the paper "Background Light Rejection in SPAD-Based LiDAR Sensors by Adaptive Photon Coincidence Detection" by Maik Beer, Jan F. Haase, Jennifer Ruskowski, and Rainer Kokozinski from the Fraunhofer Institute for Microelectronic Circuits and Systems, Duisburg, and the University of Duisburg-Essen.

"In this paper we present a novel method based on the adaptive adjustment of photon coincidence detection to suppress the background light and simultaneously improve the dynamic range. A major disadvantage of fixed parameter coincidence detection is the increased dynamic range of the resulting event rate, allowing good measurement performance only at a specific target reflectance. To overcome this limitation we have implemented adaptive photon coincidence detection. In this technique the parameters of the photon coincidence detection are adjusted to the actual measured background light intensity, giving a reduction of the event rate dynamic range and allowing the perception of high dynamic scenes. We present a 192 × 2 pixel CMOS SPAD-based LiDAR sensor utilizing this technique and accompanying outdoor measurements showing the capability of it. In this sensor adaptive photon coincidence detection improves the dynamic range of the measureable target reflectance by over 40 dB."

Friday, December 07, 2018

Smartsens Unveils NIR-Enhanced 4MP Sensor

PRNewswire: SmartSens announces the SC4210, a 1/1.8-inch 4MP BSI sensor and the latest addition to its SmartClarity product line for security and surveillance applications.

With the rapid development of artificial intelligence and IoT, application scenarios such as smart security, smart industry and intelligent driving have become increasingly popular. For example, in the China market alone, 20 million sets of intelligent monitoring systems equipped with AI technology have been installed.

According to SmartSens, "Since SmartSens' beginning, we have always taken customer needs and industry application development as the core of our technological innovation. We launched SC4210 to meet the industry's evolving needs for emerging applications in key areas such as AI and IoT. We leveraged our extensive product development experience in CMOS image sensors to develop SC4210 -- an innovative image sensing product based on advanced BSI pixel technology."

The new SC4210 is said to have many technical advantages, including:

  • Large-size pixel structure and high sensitivity: SC4210 uses a BSI pixel process to achieve a 3.0μm large-size pixel structure and a 1/1.8″ optical size, with sensitivity of up to 4800mV/Lux*s and a maximum signal-to-noise ratio of 43dB.
  • Ultra-low-light performance: with SmartSens' unique pixel architecture, SC4210 is leading the market with SNR1s of 0.21 lux.
  • Wide dynamic range: SC4210 supports ultra-high dynamic range (over 100dB).
  • NIR enhancement: SC4210 nearly doubles the NIR QE in the 850nm to 940nm band.
  • High frame rate: SC4210 can run at up to 60fps at full 4MP resolution.

The SC4210 CMOS sensor is aimed at applications such as professional security surveillance cameras, face recognition smart cameras, industrial cameras, high-end traffic recorders, motion cameras and video teleconferencing systems. SC4210 is now in mass production.


On an unrelated note, Chinese site wxwenku reports that the SmartSens 4µm BSI global shutter pixel in the SC130GS sensor achieves 40% QE at 940nm wavelength:

Imec PbS QD Photodiodes for CMOS Integration

IEEE Sensors Council video "NIR Sensors Based on Photolithographically Patterned PbS QD Photodiodes for CMOS Integration" by Epimitheas Georgitzikis, Pawel Malinowski, Luis Moreno Hagelsieb, Vladimir Pejovic, Griet Uytterhoeven, Stefano Guerrieri, Andreas Süss, Celso Cavaco, Konstantinos Chatzinis, Jorick Maes, Zeger Hens, Paul Heremans, and David Cheyns from Imec:

"Colloidal quantum dots based on lead sulfide are very attractive materials for the realization of novel infrared image sensors combining low cost synthesis and processing, deposition over large area and on any substrate. This work describes the building blocks that will enable the integration of QD photodiodes on top of a CMOS ROIC. Photodetectors with high detectivity and low dark current are demonstrated. Furthermore, photolithographic patterning of the thin-film stack is introduced for the first time, showing the feasibility of high pixel pitch, opening the way towards high resolution monolithic infrared imagers."

Linear SPAD Array for HDR Imaging

IEEE Sensors Council publishes the presentation "A 128×1 Pixels, High Dynamic Range SPAD Imager in 0.18 µm CMOS Technology" by Cheng Mao, Xiangshun Kong, Haowen Ma, Limin Zhang, Feng Yan, and Xiaofeng Bu from Nanjing University, China.

"SPAD imager is proposed to achieve high dynamic range image. A SPAD chip with 128×1 pixels in 0.18 µm CMOS process is presented to show the feasibility. The chip design and the image method are detailed illustrated. The experiment results show that the SPAD imager can achieve 89 dB high dynamic range image, which is about 20 dB higher than that using CCD and CMOS image sensors, showing the superiority of the proposed method."

Thursday, December 06, 2018

Continuation: Deep Neural Network Search for Better CFA and Demosaicing Algorithm

Thanks to Offline Dreams for mentioning another machine-learning CFA pattern optimization in a comment on yesterday's post. The paper "Deep Joint Design of Color Filter Arrays and Demosaicing" by Bernardo Henz, Eduardo S. L. Gastal, and Manuel M. Oliveira from the Brazilian Instituto de Informática – UFRGS differs from the work in the previous post in a number of ways:
  • both noisy and noiseless cases are explored
  • the CFA pattern is optimized together with the demosaicing algorithm
  • the CFA filter colors themselves are part of the optimization

"We present a convolutional neural network architecture for performing joint design of color filter array (CFA) patterns and demosaicing. Our generic model allows the training of CFAs of arbitrary sizes, optimizing each color filter over the entire RGB color space. The patterns and algorithms produced by our method provide high-quality color reconstructions. We demonstrate the effectiveness of our approach by showing that its results achieve higher PSNR than the ones obtained with state-of-the-art techniques on all standard demosaicing datasets, both for noise-free and noisy scenarios. Our method can also be used to obtain demosaicing strategies for pre-defined CFAs, such as the Bayer pattern, for which our results also surpass even the demosaicing algorithms specifically designed for such a pattern."


The machine-learning optimization picked patterns quite different from the Bayer CFA, both in color and in size:

Pixon Unveils ExDRA Software

PRNewswire: Pixon Imaging announces its new Extended Dynamic Range Architecture (ExDRA) - a software technique that improves low-light-level imaging in cell phone and other CMOS/CCD-based cameras. The technique utilizes charge binning, combining a full-megapixel image of bright objects with a higher-sensitivity binned image of faint objects, all performed in one frame time. ExDRA is said to deliver low-light performance an order of magnitude (or more) better than current methods.

For example, Google's new Pixel 3 "Night Sight" employs up to 15 images to improve low-light sensitivity. This mimics the sensitivity of a longer exposure, but requires stationary objects and cannot approach the sensitivity of the algorithmically simpler ExDRA. Alternative performance-improving methods employed by other cell phone manufacturers require the use of multiple cameras, adding hardware costs and software complexity.

"The ExDRA technique captures both the high-resolution and high-sensitivity images simultaneously from a single CMOS/CCD sensor," reports Rick Puetter, Pixon's Chief Scientist. "This makes it possible to produce images and videos of scenes in low light with exceptional clarity and uniformity."

ExDRA is implemented entirely in software and is suited for integration into mobile phone cameras. It can also be implemented as an app, with little lead time, for use with existing devices.
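The description suggests merging a full-resolution readout, which preserves detail in bright areas, with a 2x2 charge-binned readout, which collects roughly four times the charge per output pixel in dim areas. The sketch below shows one speculative way such a merge could look; the binning factor, threshold, and blending rule are guesses, not Pixon's actual algorithm.

```python
import numpy as np

def merge_exdra_like(full_res, binned, dark_threshold=0.1):
    """Blend a full-resolution image with an upsampled 2x2-binned image:
    keep full resolution where the signal is strong, fall back to the
    binned (higher charge per output pixel) image in dark regions.

    full_res: (H, W) normalized image
    binned:   (H//2, W//2) normalized image from 2x2 charge binning
    """
    # nearest-neighbor upsample of the binned image back to full size
    binned_up = np.repeat(np.repeat(binned, 2, axis=0), 2, axis=1)
    # smooth blending weight: 0 in bright areas, 1 in dark areas
    w = np.clip((dark_threshold - full_res) / dark_threshold, 0.0, 1.0)
    return (1 - w) * full_res + w * binned_up
```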

CEA-Leti Demos Curved Sensor Technology

BusinessWire: CEA-Leti demonstrates its curved sensor technology, called Pixcurve, which requires fewer lens elements in digital cameras, shrinks camera size by half, and lowers costs – while improving image quality.

Pixcurve advantages:
  • Form Factor
    Reducing the number of lens elements in digital cameras from ten to six reduces the size of the final compound lens by 60%. The overall length of the optical system is also shorter.
  • Improved Performance
    Curved image sensors reduce—and in some cases completely eliminate—optical aberrations like curvature of field and the vignetting effect. They also deliver increased brightness and a wider field of view.
  • Cost
    Reducing the number of lens elements and eliminating aspheric lens elements, which will be unnecessary, will lower the cost of systems integrating Pixcurve technology.
  • Assembly
    Fewer components means quicker and easier assembly—a major advantage for manufacturers.

Sick Sensor Technology

German 3D laser scanner maker Sick reveals that it's using a custom-designed image sensor in its cameras:


Thanks to TL for the pointer!

Qualcomm Snapdragon 855 Enhances Camera Features

Qualcomm announces its 5G-ready 7nm Snapdragon 855 platform featuring enhanced imaging functionality:

"Snapdragon 855 sets new standards for capturing stunning photos and videos. The new Qualcomm Spectra™ 380 ISP integrates numerous hardware accelerated computer vision (CV) capabilities, enabling the world’s first announced CV-ISP to provide cutting-edge computational photography and video capture features while at the same time offering up to 4x power savings. The CV-ISP includes hardware-based depth sensing which enables video capture, object classification, and object segmentation all in real-time in 4K HDR at 60fps. This means a user can capture a video and accurately replace selected objects or backgrounds in the scene in real-time all with 4K HDR resolution using over 1 billion shades of color. Even further, the new Qualcomm Spectra 380 ISP is the first announced ISP to support video recording using HDR10+, so the more than 1 billion shades of color can be captured with exceptional contrast and visual brilliance. Finally, in order to efficiently store this amazing content, Snapdragon 855 adds hardware acceleration for HEIF file format encoding, reducing file sizes by 50% for efficiently saving and sending user generated content."

Qualcomm Spectra 380 Image Signal Processor:

  • Dual 14-bit CV-ISPs; 22MP @30 fps concurrent dual cameras; 48MP @ 30 fps single camera
  • Hardware CV functions including object detection & tracking (Histogram of Oriented Gradients, Harris Corner Detection, Normalized Cross Correlation, Linear classification and optical flow) and stereo depth processing
  • Advanced HDR solution including improved zzHDR and 3-exposure Quad Color Filter Array (QCFA) HDR
  • 4K60 HDR video capture (HDR10, HDR10+ and HLG) with Portrait Mode (bokeh), 10-bit color depth and Rec. 2020 color gamut
  • Hardware-based Multi-Frame Noise Reduction (MFNR) for snapshot and Motion Compensated Temporal noise Filtering (MCTF) for video
  • Hardware-based Electronic Image Stabilization (EIS) solution within camera subsystem
  • A new modular ISP design with more flexibility to tap in and out of the imaging pipeline both in the RAW and YUV pixel domains
  • High frame rate capture for slow motion video (720p @ 480fps)
  • HEIF photo capture, HEVC (H.265) video capture


Update: Qualcomm publishes its SD855 camera presentation. A few additional slides from it:

Wednesday, December 05, 2018

Unconstrained Machine Learning Search Confirms that Bayer CFA is Optimal

Occipital HW Leader Evan Fletcher did interesting work searching for an optimal CFA pattern among arbitrarily large patterns in an unconstrained machine-learning search. The result might surprise companies proposing alternative CFA mosaics, such as Fujifilm:

"After training for ~24 hours, the learned color filter array looks quite familiar:


The network seems to have learned something very similar to a RGGB Bayer pattern, complete with 2×2 repetition and the pentile arrangement of the green pixels! This was quite surprising, especially given that there is no spatial constraint on arrangement or repetition in this network design whatsoever.

This optimization appears to have independently confirmed the venerable Bayer pattern as a good choice for a mosaic function – it was fascinating to see a familiar pattern arise from an unconstrained optimization.
"

Apple and Samsung to Integrate ToF Cameras in their 2019 Phones

PhoneArena quotes the WSJ and Korean media claiming that next year's Samsung Galaxy S10 5G smartphone is expected to have a rear ToF camera. Later in the year, Samsung A-series smartphones will feature ToF cameras too. Samsung's ToF module suppliers are said to be SEMCO and Patron.

Next year's Apple iPhone is to feature a rear ToF camera as well, said to be sourced from LG.

Huawei Smartphone to use Sony ToF Sensor for 3D Imaging

Bloomberg reports that an upcoming Huawei smartphone may incorporate a Sony ToF sensor:

"The phone, code-named Princeton internally, will be announced this month and go on sale within a few weeks, according to one of the people who requested anonymity discussing private plans. The technology uses sensors developed by Sony Corp. that can accurately measure distances by bouncing light off surfaces.

Besides generating pictures that can be viewed from numerous angles, Huawei’s new camera can create 3-D models of people and objects that can be used by augmented-reality apps, according to one of the people. The new camera will also let developers control apps and games in new ways, such as hand gestures, the person said, who added that some of the details may change as developers work with the technology.
"

Tuesday, December 04, 2018

Gpixel Unveils Full-Frame 51MP 80fps Global Shutter Sensor

Gpixel announces a 51MP, 8424 x 6032 resolution, 35 mm full-frame global shutter image sensor. The GMAX4651 is capable of capturing full resolution images at frame rates of up to 80 fps in standard 12-bit read out mode and 40 fps in dual gain HDR mode.

With more than 83dB DR, 1/50,000 shutter efficiency and wide angular response, the GMAX4651 is aimed at advanced optical inspection and machine vision applications (Automated Optical Inspection (AOI) and Flat Panel Display (FPD) inspection) as well as high-end 8k/4k video broadcast and aerial imaging applications.

“The GMAX4651 is a really unique addition to our GMAX series, outperforming existing CCD solutions on all performance levels,” commented Wim Wuyts, Chief Commercial Officer of Gpixel. “It combines the dual gain HDR used in our successful GSENSE sensors, with excellent global shutter pixel technology and unprecedented frame rates. All together this will set a new industry standard for high-end industrial inspection and the video industry.”

Based on a 4.6 µm charge domain global shutter technology with true CDS, the GMAX4651 delivers more than 25 ke- linear FWC (up to 50 ke- when using 1x2 binning) and ultra-low readout noise down to 1.5 e- RMS. In dual gain HDR mode, where the sensor reads out the same exposure with two different gain settings for off-chip HDR reconstruction, DR can reach 83dB.
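One straightforward way to do the off-chip reconstruction in dual gain HDR mode is to keep the low-noise high-gain samples where they are below saturation and fall back to the rescaled low-gain samples elsewhere. Below is a minimal sketch under that assumption; the gain ratio and saturation level are placeholder values, not GMAX4651 specifics.

```python
import numpy as np

def merge_dual_gain(high_gain_dn, low_gain_dn, gain_ratio=16.0, sat_dn=3800):
    """Merge two readouts of the same exposure taken at different gains:
    use the low-noise high-gain samples where they are not saturated,
    otherwise the low-gain samples rescaled into the same unit system.

    high_gain_dn, low_gain_dn: raw 12-bit frames of the same exposure
    gain_ratio:                high gain / low gain (placeholder)
    sat_dn:                    DN level above which the high gain is clipped
    """
    high = high_gain_dn.astype(float)
    low = low_gain_dn.astype(float) * gain_ratio   # bring to high-gain scale
    return np.where(high_gain_dn < sat_dn, high, low)
```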

GMAX4651 mono sensor sample and evaluation kit will be commercially available in March 2019. Color samples will be made available in Q2 2019 and volume production is targeted for Q3 2019.

Sheba Microsystems Promises MEMS AF, Zoom, and OIS

Toronto, Canada-based startup Sheba Microsystems promises MEMS-based AF, zoom, and OIS:





Some demo videos are published on the company's YouTube channel.

Monday, December 03, 2018

Event-Driven Sensor in Structured Light 3D Camera

Arxiv.org paper "Event-Based Structured Light for Depth Reconstruction using Frequency Tagged Light Patterns" by T. Leroux, S.-H. Ieng, and R. Benosman from University of Pittsburgh, Carnegie Mellon University, and Sorbonne University proposes to use (apparently) Prophesee sensor for a structured light 3D camera. Although the paper appears a bit rushed to publication and needs some proofreading, the presented ideas look nice:

"This paper presents a new method for 3D depth estimation using the output of an asynchronous time driven image sensor. In association with a high speed Digital Light Processing projection system, our method achieves real-time reconstruction of 3D points cloud, up to several hundreds of hertz. Unlike state of the art methodology, we introduce a method that relies on the use of frequency tagged light pattern that make use of the high temporal resolution of event based sensors. This approach eases matching as each pattern unique frequency allow for any easy matching between displayed patterns and the event based sensor. Results are shown on the real scenes."

Orbbec 3D Depth Sensing in Oppo Find X

SystemPlus publishes a teardown report of the Orbbec face recognition solution in the Oppo Find X smartphone:

"Located in the Find X’s front side, around the speaker, the 3D system is packaged in one metal enclosure. The system features a dot projector, a flood illuminator, a red/green/blue camera and a NIR camera sensor. An additional SoC component is soldered on the main board to process the signal from the latter devices.

The system uses standard components found on the market. That includes a GS image sensor featuring 3µm size pixels and standard resolution of 1 megapixel and a vertical cavity surface emitting laser (VCSEL). The system is therefore very cost efficient compared to other solutions. The camera and dot projector module assembly uses standard wire bonding to connect the sensor or the VCSEL dies, along with an optical module comprising four lenses for both modules.
"

ST Officially Announces 2-Memories GS Sensors for in-Cabin Cameras

After a few months of presentations at different conferences and exhibitions, STMicroelectronics has officially unveiled two new automotive global shutter image sensors, the VG5661 and VG5761, aimed at driver monitoring.

ST’s VG5661 and VG5761 are 1.6MP and 2.3MP automotive global shutter sensors, respectively. ST has created a unique automotive global shutter pixel with two memory cells while keeping a small pixel size of 3.2μm. The two memory zones allow double-image storage, enabling linear HDR up to 98dB or background removal without lag effects and with no need for additional processing by the host system. This offloads the host processor and reduces instances of correction artefacts. In addition, HDR and high MTF in NIR further minimize interference from natural light sources.
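For the background-removal use case, the two in-pixel memories can hold back-to-back captures with the NIR illuminator on and off, so the ambient contribution can simply be subtracted without motion lag. A minimal sketch of that subtraction follows; the frame names and data types are assumptions.

```python
import numpy as np

def remove_background(frame_led_on, frame_led_off):
    """Subtract the ambient-only capture from the NIR-illuminated capture.
    Both frames come from the two in-pixel memories of the same pixel,
    captured back-to-back, so moving subjects do not smear between them."""
    diff = frame_led_on.astype(np.int32) - frame_led_off.astype(np.int32)
    return np.clip(diff, 0, None).astype(np.uint16)
```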

The VG5661 and VG5761 are supplied in standard BGA packages, or as bare die for direct integration in automotive OEM systems produced in high volumes. They are qualified to AEC-Q100 grade 2 and include complex safety-integrity features as required for an ASIL-B camera system in accordance with the automotive safety standard ISO 26262.

Sunday, December 02, 2018

Sony Polarsens Sensors

Lucid Vision Labs presentation has some data on Sony polarization sensors:

Photoneo Presentation

Photoneo presentation at Vision Show in Stuttgart, Germany, on Nov. 6-8, 2018 shows its Parallel Structured Light 3D camera concepts: