Monday, May 20, 2019

Omnivision Announces 2.8um HDR DCG Split Pixel with LED Flicker Mitigation

PRNewswire: OmniVision announces the OX01D10, a 1MP image sensor for automotive applications. The sensor combines split-pixel and dual conversion gain (DCG) technology to deliver artifact-free motion capture, HDR of up to 120dB, and LED flicker mitigation (LFM).

"The OX01D10 delivers low power and high performance in a small form factor," said Andy Hanvey, automotive marketing director at OmniVision. "We provide the industry's leading LFM performance over the full automotive temperature range, which meets the needs of OEMs that are increasingly requiring cameras to mitigate the flicker from LED lighting in vehicles, signs, buildings and a wide variety of other outdoor illumination."

The OX01D10 consumes less than 200mW at 30fps, includes advanced ASIL features, and delivers 120dB HDR without LFM (110dB in LFM mode). AEC-Q100 Grade 2 certified samples and evaluation kits are available now.
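For a sense of scale, the 120dB figure corresponds to a brightest-to-darkest signal ratio of one million. A quick back-of-the-envelope check in Python (our illustration, not from the announcement; the numbers are hypothetical):

import math

def dynamic_range_db(signal_max, signal_min):
    # Dynamic range in dB: 20 * log10(brightest / darkest resolvable signal)
    return 20 * math.log10(signal_max / signal_min)

# 120 dB implies a max/min signal ratio of 10^6:
print(10 ** (120 / 20))            # 1000000.0
print(dynamic_range_db(1e6, 1.0))  # 120.0

# A split pixel pairs a large, sensitive photodiode with a small,
# hard-to-saturate one, and DCG adds high- and low-gain readouts of each;
# stitching those responses is what stretches the ratio toward 10^6.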

ST Imaging Roadmap

ST's 2019 Capital Markets Day brings an update on the company's imaging business:

Sunday, May 19, 2019

Zoom To Learn, Learn To Zoom

Arxiv.org paper "Zoom To Learn, Learn To Zoom" by Xuaner Cecilia Zhang, Qifeng Chen, Ren Ng, and Vladlen Koltun from UC Berkeley, HKUST, and Intel Labs claims a significant improvement over earlier digital zoom algorithms:

"This paper shows that when applying machine learning to digital zoom for photography, it is beneficial to use real, RAW sensor data for training. Existing learning-based super-resolution methods do not use real sensor data, instead operating on RGB images. In practice, these approaches result in loss of detail and accuracy in their digitally zoomed output when zooming in on distant image regions. We also show that synthesizing sensor data by resampling high-resolution RGB images is an oversimplified approximation of real sensor data and noise, resulting in worse image quality. The key barrier to using real sensor data for training is that ground truth high-resolution imagery is missing. We show how to obtain the ground-truth data with optically zoomed images and contribute a dataset, SR-RAW, for real-world computational zoom. We use SR-RAW to train a deep network with a novel contextual bilateral loss (CoBi) that delivers critical robustness to mild misalignment in input-output image pairs. The trained network achieves state-of-the-art performance in 4X and 8X computational zoom."

Oxford University Thesis on Single-Slope ADCs

University of Oxford PhD thesis "Investigations of time-interpolated single-slope analog-to-digital converters for CMOS image sensors" by Deyan Levski explores time stretching and other concepts for improving SS-ADC resolution and speed:

"The focus of the presented investigations here is to shed light on methods in Time-to-Digital Converter interpolation of single-slope ADCs. By using high-factor time-interpolation, the resolution of single-slope converters can be increased without sacrificing conversion time or power.

This work emphasizes solutions for improving multiphase clock interpolation schemes, following an all-digital design paradigm. Presented is a digital calibration scheme which allows complete elimination of analog clock generation blocks, such as PLLs or DLLs, in Flash TDC-interpolated single-slope converters."

Saturday, May 18, 2019

Oxford University Thesis on Log Sensors

Oxford University, UK, publishes a PhD thesis "Integrating logarithmic wide dynamic range CMOS image sensors" by Mus'ab B Shaharom:

"Conventional CMOS image sensors with a logarithmic response attempt to address the limited dynamic range of the linear digital image sensors by exploiting the subthreshold operation of a transistor in a pixel. This results in CMOS pixels that are able to capture light intensities of more than six decades (120 dB). However, the approach comes at the expense of high fixed pattern noise (FPN) and slow response.

The work presented in this thesis describes a five-transistor (5T), all-nMOS pixel architecture that aims to achieve wide dynamic range. This feature is obtained using a time-varying reference voltage that is applied to one of the transistors of the pixel. The reference voltage varies in a logarithmic fashion in order to modulate the effective integration time of the pixel."

Friday, May 17, 2019

Assorted News: Dialog, Nissan, San Francisco, NIT

Dialog Semiconductor announces its Configurable Mixed-signal Integrated Circuit (CMIC) device with industry-leading LDO regulator performance, the SLG51000. The SLG51000 features high PSRR and low output voltage noise and is aimed at powering camera and sensor systems.

Features:
  • Highest PSRR of 73dB at 1MHz (see the sketch after this list)
  • Lowest output voltage noise of 10µV rms
  • 7 channels of LDOs
  • Small 1.675mm x 2.075mm WLCSP package
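The headline PSRR number translates into supply-ripple attenuation as follows (a quick sketch; the 10mV input ripple is a hypothetical example):

def psrr_attenuation(psrr_db):
    # Convert PSRR in dB to a linear ripple attenuation factor.
    return 10 ** (psrr_db / 20)

# 73 dB at 1 MHz: supply ripple is attenuated by a factor of ~4470.
att = psrr_attenuation(73)
print(att)  # ~4466.8

# 10 mV of 1 MHz switching ripple on the input rail would appear
# as roughly 2.2 uV at the LDO output, well below camera-rail budgets.
print(10e-3 / att * 1e6, "uV")  # ~2.24 uV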

Reuters: Nissan joins Tesla in saying that self-driving cars can work with no LiDAR:

"Nissan Motor Co Ltd said on Thursday it would, for now, stick to self-driving technology which uses radar sensors and cameras, avoiding lidar or light-based sensors because of their high cost and limited capabilities.

“At the moment, lidar lacks the capabilities to exceed the capabilities of the latest technology in radar and cameras,” Tetsuya Iijima, general manager of advanced technology development for automated driving, told reporters at Nissan’s headquarters.

“It would be fantastic if lidar technology was at the level that we could use it in our systems, but it’s not. There’s an imbalance between its cost and its capabilities.”"

New York Times, Vox, BBC: San Francisco becomes the first US city to ban the use of face recognition technology by government agencies:

"The action, which came in an 8-to-1 vote by the Board of Supervisors, makes San Francisco the first major American city to block a tool that many police forces are turning to in the search for both small-time criminal suspects and perpetrators of mass carnage."

NIT publishes a video demo of its SWIR HDR camera.

Sony and Microsoft to Cooperate in "Intelligent Image Sensor Solutions"

Reuters: Sony and Microsoft will partner on new innovations to enhance customer experiences in their direct-to-consumer entertainment platforms and AI solutions.

As part of the memorandum of understanding, Sony and Microsoft will explore collaboration in the areas of semiconductors and AI. For semiconductors, this includes potential joint development of new intelligent image sensor solutions. By integrating Sony’s cutting-edge image sensors with Microsoft’s Azure AI technology in a hybrid manner across cloud and edge, as well as solutions that leverage Sony’s semiconductors and Microsoft cloud technology, the companies aim to provide enhanced capabilities for enterprise customers. In terms of AI, the parties will explore incorporation of Microsoft’s advanced AI platform and tools in Sony consumer products, to provide highly intuitive and user-friendly AI experiences.

Going forward, the two companies will share additional information when available.

Photo: Kenichiro Yoshida, President and CEO, Sony, and Satya Nadella, CEO, Microsoft

Update: Nikkei reports that "the two companies will consider combining image sensors from Sony -- which controls half the global market -- with Microsoft AI technology to develop electronic "eyes" for self-driving vehicles."

Thursday, May 16, 2019

MultiVu Raises $7M for 3D FaceID with Single Camera

PRNewswire, CTech: Israel-based MultiVu, developing 3D imaging solutions using a single sensor and deep-learning-derived algorithms, announces the completion of a $7M seed round led by OurCrowd, Cardumen Capital, and Hong Kong-based investment firm Junson Capital. MultiVu will use the funding to complete development of its first sensor product for 3D Face Authentication applications.

MultiVu's 3D camera is based on a single sensor, as opposed to existing solutions that use two sensors and a light projector. The solution produces both still pictures and video streams as needed, and is claimed to be inexpensive, compact, and energy efficient. MultiVu's method is based on state-of-the-art deep learning algorithms, which makes it a 100% passive solution (no illumination), operating as a front-facing camera and recovering depth and RGB data in a single shot.

Doron Nevo, CEO of MultiVu, said: "The technology, which passed the proof-of-concept stage, will bring 3D Face Authentication and affordable 3D imaging to the mobile, automotive, industrial and medical markets. We are excited to be given the opportunity to commercialize this technology."

MultiVu's technology is based on four years of research conducted by David Mendlovic and his team at Tel Aviv University.
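For readers new to the approach, below is a toy single-image depth regressor in PyTorch. It only illustrates the general concept of passive, learned depth from one sensor; MultiVu's actual architecture and training pipeline are not public, so every layer here is a placeholder:

import torch
import torch.nn as nn

class TinyMonoDepth(nn.Module):
    # Toy encoder-decoder: RGB in, one depth value per pixel out.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

# Single shot in, per-pixel depth out: no projector, no second camera.
rgb = torch.rand(1, 3, 128, 128)
print(TinyMonoDepth()(rgb).shape)  # torch.Size([1, 1, 128, 128])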

Light-In-Flight Capture

MDPI Sensors paper "Light-In-Flight Imaging by a Silicon Image Sensor: Toward the Theoretical Highest Frame Rate" by Takeharu Goji Etoh, Tomoo Okinaka, Yasuhide Takano, Kohsei Takehara, Hitoshi Nakano, Kazuhiro Shimonomura, Taeko Ando, Nguyen Ngo, Yoshinari Kamakura, Vu Truong Son Dao, Anh Quang Nguyen, Edoardo Charbon, Chao Zhang, Piet De Moor, Paul Goetschalckx, and Luc Haspeslagh from Kindai University, Ritsumeikan University, Osaka University, Hanoi University of Science and Technology, EPFL, Delft University, and IMEC presents a further improvement in high-frame-rate sensors:

"Light in flight was captured by a single shot of a newly developed backside-illuminated multi-collection-gate image sensor at a frame interval of 10 ns without high-speed gating devices such as a streak camera or post data processes. This paper reports the achievement and further evolution of the image sensor toward the theoretical temporal resolution limit of 11.1 ps derived by the authors. The theoretical analysis revealed the conditions to minimize the temporal resolution. Simulations show that the image sensor designed following the specified conditions and fabricated by existing technology will achieve a frame interval of 50 ps. The sensor, 200 times faster than our latest sensor will innovate advanced analytical apparatuses using time-of-flight or lifetime measurements, such as imaging TOF-MS, FLIM, pulse neutron tomography, PET, LIDAR, and more, beyond these known applications."

Cadence Tensilica Vision Q7 DSP

Cadence expands the high end of its Tensilica Vision DSP IP family with the introduction of the Vision Q7, delivering up to 1.82 tera operations per second (TOPS). Escalating demand for image sensors in edge applications is driving growth of the embedded vision market. Today’s vision use cases demand a mix of both vision and AI operations, and edge SoCs require highly flexible, high-performance vision and AI solutions operating at low power. In addition, edge applications that include an imaging camera demand a vision DSP capable of performing pre- or post-processing before any AI task.
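As a sanity check on the peak number: TOPS is conventionally counted as MACs per cycle times two operations per MAC times clock rate. The breakdown below is hypothetical, since Cadence does not disclose one in the announcement:

def tops(macs_per_cycle, clock_ghz):
    # Peak TOPS = MACs/cycle * 2 ops per MAC * clock (GHz) / 1000
    return macs_per_cycle * 2 * clock_ghz / 1000

# E.g., 512 parallel 8-bit MACs at ~1.78 GHz would hit the quoted peak:
print(tops(512, 1.78))  # ~1.82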