Friday, May 24, 2019

Low-End Laptops to Keep VGA Camera Forever

PRNewswire: "Analysts predict that 20% of embedded notebook cameras will remain at VGA resolutions for the foreseeable future, due to cost considerations in the entry-level market," said Jason Chiang, product marketing manager at OmniVision. "By introducing this new OV0VA10 SoC with advanced pixel technology, OmniVision is increasing the performance and extending the viability of the VGA notebook camera market."

The OmniVision OV0VA10 SoC integrates a VGA image sensor and a signal processor in a single chip-scale package. The SoC's OmniPixel 3-HS pixel allows designers of entry-level, thin-bezel notebooks to incorporate a VGA camera with excellent low-light image capture for applications such as videoconferencing. It also offers 30% lower power consumption than the leading competitor.

The OmniPixel 3-HS pixel also enhances color performance with a symmetric pixel design that eliminates color shading and optimizes the signal-to-noise ratio. The SoC's integrated image sensor has a 1/10-inch optical format and a 2.2um pixel size, enabling a camera module only 4mm in the Y dimension for the latest entry-level notebooks with thinner bezels. The device is manufactured on a 200mm (8-inch) wafer process and is offered in a chip-scale package with a DVP interface.
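
As a quick sanity check on these numbers (simple arithmetic, not part of the announcement), the quoted VGA resolution and 2.2um pixel pitch imply the active-area geometry below:

```python
# Rough geometry implied by the announced VGA resolution and 2.2 um pixel pitch.
# Simple arithmetic for context; the exact die and module dimensions are not published here.
import math

H_PIX, V_PIX = 640, 480        # VGA
PITCH_UM = 2.2                 # quoted pixel size

width_mm = H_PIX * PITCH_UM / 1000.0    # ~1.41 mm
height_mm = V_PIX * PITCH_UM / 1000.0   # ~1.06 mm
diag_mm = math.hypot(width_mm, height_mm)

print(f"active area {width_mm:.2f} x {height_mm:.2f} mm, diagonal {diag_mm:.2f} mm")
# ~1.41 x 1.06 mm with a ~1.76 mm diagonal, consistent with a nominal 1/10" optical
# format, and the ~1 mm sensor height is what allows a ~4 mm module in the Y dimension.
```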

FT: 15% of Sony CIS Sales Go to Huawei

FT discusses the US ban on component sales to Huawei. As Panasonic and Toshiba halt chip shipments to the Chinese company, a question arises about Sony image sensors, which are used in all Huawei flagship phones and are difficult to substitute.

When contacted, Sony executives declined to comment. Jefferies analyst Atul Goyal estimates that Sony's image sensor sales to Huawei are about 15% of the company's total CIS production. Oppo and Vivo use Sony sensors too, so if Huawei loses market share to them, the impact on Sony should be small. However, if Samsung takes over Huawei's share, Sony's sales might suffer.

Aurora Buys Blackmore

Techcrunch, Wired, Forbes: Aurora, an AI self-driving startup that recently raised $530M in its Series B round, acquires Blackmore, a Montana-based, 10-year-old developer of FMCW Doppler LiDAR. The 70 Blackmore employees will join Aurora's 250 engineers and technicians. So far, Blackmore has raised $21.5M in two financing rounds.

“We’re really the pioneers in FM lidar,” Blackmore founder and CEO Randy Reibel says. “The easiest analogy is AM radio versus FM radio. If you think back to the days of broadcast radio, AM radio has a lot of interference. It’s staticky. When you work with an FM lidar system, or an FM radio, you don’t have that interference. That’s a huge deal in this space, especially from a safety perspective. … As we start to pack the roads with hundreds and hundreds of lidars in close proximity, they’re going to crosstalk. With an FM lidar, you don’t have any of those interference challenges.”
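
For background, FMCW ("FM") lidar coherently mixes the returned light with a copy of the transmitted chirp, so only a correctly correlated return produces a beat tone, which is why uncorrelated light from other lidars does not register. The same beat tones also encode range and Doppler velocity. Below is a generic triangular-chirp sketch of that math, not Blackmore's implementation; all parameter values are illustrative assumptions:

```python
# Illustrative FMCW ("FM") lidar math: a generic triangular-chirp model, not
# Blackmore's proprietary implementation. Beat frequencies measured on the
# up- and down-ramps jointly encode range and radial (Doppler) velocity.
C = 3.0e8            # speed of light, m/s
WAVELENGTH = 1.55e-6 # assumed telecom-band laser, m
BANDWIDTH = 1.0e9    # assumed chirp bandwidth, Hz
T_RAMP = 10e-6       # assumed ramp duration, s

def range_and_velocity(f_beat_up, f_beat_down):
    """Recover range (m) and radial velocity (m/s) from the two beat tones."""
    f_range = 0.5 * (f_beat_up + f_beat_down)     # range-induced component
    f_doppler = 0.5 * (f_beat_down - f_beat_up)   # Doppler-induced component
    rng = C * T_RAMP * f_range / (2.0 * BANDWIDTH)
    vel = 0.5 * WAVELENGTH * f_doppler            # positive = approaching
    return rng, vel

# Example: tones of 6.54 MHz / 6.79 MHz correspond to roughly 10 m and 0.1 m/s.
print(range_and_velocity(6.54e6, 6.79e6))
```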

That kind of consolidation will likely continue, Reibel predicted, in part because it’s challenging for LiDAR companies to “go it alone.” AV companies are particularly protective of their tech and opening the door to an outside LiDAR company takes convincing.

Aurora said it is not interested in manufacturing hardware, whether it’s cars or LiDARs. The company will work with automotive Tier 1 suppliers and other partners as it scales.

Thursday, May 23, 2019

Update on OmniVision and SuperPix Acquisition by Will Semi

From Deloitte doc: "In 2018, Shanghai Will Semiconductor acquired a 85% stake in Beijing SuperPix Micro Technology and a 96% stake in Beijing OmniVision Technologies for USD2.18bn to take advantage of the high-end technics of OmniVision and cost control capabilities of SuperPix."

See the previous reports on that here.

Yanta Research on 2018 CIS Market Shares

Korea-based Yanta Research publishes its view on CIS market shares, with Hynix taking the #4 spot on the chart:

Omnivision VGA NIR Camera Module

PRNewswire: OmniVision announces the OVM7251 VGA CameraCubeChip module built on the 3um OmniPixel 3-GS global shutter pixel. The module is available in an 850nm version for AR/VR eye tracking, and a 940nm version for machine vision and 3D sensing in mobile facial authentication.

“Until now, most camera modules for these applications have been built with rolling shutters, which have latency issues. Meanwhile, global shutter modules have previously been too large and expensive,” said Aaron Chiang, marketing director at OmniVision. “Our new OVM7251 overcomes these challenges by providing a cost effective VGA module with global shutter performance in a wafer-level, reflowable form factor.”

The OVM7251’s sleep current consumption is 5mA. The module is available now for sampling and volume production, along with an evaluation kit.

Fast Imaging in the Dark

PixArt and National Chiao Tung University, Taiwan, publish an open-access paper "Fast Imaging in the Dark by using Convolutional Network" by Mian Jhong Chiu, Guo-Zhen Wang, and Jen-Hui Chuang, presented at the 2019 IEEE International Symposium on Circuits and Systems (ISCAS):

"While fast imaging in low-light condition is crucial for surveillance and robot applications, it is still a formidable challenge to resolve the seemingly inevitable high noise level and low photon count issues. A variety of image enhancement methods such as de-blurring and de-noising have been proposed in the past. However, limitations can still be found in these methods under extreme low-light condition. To overcome such difficulty, a learning-based image enhancement approach is proposed in this paper. In order to support the development of learning-based methodology, we collected a new low-lighting dataset (less than 0.1 lux) of raw short-exposure (6.67 ms) images, as well as the corresponding long-exposure reference images. Based on such dataset, we develop a light-weight convolutional network structure which is involved with fewer parameters and has lower computation cost compared with a regular-size network. The presented work is expected to make possible the implementation of more advanced edge devices, and their applications."

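For illustration only, here is a minimal PyTorch-style sketch of the general approach described in the abstract (pack the Bayer raw frame, amplify the short exposure, run a small convolutional network, and output an RGB image); the paper's actual architecture, channel counts and training procedure differ:

```python
# Minimal PyTorch sketch of a light-weight raw-to-RGB low-light enhancement net.
# This illustrates the general approach (pack Bayer raw, small conv body,
# pixel-shuffle output); it is NOT the exact network from the paper.
import torch
import torch.nn as nn

def pack_bayer(raw):
    """Pack an HxW Bayer raw frame into a 4-channel half-resolution tensor."""
    r  = raw[..., 0::2, 0::2]
    g1 = raw[..., 0::2, 1::2]
    g2 = raw[..., 1::2, 0::2]
    b  = raw[..., 1::2, 1::2]
    return torch.stack([r, g1, g2, b], dim=-3)

class TinyEnhanceNet(nn.Module):
    def __init__(self, width=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 12, 3, padding=1),   # 12 = 3 RGB channels x 2x2 upsample
        )
        self.up = nn.PixelShuffle(2)              # back to full resolution

    def forward(self, raw, gain=100.0):
        x = pack_bayer(raw) * gain                # amplify the short exposure
        return self.up(self.body(x))

net = TinyEnhanceNet()
dummy_raw = torch.rand(1, 480, 640)               # stand-in 6.67 ms short-exposure frame
print(net(dummy_raw).shape)                       # torch.Size([1, 3, 480, 640])
```
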
Wednesday, May 22, 2019

Sony Opens Design Centers in Norway, Germany, and Switzerland

Sony posts the following message at LinkedIn: "Sony Semiconductor Solutions, the largest image sensor supplier in the world, is establishing new design centers across Europe (Norway, Germany, Switzerland).

The design centers will focus on the design and the development of image sensors targeting a plethora of markets including mobile, automotive and industrial. The European design teams will work closely with Sony Semiconductor design and manufacturing teams in Japan.

We are starting a wide recruitment campaign all over Europe with the purpose of hiring analogue and digital engineers as well as engineering managers and field application engineers.
"

Thanks to AB for the pointer!

Ge-on-Si SPAD Publications

As noted in comments to my previous post on Heriot-Watt University Ge-on-Si SPADs, there is similar work from Edoardo Charbon's group at EPFL and Delft University. The most recent development is presented in the open-access IEEE TED paper "CMOS-Compatible PureGaB Ge-on-Si APD Pixel Arrays" by Amir Sammak, Mahdi Aminian, Lis K. Nanver, and Edoardo Charbon:

"Pure gallium and pure boron (PureGaB) Ge-on-Si photodiodes were fabricated in a CMOS-compatible process and operated in linear and avalanche mode. Three different pixel geometries with very different area-to-perimeter ratios were investigated in linear arrays of 300 pixels, each with a size of 26 x 26 µm². The processing of anode contacts at the anode perimeters, leaving oxide-covered PureGaB-only light-entrance windows, created perimeter defects that increased the vertical Ge volume but did not deteriorate the diode ideality. The dark current at 1 V reverse bias was below 35 µA/cm² at room temperature and below the measurement limit of 2.5 x 10⁻² µA/cm² at 77 K. Spread in dark current levels and optical gain, which reached the range of 10⁶ at 77 K, was lowest for the devices with the largest perimeter. All device types were reliably operational in a wide temperature range from 77 K to room temperature. The spectral sensitivity of the detectors extended from the visible to the telecom band, with responsivities of 0.15 and 0.135 A/W at 850 and 940 nm, respectively."

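For a sense of scale (a back-of-the-envelope conversion, not a figure from the paper), the quoted dark current densities translate to the following per-pixel currents for the 26 x 26 µm² pixels:

```python
# Back-of-the-envelope conversion of the quoted dark current densities to
# per-pixel values for the 26 x 26 um^2 pixels; my own arithmetic, for scale only.
Q_E = 1.602e-19                      # electron charge, C
AREA_CM2 = (26e-4) ** 2              # 26 um = 26e-4 cm  ->  6.76e-6 cm^2

for label, j_ua_cm2 in [("300 K", 35.0), ("77 K (measurement limit)", 2.5e-2)]:
    i_pixel = j_ua_cm2 * 1e-6 * AREA_CM2          # A per pixel
    print(f"{label}: {i_pixel:.2e} A/pixel "
          f"(~{i_pixel / Q_E:.1e} e-/s)")
# 300 K: ~2.4e-10 A/pixel (~1.5e+09 e-/s); 77 K: ~1.7e-13 A/pixel (~1.1e+06 e-/s)
```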

Leeds University publishes a PhD Thesis "Electronic Transport Properties of Silicon-Germanium Single Photon Avalanche Detectors" by Helen Rafferty.

"Single photon avalanche detectors (SPADs) have uses in a number of applications, including time-of-flight ranging, quantum key distribution and low-light sensing. Germanium has an absorption edge at the key communications wavelengths of 1.3-1.55um and can be grown epitaxially on silicon; however, SiGe SPADs exhibit a number of performance limitations, including low detection efficiencies, high dark counts and afterpulsing. Unintentional doping may affect electronic performance, and band-to-band tunnelling at the high operational voltages of SPADs may lead to noise currents. Additionally, defects at the Si/Ge interface lead to trap states within the bandgap and contribute to afterpulsing.

This work investigates a range of critical performance parameters in SiGe SPADs. The effect of intentional and unintentional doping in SPADs on electric fields, potential profiles and carrier transport in the device is investigated, and optimal dopant profiles for a SiGe SPAD discussed. The dependence of band-to-band tunnelling currents in Ge on bias voltage, Ge thickness and temperature is investigated, and these currents are compared to other sources of noise currents in SPADs. DFT calculations of misfit dislocation structures in Ge are undertaken, to establish electronic bandstructures and optimised geometries for these defects, and identify trap states in the bandgap, which may contribute to afterpulsing and dark counts in SPADs. A number of directions for continuing work are identified, to progress understanding of noise currents and afterpulsing in SPADs.
"

Himax and Emza Announce Human-Aware Vision for Notebooks

GlobeNewsWire: Himax and its wholly-owned subsidiary, Emza Visual Sense announce WiseEye 2.0 NB, an intelligent vision solution for notebook computers. It is said to be the industry’s first ultra-low power, AI-based intelligent visual sensor that adds the advanced human presence awareness functionality for notebooks while supporting always-on operation. The solution has been tested and well received by the leading global chipmaker and Quanta Computer, the world’s largest original design manufacturer (ODM) of notebook computers, for inclusion in their next-generation mainstream notebook platforms.

"We are excited that the WiseEye 2.0 solution’s coverage has expanded from IoT devices to notebook computers as it opens up new growth opportunities in the high-end notebook ecosystem. It’s a real win-win situation for OEMs, ODMs and Himax/Emza. Our unique technology consists of Himax’s CMOS image sensor and Emza’s AI-based computer vision algorithm running on an Himax-designed ASIC, all catered for ultra-low power consumption to enable always-on operation of the end device. The partnership with the world’s leading chipmaker and notebook ODM allows us to closely engage with multiple global notebook OEMs, targeting their next generation product launches for the 2020 back to school season,” said Jordan Wu, President and Chief Executive Officer of Himax Technologies.

The key features of the WiseEye 2.0 NB intelligent visual sensor for human presence detection include:

  • Enhanced AI-enabled User Experience: a combination of an ultra-low power image sensor and an energy-efficient CV image processing algorithm, augmented with AI-based machine learning, enables automatic wake-up of the notebook from standby mode or locking of the screen based on specific human behavior or movements. This is a significant improvement over existing solutions, which do not function when the notebook is in sleep mode.
  • Extended Battery Life: AI-based always-on camera (AoS) can detect user engagement levels based on presence and face posing, enabling power management of the display and maximizing battery life.
  • Improved Privacy and Security: WiseEye 2.0 NB can detect the presence of additional humans in the field of view and send an alert to the user.
  • Expanded FOV: Optimized for 60-90 deg. HFOV and flexible VFOV, as opposed to the currently used simple sensors, which are sensitive to screen angles. A wider FOV enables earlier detection and sensing of flexible movement even when users are close to the screen (see the rough coverage numbers after this list).
  • Increased Distance Detection: High accuracy sensing of human presence from up to 5 meters away enables a quick response to user detection even when approaching the device at high speeds.
  • Production Friendly Technology: Does not need strict tolerances in mounting versus solutions that require calibration due to limited FOV.
  • Tiny Form Factor: The Himax 2-in-1 (AoS and RGB) sensor is the first hybrid CMOS sensor designed specifically for notebook computers. The sensor combines high quality HD image capture with ultra-low-power visual sensing for AI context awareness applications. The new CMOS sensor will be available at the end of 2019.
  • Privacy Awareness: The sensor image is processed entirely on the dedicated WiseEye 2.0 processor, co-located with the CMOS image sensor, so that the image is never transmitted to the main platform. This architecture is specially designed to meet the highest privacy standards.
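
As referenced in the Expanded FOV item above, the practical effect of a 60-90 deg. HFOV can be illustrated with simple trigonometry (rough numbers of my own, not Himax/Emza data):

```python
# Rough illustration of why a 60-90 degree HFOV matters: horizontal scene
# coverage at a given distance, including the quoted 5 m detection range.
import math

def coverage_width(distance_m, hfov_deg):
    """Scene width covered at a given distance for a given horizontal FOV."""
    return 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)

for hfov in (60, 90):
    print(f"HFOV {hfov} deg: {coverage_width(0.5, hfov):.2f} m wide at 0.5 m, "
          f"{coverage_width(5.0, hfov):.1f} m wide at 5 m")
# HFOV 60 deg: 0.58 m at 0.5 m, 5.8 m at 5 m; HFOV 90 deg: 1.00 m at 0.5 m, 10.0 m at 5 m
```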

“Expanding our industry leading intelligent vision solutions into notebook computing is a great achievement,” said Yoram Zylberberg, CEO of Emza Visual Sense. “Applying ultra low-power machine learning AI for notebooks is the key, especially while device operation is suspended, to extend the life of the battery. Leveraging the AI benefits that we developed for IoT and now applying them to notebooks is a great demonstration of the agility of our solution and our readiness to adapt the technology for specific customer requirements.”

ON Semi Demos Super-Exposure Technology

ON Semi demos its Hayabusa super-exposure technology for automotive HDR with LFM:

Tuesday, May 21, 2019

Sony Strategy

Sony held its FY19 corporate strategy meeting and set the directions for its CIS business:
  • In imaging, Sony was able to deliver a stable supply of high value-added product to a market that is evolving not only toward higher resolution, but also toward multiple sensors per camera and larger sized sensors, while, at the same time, maintaining its number one market share position in CMOS sensors on a revenue basis.
  • Achieved steady development in the automotive and sensing parts of the business.
  • We expect to leverage the superior technology Sony has developed in this business to maintain our industry-leading position going forward.
  • Approximately 80% of CMOS sensor sales are to smartphones. Although this market has matured, demand for sensors continues to grow due to adoption of multiple sensors and larger sized sensors in smartphones. Demand for Time-of-Flight sensors in smartphones is also expected to increase.
  • Although investment in greater production capacity over the next few years is necessary, CMOS sensor production capacity does not become obsolete, resulting in high return on investment in the long term.
  • Initiatives in long-term growth prospects such as automotive sensors and Edge AI.
  • Expand business through fields such as distance measurement and automotive. Sony's automotive sensors are receiving positive external feedback.
  • Stacked CMOS image sensors to be made more intelligent by embedding AI functionality to the logic layer.
  • Sony will also actively pursue alliances with partner companies. Recently announced an MOU with Microsoft to collaborate in the area of AI.

Also, Sony 2019 IR Day updates on the company's imaging business directions:

Magik Eye Announces Invertible Light 3D Sensing

BusinessWire: Magik Eye reveals Invertible Light, a new method for depth sensing that is said to enable the smallest, fastest and most power-efficient 3D sensing. “While Structured Light, Time of Flight and Stereoscopic Imaging are the primary methods today, Invertible Light aims to transform 3D sensing in the coming age of robotics and machine vision for the masses,” said Takeo Miyazawa, Founder & CEO of Magik Eye.

Current methods such as Structured Light have been around for more than 25 years and are based on legacy design. They fundamentally require the projection of a specific or random pattern to measure the distance to an object in 3D. The result is significant power usage, multiple components and complexity for production. All of this ultimately translates into higher cost for the consumer. Invertible Light by contrast projects a regular dot pattern on an object using only a projector and an image sensor. The result of this breakthrough in optics and mathematics is the smallest, fastest & most power-efficient 3D sensing.
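
For context on the structured-light baseline being compared against, the conventional approach recovers depth by triangulation between a projector and a camera separated by a baseline. The sketch below shows that legacy math, not Invertible Light itself, whose method is not detailed in the announcement:

```python
# Conventional triangulation used by classic structured-light / active-stereo systems:
# depth is focal length times baseline divided by the observed disparity.
# This is the legacy approach Magik Eye compares against, not Invertible Light.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole triangulation: z = f * b / d."""
    return focal_px * baseline_m / disparity_px

# Example: 800-pixel focal length, 5 cm projector-camera baseline, and a projected
# dot observed 40 px away from its infinity position give a depth of 1 m.
print(depth_from_disparity(800, 0.05, 40.0))   # 1.0
```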

Omnivision Unveils 5MP RGB-IR Sensor for Laptops

PRNewswire: OmniVision announced the OV5678, said to be the industry’s first 5MP RGB-IR image sensor for 2-in-1 convertible laptops. The sensor enables a single camera to provide both IR capture for Windows Hello facial authentication with greater accuracy and RGB images for selfies and videoconferencing.

“Previously, Windows Hello facial authentication was not commonly found in 2-in-1 convertible laptops, as it required a second camera for IR functionality,” said Jason Chiang, product marketing manager at OmniVision. “The OV5678 eliminates the need for a second camera by combining RGB and IR capabilities in a single 5MP sensor, saving space while increasing value.”

To ensure high quality color images, the OV5678 is built on OmniVision’s 1.12 micron PureCel Plus pixel architecture with deep trench isolation for greatly reduced color crosstalk. Additionally, its buried color filter array (BCFA) has a high tolerance for collecting light with various incident light angles.

The PureCel Plus architecture also utilizes thicker silicon to improve QE when capturing images using NIR light outside the visible spectrum. This is accomplished with only 1.3MP, which is a quarter of the OV5678 sensor’s full resolution. This IR performance enables machine vision applications such as Windows Hello facial authentication. It can also be used to perform eye tracking for reduced power consumption when the user is not viewing the screen. Eye tracking can also enable user warnings about eye fatigue from looking at the screen for an extended period of time.

The OV5678 is available now for samples and volume production.

Ge-on-Si SPAD Sensors

Gerald Buller, who leads the Single-Photon Group at Heriot-Watt University, UK, presents Ge-on-Si SPAD devices that are intended to solve the low IR detection efficiency of regular Si-based SPADs:


Recently, the group has published a Nature Communications paper "High performance planar germanium-on-silicon single-photon avalanche diode detectors" by Peter Vines, Kateryna Kuzmenko, Jarosław Kirdoda, Derek C. S. Dumas, Muhammad M. Mirza, Ross W. Millar, Douglas J. Paul, and Gerald S. Buller:

"In the short-wave infrared, semiconductor-based single-photon detectors typically exhibit relatively poor performance compared with all-silicon devices operating at shorter wavelengths. Here we show a new generation of planar germanium-on-silicon (Ge-on-Si) single-photon avalanche diode (SPAD) detectors for short-wave infrared operation. This planar geometry has enabled a significant step-change in performance, demonstrating single-photon detection efficiency of 38% at 125 K at a wavelength of 1310 nm, and a fifty-fold improvement in noise equivalent power compared with optimised mesa geometry SPADs. In comparison with InGaAs/InP devices, Ge-on-Si SPADs exhibit considerably reduced afterpulsing effects. These results, utilising the inexpensive Ge-on-Si platform, provide a route towards large arrays of efficient, high data rate Ge-on-Si SPADs for use in eye-safe automotive LIDAR and future quantum technology applications."

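As context for the quoted 38% single-photon detection efficiency and the fifty-fold NEP improvement, the noise-equivalent power of a SPAD is commonly estimated as NEP = (h*nu / SPDE) * sqrt(2 * DCR). The sketch below uses an assumed dark count rate purely for illustration, not a value from the paper:

```python
# How noise-equivalent power relates to the quoted 38% single-photon detection
# efficiency: NEP = (h*nu / SPDE) * sqrt(2 * DCR). The dark count rate below is
# an ASSUMED placeholder for illustration, not a number from the paper.
H = 6.626e-34       # Planck constant, J*s
C = 3.0e8           # speed of light, m/s

def spad_nep(wavelength_m, spde, dark_count_rate_hz):
    photon_energy = H * C / wavelength_m
    return (photon_energy / spde) * (2.0 * dark_count_rate_hz) ** 0.5

# 1310 nm, 38% SPDE, assumed DCR of 1e5 counts/s -> NEP on the order of 1e-16 W/Hz^0.5
print(spad_nep(1.31e-6, 0.38, 1e5))
```
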
Emberion Article

InVision: Emberion publishes an article on its graphene-based image sensors:

"The key features of the graphene-based VIS-SWIR sensor technology are the wide spectral response range, excellent sensitivity and noise performance, and large dynamic operation range. The spectral response range starts from 400nm and extends initially up to 1,800nm, in the future extending to even longer wavelengths. The properties of graphene, namely the high mobility of the charge carriers and the maximal surface to volume ratio of a 2D material, result into a low noise, high internal gain and non-saturating response behavior. The linear and full dynamic operation ranges are 60 and 120dB, respectively. The light absorbing materials pose a trade-off between the operation speed and sensitivity. Currently, the frame rate is limited to 100fps but is expected to increase in future products. In respect to Specific Detectivity (D*) and Noise Equivalent Irradiance (NEI) performance, these novel sensors are on par with InGaAs photodiodes in SWIR and outperform them in VIS spectral domain at 30fps operation speed. The graphene-based sensors can be operated in room temperature but the optimal performance is achieved with a one-stage Peltier cooling element. The sensors will offer cost-wise an attractive alternative to InGaAs sensors. Therefore, this new sensor technology will allow product concepts and applications which have previously been prohibited by the high cost of SWIR sensors and which require a wider spectral response range."

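For reference, the quoted 60dB linear and 120dB full dynamic ranges convert to intensity ratios as follows (using the usual 20*log10 image-sensor convention; my own conversion, not Emberion data):

```python
# Converting the quoted dynamic-range figures from dB to intensity ratios,
# using the common image-sensor convention DR_dB = 20 * log10(I_max / I_min).
def db_to_ratio(db):
    return 10.0 ** (db / 20.0)

print(db_to_ratio(60))    # 1e3  -> linear range spans ~3 decades of illumination
print(db_to_ratio(120))   # 1e6  -> full (non-saturating) range spans ~6 decades
```
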
Monday, May 20, 2019

Omnivision Announces 2.8um HDR DCG Split Pixel with LED Flicker Mitigation

PRNewswire: OmniVision announces the OX01D10, a 1MP image sensor for automotive applications. The sensor combines split-pixel and dual conversion gain (DCG) technology to deliver artifact-free motion capture, HDR of up to 120dB, and LED flicker mitigation (LFM).

"The OX01D10 delivers low power and high performance in a small form factor," said Andy Hanvey, automotive marketing director at OmniVision. "We provide the industry's leading LFM performance over the full automotive temperature range, which meets the needs of OEMs that are increasingly requiring cameras to mitigate the flicker from LED lighting in vehicles, signs, buildings and a wide variety of other outdoor illumination."

The OX01D10 consumes less than 200mW at 30fps, has advanced ASIL features, and HDR of 120dB without LFM (110dB in LFM mode). AEC-Q100 Grade 2 certified samples and evaluation kits are available now.

ST Imaging Roadmap

ST 2019 Capital Markets Day brings us an update on the company's imaging business:

Sunday, May 19, 2019

Zoom To Learn, Learn To Zoom

Arxiv.org paper "Zoom To Learn, Learn To Zoom" by Xuaner Cecilia Zhang, Qifeng Chen, Ren Ng, and Vladlen Koltun from UC Berkeley, HKUST, and Intel Labs claims a significant improvement over the earlier digital zoom algorithms:

"This paper shows that when applying machine learning to digital zoom for photography, it is beneficial to use real, RAW sensor data for training. Existing learning-based super-resolution methods do not use real sensor data, instead operating on RGB images. In practice, these approaches result in loss of detail and accuracy in their digitally zoomed output when zooming in on distant image regions. We also show that synthesizing sensor data by resampling high-resolution RGB images is an oversimplified approximation of real sensor data and noise, resulting in worse image quality. The key barrier to using real sensor data for training is that ground truth high-resolution imagery is missing. We show how to obtain the ground-truth data with optically zoomed images and contribute a dataset, SR-RAW, for real-world computational zoom. We use SR-RAW to train a deep network with a novel contextual bilateral loss (CoBi) that delivers critical robustness to mild misalignment in input-output image pairs. The trained network achieves state-of-the-art performance in 4X and 8X computational zoom."

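The contextual bilateral loss (CoBi) mentioned in the abstract matches each source feature to its best target feature under a combined feature-plus-spatial distance, which is what tolerates mild misalignment between the optically zoomed ground truth and the network input. Below is a simplified sketch of that idea (the paper's actual loss uses pretrained VGG features and additional terms):

```python
# Simplified PyTorch sketch of a contextual-bilateral-style (CoBi) loss: each source
# feature is matched to its nearest target feature under a combined feature + spatial
# distance, giving robustness to mild misalignment. Not the paper's exact formulation.
import torch

def cobi_loss(feat_p, feat_q, coords, ws=0.1):
    """feat_p, feat_q: (N, C) feature vectors; coords: (N, 2) normalized positions."""
    # pairwise feature distance (cosine)
    p = torch.nn.functional.normalize(feat_p, dim=1)
    q = torch.nn.functional.normalize(feat_q, dim=1)
    d_feat = 1.0 - p @ q.t()                          # (N, N)
    # pairwise spatial distance between the locations of p_i and q_j
    d_spatial = torch.cdist(coords, coords)           # (N, N)
    d = d_feat + ws * d_spatial
    # for every source feature, take its best match in the target
    return d.min(dim=1).values.mean()

# toy usage: 100 random 64-D features on a shared grid of normalized positions
feats_a, feats_b = torch.rand(100, 64), torch.rand(100, 64)
xy = torch.rand(100, 2)
print(cobi_loss(feats_a, feats_b, xy))
```
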
Oxford University Thesis on Single-Slope ADCs

University of Oxford PhD thesis "Investigations of time-interpolated single-slope analog-to-digital converters for CMOS image sensors" by Deyan Levski explores time stretching and other concepts improving SS-ADC resolution and speed:

"The focus of the presented investigations here is to shed light on methods in Time-to-Digital Converter interpolation of single-slope ADCs. By using high-factor time-interpolation, the resolution of single-slope converters can be increased without sacrificing conversion time or power.

This work emphasizes on solutions for improvement of multiphase clock interpolation schemes, following an all-digital design paradigm. Presented is a digital calibration scheme which allows a complete elimination of analog clock generation blocks, such as PLL or DLL in Flash TDC-interpolated single-slope converters.
"

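To see why TDC interpolation helps, note that a conventional single-slope conversion needs one counter clock period per code, while a K-phase interpolator subdivides each period, adding log2(K) bits of resolution for the same ramp, or equivalently cutting the conversion time by K at fixed resolution. A rough illustration with assumed numbers (not taken from the thesis):

```python
# Rough illustration of why TDC interpolation helps a single-slope ADC: the coarse
# counter resolves whole clock periods, while a K-phase interpolator subdivides each
# period, so the same resolution needs a 2^log2(K) times shorter ramp.
import math

def ss_adc_conversion_time(resolution_bits, clock_hz, interp_factor=1):
    coarse_bits = resolution_bits - math.log2(interp_factor)
    return (2 ** coarse_bits) / clock_hz

CLK = 500e6   # assumed counter clock
for k in (1, 4, 16):
    t = ss_adc_conversion_time(12, CLK, k)
    print(f"12-bit ramp, {k}-phase interpolation: {t*1e6:.2f} us per conversion")
# 1-phase: 8.19 us, 4-phase: 2.05 us, 16-phase: 0.51 us
```
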
Saturday, May 18, 2019

Oxford University Thesis on Log Sensors

Oxford University, UK, publishes a PhD Thesis "Integrating logarithmic wide dynamic range CMOS image sensors" by Mus'ab B Shaharom:

"Conventional CMOS image sensors with a logarithmic response attempt to address the limited dynamic range of the linear digital image sensors by exploiting the subthreshold operation of a transistor in a pixel. This results in CMOS pixels that are able to capture light intensities of more than six decades (120 dB). However, the approach comes at the expense of high fixed pattern noise (FPN) and slow response.

The work presented in this thesis describes a five all nMOS transistor (5T) pixel architecture that aims to achieve wide dynamic range. This feature is obtained using a time-varying reference voltage that is applied to one of the transistors of the pixel. The reference voltage varies in a logarithmic fashion in order to modulate the effective integration time of the pixel.
"