Friday, September 20, 2019

Autosens Brussels Awards

Autosens Brussels announces its awards in several categories, some of them image sensing-related:

Most Exciting Start-up – sponsored by Sony Semiconductor Solutions
  • Winner: TriEye
  • Silver: Outsight
  • Silver: WaveSense
Best in Class Perception System – sponsored by Varroc Lighting Systems
  • Winner: OmniVision Technologies – OAX4010 ISP ASIC
  • Silver: General Motors – Transparent Trailer for 2020 GMC Sierra and Silverado HD
  • Silver: Innoviz – InnovizOne LiDAR
Most Innovative In-Cabin Application
  • Winner: Daimler – MBUX Interior Assistant
  • Silver: Eyeris – In-vehicle Scene Understanding AI
  • Silver: Seeing Machines – FOVIO Driver Monitoring Technology

Thursday, September 19, 2019

Wednesday, September 18, 2019

Interview with Eric Fossum

Art19 publishes an hour-long interview with Eric Fossum:

OmniVision's Automotive SoC Claimed to Have Industry's Best Low-Light Performance, Lowest Power, and Smallest Size

PRNewswire: OmniVision announces the 1.3MP OX01F10 SoC, a 1/4-inch image sensor with 3.0um pixels and an integrated ISP for automotive rear view cameras (RVC) and surround view systems (SVS).

"Analysts predict that SVS and RVC will continue to hold the majority share in the automotive camera market, with over 50% of the total market volume through 2023. SVS, in particular, is expected to double its growth between now and 2023 due to increased customer adoption," said Andy Hanvey, director of automotive marketing at OmniVision. "Our OX01F10 SoC provides the best option for automotive designers responding to this growing consumer demand for better RVCs, along with the expansion of SVS into the mainstream market. Additionally, this SoC's functional safety features allow module providers to create a single platform for both the viewing cameras and the machine vision applications that require ASIL B."

OmniVision's dual conversion gain (DCG) technology is employed in this SoC to achieve a high dynamic range of 120dB with only two captures, as opposed to the three required by the competition, which minimizes motion artifacts while reducing power consumption and boosting low-light performance. The OX01F10's typical power consumption is less than 300mW, which is said to be 30% lower than competing solutions.
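The two-capture DCG idea can be illustrated with a toy merge: use the high-conversion-gain capture where it is unsaturated (best SNR in the shadows) and fall back to the scaled low-conversion-gain capture in the highlights. This is only a hedged sketch; the gain ratio and saturation level below are hypothetical, not OmniVision's actual pipeline.

```python
import numpy as np

# Hypothetical parameters for illustration only.
GAIN_RATIO = 8.0      # assumed HCG/LCG sensitivity ratio
SAT_LEVEL = 4095      # 12-bit saturation code

def merge_dcg(hcg, lcg, sat=SAT_LEVEL, ratio=GAIN_RATIO):
    """Use the high-gain capture where it is not saturated; fall back to
    the scaled low-gain capture in highlights, extending dynamic range."""
    hcg = np.asarray(hcg, dtype=np.float64)
    lcg = np.asarray(lcg, dtype=np.float64)
    use_hcg = hcg < 0.9 * sat          # keep a margin below saturation
    return np.where(use_hcg, hcg, lcg * ratio)

# Three pixels: shadow, mid-tone near HCG saturation, blown highlight.
frame = merge_dcg([100, 4000, 4095], [12, 500, 3000])
```

Because both captures come from the same exposure, there is no temporal gap between them, which is why a DCG merge avoids the motion artifacts of multi-exposure HDR.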

Its integrated ISP features:
  • Lens chromatic aberration correction
  • Advanced noise reduction and local tone mapping
  • Optimizations for the on-chip image sensor's PureCel Plus technology

PureCel Plus technology provides what is said to be an industry-best SNR1 of 0.19 lux. As a result, the OX01F10 is claimed to outperform the competition across challenging lighting conditions. The OX01F10 SoC is AEC-Q100 Grade 2 certified, and samples are available now.
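For readers unfamiliar with the SNR1 metric (the scene illuminance at which the signal-to-noise ratio reaches 1, so lower is better), a minimal shot-noise-plus-read-noise model shows how it is derived. The read-noise and responsivity numbers below are hypothetical, not OmniVision's.

```python
import math

def snr(signal_e, read_noise_e):
    """SNR for a mean signal (electrons) with shot noise + read noise."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

def snr1_signal(read_noise_e):
    """Signal (electrons) where SNR == 1: solve S^2 = S + sigma_r^2."""
    return (1 + math.sqrt(1 + 4 * read_noise_e ** 2)) / 2

sigma_r = 2.0                      # hypothetical read noise, e- rms
s1 = snr1_signal(sigma_r)          # electrons needed for SNR = 1

# With a hypothetical responsivity of 15 e- per lux per exposure,
# SNR1 expressed in lux is simply s1 / responsivity.
snr1_lux = s1 / 15.0
```

The model makes the trade-off visible: larger pixels (higher responsivity) or lower read noise both push SNR1 toward fainter illuminance.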

Tuesday, September 17, 2019

Outsight Develops 3D LiDAR-Spectrometer-on-Wheels for Automotive Applications

BusinessWire: Outsight launches its 3D Semantic Camera for autonomous driving and other industries. Outsight's founders, Raul Bravo (co-founder and CEO of the former company Dibotics) and Cedric Hutchings (co-founder of Withings and former VP of Nokia Technologies), joined forces to create a new entity that aims to combine the software assets of Dibotics with 3D sensor technology. Together with Dibotics' other co-founder Oliver Garcia and Scott Buchter, co-founder of Lasersec, the four have assembled a global team of top talent in San Francisco, Paris, and Helsinki to turn their vision into reality.

"We are excited to unveil our 3D Semantic Camera that brings an unprecedented solution for a vehicle to detect road hazards and prevent accidents." - Cedric Hutchings, CEO and Co-founder of Outsight.

Outsight's 3D Semantic Camera is said to bring full situation awareness and new levels of safety and reliability to currently human-controlled machines such as Level 1-3 ADAS, construction and mining equipment, and helicopters, and also to accelerate the emergence of fully automated smart machines such as Level 4-5 self-driving cars, robots, drones, and autonomous flying taxis.

"Our 3D Semantic Camera is not only able to tackle current driving safety problems, but brings driving safety to new levels. By unveiling the full reality of the world and providing information that was previously invisible, we at Outsight are convinced that a whole new world of applications will be unleashed. This is just the beginning." - Raul Bravo, President and Co-founder of Outsight.

The technology is said to be the first of its kind intended to provide full situation awareness in a single device. It is a mass-producible, all-in-one technology with the ability to simultaneously perceive and comprehend the environment from hundreds of meters away, including the key chemical composition of objects (skin, cotton, ice, snow, plastic, metal, wood...).

This is partly made possible through the development of a low-power, long-range, eye-safe broadband laser that allows material composition to be identified through active hyperspectral analysis. Combined with its 3D SLAM-on-Chip capability (Simultaneous Localization and Mapping), Outsight's technology is able to unveil the full reality of the world in real time. The 3D Semantic Camera provides actionable information and object classification through an onboard SoC that does not rely on machine learning, reducing the power consumption and bandwidth needed. This approach eliminates the need for massive training data sets and replaces guesswork with actual measurement of the objects: being able to determine an object's material adds a new level of confidence about what the camera is actually seeing.
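The "measure instead of learn" idea can be sketched as a nearest-neighbor match of a measured reflectance spectrum against a small reference library. All band values below are made-up placeholders, not Outsight data; a real system would use many more hyperspectral bands.

```python
import numpy as np

# Hypothetical per-band reflectances for three materials (illustrative only).
REFERENCE = {
    "snow":    np.array([0.90, 0.85, 0.20]),
    "ice":     np.array([0.40, 0.35, 0.10]),
    "asphalt": np.array([0.10, 0.12, 0.15]),
}

def classify(spectrum):
    """Return the reference material closest in Euclidean distance.

    No training step: classification is a direct comparison of the
    measured spectrum against physically measured references."""
    spectrum = np.asarray(spectrum, dtype=np.float64)
    return min(REFERENCE, key=lambda m: np.linalg.norm(REFERENCE[m] - spectrum))

label = classify([0.38, 0.33, 0.12])   # lies nearest the "ice" reference
```

This is why no massive data set is needed: the discriminating information is physically measured per object rather than statistically inferred.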

It can not only see and measure but also comprehend the world, as it provides the position, size, and full velocity of all moving objects in its surroundings, providing valuable information for path planning and decision making. The 3D Semantic Camera can also report road conditions, for example identifying black ice and other hazardous surfaces, a feature vital for safety in ADAS. The system can likewise quickly identify pedestrians and bicyclists through its material identification capabilities.

Outsight has already started joint development programs with key OEMs and Tier1 providers in Automotive, Aeronautics and Security-Surveillance markets and will progressively open the technology to other partners in Q1-2020.

Thanks to JB for the info!

Synopsys, Himax Announce AI Vision Processor

GlobeNewswire: Himax announces the WiseEye WE-I Plus, an AI accelerator-embedded ASIC platform solution to develop and deploy CNN-based machine learning (ML) models on AIoT applications including smart home appliances and surveillance systems.

The WiseEye WE-I Plus ASIC adopts a programmable processor with enhanced DSP features and power-efficient CDM, HOG, and JPEG hardware accelerators for real-time motion detection, object detection, and image processing. To address the rising security risks surrounding AIoT applications, the WiseEye WE-I Plus ASIC is equipped with comprehensive, integrated hardware and software security, such as secure boot, secure OTA updates, and secure metadata output over TLS. To meet the demand for ultra-low power and long battery life, in addition to the low-power ASIC design, the embedded LDO and multi-state PMU have been purpose-built to support shutdown, AoS (always-on sensing), and CV-efficient operation modes. Furthermore, an associated software library with a comprehensive tool chain is provided for efficient implementation of ML when processing data captured from image, voice, and ambient sensors.

"The demand for battery-powered smart devices with AI-enabled intelligent sensing is rapidly growing, especially in markets such as home appliances, door locks, TVs, notebooks and building control or security. Our WiseEye WE-I Plus ASIC platform solution can be used with popular ML frameworks for the development of a wide range of applications in audio, video and signal processing where power is a strict constraint and on-device memory is limited. We are receiving positive feedback from our partners and leading industry players," said Jordan Wu, President and CEO of Himax.

The chip is based on Synopsys ARC EV7x Vision Processor IP:

Sony Officially Rejects Call to Spin-off Image Sensor Business

PRNewswire: Sony publishes "Letter from the CEO to Sony’s Shareholders and All Stakeholders" rejecting the possibility of spinning off its image sensor business:

"...On June 13, 2019, Third Point LLC (“Third Point”) issued a public letter to investors suggesting that Sony should consider spinning-off and publicly listing our semiconductor business, which would effectively separate Sony into an entertainment company and a semiconductor (technology) company. We appreciate Third Point’s strong interest in Sony and welcome the fact that many people have been reminded of the value and further growth opportunities of that business.

Sony’s Board and management team, along with external financial and legal advisors in Japan and the U.S., conducted an extensive analysis of Third Point’s recommendations. Following this review, Sony’s Board, which is comprised of a majority of independent outside directors with diverse experience in a variety of industries, unanimously concluded that retaining the semiconductor business (now called the Imaging & Sensing Solutions (“I&SS”) business) is the best strategy for enhancing Sony’s corporate value over the long term. This is based on the fact that the I&SS business is a crucial growth driver for Sony that is expected to create even more value going forward through its close collaboration with the other businesses and personnel within the Sony Group. The Board also reaffirmed that to maintain and further strengthen its own competitiveness, it would be best for the I&SS business to stay within the Sony Group.

In its letter, Third Point described our semiconductor business, which is centered on image sensors, as a “Japanese crown jewel and technology champion.” Sony’s Board and management team share this view and are excited about the immense potential the I&SS business brings Sony. We expect it to not only further expand its current global number one position in imaging applications, but also continue to grow in new and rapidly developing markets such as the Internet of Things (“IoT”) and autonomous driving. We also expect it will contribute to the creation of a safer and more reliable society through its innovative technology.

While Sony’s Board and management team do not agree with Third Point’s recommendation to spin-off and publicly list the I&SS business, we will continue to proactively evaluate Sony’s business portfolio, pursue asset optimization within each business, and supplement our public disclosures as we execute on our strategy to increase shareholder value over the long term.

...Our strategy for future growth of the I&SS business is to develop AI sensors which make our sensors more intelligent by embedding artificial intelligence (AI) into the sensors themselves. We envisage AI and sensing being used across a wide range of applications such as IoT, autonomous driving, games and advanced medicine, and believe there is a potential for image sensors to evolve from the hardware they are today, to a solutions and platforms business.

...Our analysis, which was carried out in collaboration with outside financial advisors, also identified multiple meaningful sources of dis-synergy if the I&SS business was to separate from Sony and operate as a publicly listed independent company. These dissynergies include increased patent licensing fees, reduced ability to attract talent, increased costs and management resources as a publicly listed company, and tax inefficiencies, in addition to the time required for making the public listing.

Monday, September 16, 2019

NHK Future TV Technology Relies on 3D Vision

NHK STRL presentation from May 2019 talks about the company's vision for 2030-40 TV technology where 3D imaging takes a central role:

CNN Processor in Every Pixel

Manchester and Bristol Universities, UK, publish paper "A Camera That CNNs: Towards Embedded Neural Networks on Pixel Processor Arrays" by Laurie Bose, Jianing Chen, Stephen J. Carey, Piotr Dudek, and Walterio Mayol-Cuevas (see the video presentation in an earlier post.)

"We present a convolutional neural network implementation for pixel processor array (PPA) sensors. PPA hardware consists of a fine-grained array of general-purpose processing elements, each capable of light capture, data storage, program execution, and communication with neighboring elements. This allows images to be stored and manipulated directly at the point of light capture, rather than having to transfer images to external processing hardware. Our CNN approach divides this array up into 4x4 blocks of processing elements, essentially trading-off image resolution for increased local memory capacity per 4x4 "pixel". We implement parallel operations for image addition, subtraction and bit-shifting images in this 4x4 block format. Using these components we formulate how to perform ternary weight convolutions upon these images, compactly store results of such convolutions, perform max-pooling, and transfer the resulting sub-sampled data to an attached micro-controller. We train ternary weight filter CNNs for digit recognition and a simple tracking task, and demonstrate inference of these networks upon the SCAMP5 PPA system. This work represents a first step towards embedding neural network processing capability directly onto the focal plane of a sensor."
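The paper's core trick is that with ternary weights {-1, 0, +1}, a convolution reduces to additions, subtractions, and bit-shifts, exactly the operations a PPA's simple per-pixel elements can execute. A NumPy sketch of that idea (an illustrative reimplementation, not the authors' SCAMP5 code):

```python
import numpy as np

def ternary_conv2d(img, w):
    """Valid-mode 2D filtering with a ternary {-1, 0, +1} weight kernel,
    using only additions, subtractions, and a final bit-shift."""
    w = np.asarray(w)
    assert set(np.unique(w)) <= {-1, 0, 1}
    H, W = img.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=np.int64)
    for i in range(kh):
        for j in range(kw):
            patch = img[i:i + out.shape[0], j:j + out.shape[1]]
            if w[i, j] == 1:
                out += patch          # weight +1: pure addition
            elif w[i, j] == -1:
                out -= patch          # weight -1: pure subtraction
    return out >> 1                   # bit-shift scaling (divide by 2)

img = np.arange(16, dtype=np.int64).reshape(4, 4)
edge = ternary_conv2d(img, [[1, -1], [1, -1]])  # horizontal edge filter
```

On the actual hardware these per-kernel-tap additions run in parallel across the 4x4 processing-element blocks rather than in a Python loop, but the arithmetic is the same.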

Omnivision Connects Arm ISP IP with its Automotive Sensor

PRNewswire: OmniVision has combined its OX03A1Y sensor with an FPGA-based Arm Mali-C71 ISP for a dual-mode automotive camera module.

"OmniVision's dual-mode image sensor showcases the Mali-C71's ability to process multiple real-time inputs with one pipeline, capturing both human display and computer vision images with a single image sensor, at the highest possible quality," said Tom Conway, director of product management, automotive and IoT Line of Business, Arm.

"Arm's ISP intellectual property is an important part of the automotive ecosystem, and they are a key partner for OmniVision," said Celine Baron, staff automotive product marketing manager at OmniVision. "This collaboration demonstrates the high performance that can be achieved by combining our premium 2.5MP image sensor with Arm's ISP, for automotive applications that need both computer vision and human displays from a single camera module."

OmniVision and Arm used an FPGA emulating the Mali-C71 ISP to simultaneously process images captured by the OX03A1Y sensor for both computer vision and human displays. This sensor uses an RCCB clear color filter pattern to capture high quality images in all lighting conditions. The Mali-C71 then processes the data concurrently, outputting two simultaneous image signals for both human viewing and machine vision.

The OX03A1Y is the industry's first image sensor to feature a 3.2µm pixel with 120dB HDR, dual conversion gain (DCG) and an RCCB color filter. DCG provides motion-free HDR up to ~85dB, for the best images when vehicles are in motion. The RCCB color filter lets in more light, which, in combination with the OmniBSI-2 pixel, produces low-light performance with SNR1 at 0.09 lux, all with low power consumption. This is the first sensor to integrate all three capabilities. Additionally, the OX03A1Y is shipping in volume to automotive customers.

The OX03A1Y is available in a small 8.0 x 7.2mm chip-scale package, which is 35% smaller than competing image sensors. Additionally, this image sensor's power consumption is 20% lower than the competition.

The 2.5MP OX03A1Y image sensor integrates advanced ISO 26262 ASIL B functional safety features.

Sunday, September 15, 2019

QIS Sensors to Help NASA Missions

EurekAlert: NASA is awarding a team of researchers from Rochester Institute of Technology and Dartmouth College a grant to develop a detector capable of sensing and counting single photons for future astrophysics missions. The detector leverages Quanta Image Sensor (QIS) technology and measures every photon that strikes the image sensor. While other sensors have been developed to see single photons, the QIS has several advantages including the ability to operate at room temperature, resistance to radiation and the ability to run on low power.
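The photon-counting principle behind a QIS can be sketched by summing many single-bit "jot" fields: each tiny jot reports only whether at least one photon arrived during a field, and an image is the sum over many fields. The rates below are illustrative, not from the NASA project.

```python
import numpy as np

rng = np.random.default_rng(0)

def qis_image(photon_rate, n_fields):
    """Sum n_fields single-bit exposures of a scene.

    photon_rate: mean photons per jot per field (Poisson arrivals).
    A jot saturates at 1, so each field records min(arrivals, 1)."""
    arrivals = rng.poisson(photon_rate, size=(n_fields,) + photon_rate.shape)
    return (arrivals > 0).sum(axis=0)

scene = np.array([[0.05, 0.5], [1.0, 3.0]])   # photons/jot/field
counts = qis_image(scene, n_fields=1000)
# The expected hit fraction per jot is 1 - exp(-rate), so summed counts
# approach n_fields * (1 - exp(-rate)); inverting this recovers the rate.
```

The nonlinearity at high rates (a jot cannot count two photons in one field) is why QIS processing inverts the 1 - exp(-rate) response rather than treating counts as linear intensity.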

"This will deliver critical technology to NASA, its partners and future instrument principal investigators," said Don Figer, director of RIT's Center for Detectors, the Future Photon Initiative and principal investigator for the grant. "The technology will have a significant impact for NASA space missions and ground-based facilities. Our detectors will provide several important benefits, including photon counting capability, large formats, relative immunity to radiation, low power dissipation, low noise radiation and pickup, lower mass and more robust electronics."

The project's co-investigators include RIT Assistant Professor Michael Zemcov and Dartmouth Professor Eric R. Fossum. Fossum has focused on inventing the QIS technology while RIT is leading application-specific development that leverages their expertise in astrophysics.

"We're excited for this collaboration with RIT to build upon Dartmouth's proof-of-concept QIS technology to research and develop instrument-grade sensors that can detect single photons in the dimmest possible light," Fossum said. "This has tremendous implications for astrophysics and enables NASA scientists to collect light from extremely distant objects."

The researchers will develop the technology over the next two years. The Center for Detectors will publish results, reports, and data processing and analysis software on its website.

Thursday, September 12, 2019

CCD vs CMOS in Display QC Application

Radiant Vision, a Konica Minolta company, publishes an interesting comparison of CCD and CMOS cameras in display quality control applications:

Wednesday, September 11, 2019

Harvest Imaging Forum is 75% Full

Harvest Imaging Forum, to be held in December 2019 in Delft, the Netherlands, is quickly approaching fully booked status: more than 75% of the seats have been sold. This year's Forum topics are:

  • "On-Chip Feature Extraction for Range-Finding and Recognition Applications" by Makoto IKEDA (University of Tokyo, Japan)
  • "Direct ToF 3D Imaging: from the Basics to the System" by Matteo PERENZONI (FBK, Trento, Italy)

Image Sensors for Machine Vision

ON Semi publishes a webinar "The Current State of Machine Vision Technology: Image Sensor Challenges and Selection."

BusinessWire: ON Semi also announces a 0.3MP machine vision sensor with 2.2um BSI pixels, the 1/10-inch ARX3A0. The new sensor has 1:1 aspect ratio and features ON Semiconductor’s NIR+ technology.

The 560 x 560 pixel sensor can operate at up to 360fps. It consumes less than 19mW when capturing images at 30fps, and 2.5mW when capturing at 1fps.

Gianluca Colli, VP and GM, Consumer Solution Division of Image Sensor Group at ON Semiconductor said: “As we approach an era where Artificial Intelligence (AI) is becoming an integral part of vision-based systems, it becomes clear that we now share this world with a new kind of intelligence. The ARX3A0 has been designed for that new breed of machine, where vision is as integral to their operation as it is ours.”

Tuesday, September 10, 2019

MCT and Microbolometric Imagers in China

China has achieved a lot of advances in cooled MCT and microbolometric imagers, including high resolution up to 2.7K x 2.7K and pixel size down to 10um. These are imagers from Norinco, CETC, iRay, GST, HikVision, and Dali presented at CIOE Show held in Shenzhen, China, last week:

-Norinco picture removed due to the absence of publishing permission-

Thanks to AB for the info!

Monday, September 09, 2019

Sony Unveils 61MP Full-Frame and 26MP APS-C Sensors for Security Applications

Sony unveils four new sensors for security and surveillance applications: the IMX415-AAMR, IMX455AQK-K, IMX533CQK-D, and IMX571BQR-J.

Sunday, September 08, 2019

UBS: Galaxy S10 5G Cameras Cost $73

IFNews: According to UBS report, Samsung Galaxy S10 5G cameras, including ToF ones, cost $73. The cameras are the 2nd most expensive component after the display:

Front cameras:
  • Selfie Camera
  • ToF Depth Camera

Rear cameras:
  • Telephoto Camera
  • Wide-angle Camera
  • Ultra Wide Camera
  • ToF Depth Camera

Saturday, September 07, 2019

Huawei Kirin 990 5G Camera Features

HuaweiCentral: Huawei presents its new mobile processor, the Kirin 990 5G, at IFA 2019 in Berlin, Germany. One of its most impressive imaging features is the AI-based ability to determine heart rate and breathing rate from the selfie camera's video stream alone:

Another impressive feature is a real-time video segmentation:

More pictures from the company's IFA presentation:

Friday, September 06, 2019

DARPA Starts Curved IR Imagers Program

The DARPA FOcal arrays for Curved Infrared Imagers (FOCII) program was created to expand upon the current commercial trend toward curved visible sensor arrays by extending the capability to both large and medium format midwave (MWIR) and/or longwave (LWIR) infrared detectors. The program seeks to develop and demonstrate technologies for curving existing state-of-the-art large format, high performance IR FPAs to a small radius of curvature (ROC) to maximize performance, as well as curving smaller format FPAs to an extreme ROC to enable the smallest form factors possible while maintaining exquisite performance.

FOCII will address this challenge through two approaches to fabricating a curved FPA. The first involves curving existing state-of-the-art FPAs while keeping the underlying design intact. The focus of the research will be on achieving significant performance improvements over existing flat FPAs, with a target radius of curvature of 70mm. The fundamental challenge researchers will work to address within this approach is mitigating the mechanical strain created by curving the FPA, particularly in silicon, which is very brittle.

The second approach will focus on achieving an extreme ROC of 12.5 mm to enable a transformative reduction in the size and weight compared to current imagers. Unlike the first approach, researchers will explore possible modifications to the underlying design, including physical modifications to the silicon that could relieve or eliminate stress on the material and allow for creating the desired curvature in a smaller sized FPA. This approach will also require new methods to counter the effects of any modifications during image reconstruction in the underlying ROIC algorithm.
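The scale of the mechanical challenge can be estimated from simple beam bending: for a die of thickness t curved to radius R, the peak surface strain is roughly t/(2R). A back-of-envelope sketch; the 100 µm die thickness is an assumed value for illustration, not a FOCII specification.

```python
def surface_strain(thickness_m, radius_m):
    """Peak bending strain at the die surface (dimensionless),
    epsilon = t / (2 * R) for a thin plate bent to radius R."""
    return thickness_m / (2.0 * radius_m)

t = 100e-6                             # assumed die thickness: 100 um
eps_70mm = surface_strain(t, 70e-3)    # first approach, R = 70 mm
eps_12mm = surface_strain(t, 12.5e-3)  # extreme-curvature approach
# eps_70mm is ~0.07% while eps_12mm is ~0.4%: the extreme radius imposes
# roughly 5.6x the strain, which is why the second approach allows
# modifications to the silicon itself to relieve stress.
```

Thinning the die reduces strain proportionally, which is one reason curved-sensor work favors aggressively thinned silicon.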

Thanks to TL for the link!

Thursday, September 05, 2019

LiDAR News: Lumotive, LeiShen, CoreDAR, Hitachi

GlobeNewswire: Lumotive, a Bill Gates-funded LiDAR startup, used Himax’s LCOS display with Lumotive’s patented Liquid Crystal Metasurfaces (LCMs) to improve the performance, reliability and cost of LiDAR systems. Other LiDAR sensors utilize MEMS mirrors or optical phased arrays, but both approaches are limited in performance by the small optical aperture of MEMS mirrors and the low efficiency of phased arrays. In a first for LiDAR, Lumotive leverages Himax’s unique, tailor-made LCOS process to convert semiconductor chips into dynamic displays that steer laser pulses based on the light-bending principles of metamaterials.

Lumotive’s LiDAR systems offer performance advantages, including a combination of:
  • Large optical aperture (25 x 25 mm) which delivers long range
  • 120-degree FoV with high angular resolution
  • Fast, random-access beam steering
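The light-bending principle is essentially that of a blazed phase grating: a repeating linear phase ramp of period Λ deflects light by sin θ = λ/Λ, so a shorter ramp period steers further. A hedged sketch; the wavelength and periods are illustrative, not Lumotive's specifications.

```python
import math

def steering_angle_deg(wavelength_m, period_m):
    """First-order deflection angle of a blazed phase grating,
    theta = asin(lambda / period)."""
    return math.degrees(math.asin(wavelength_m / period_m))

wl = 905e-9   # a wavelength commonly used in automotive LiDAR
# Sweep a few hypothetical phase-ramp periods written onto the LCM:
angles = [steering_angle_deg(wl, p) for p in (5e-6, 2e-6, 1e-6)]
# shorter phase-ramp period -> larger steering angle
```

Because the phase ramp is written electronically onto the liquid-crystal surface, changing the period (and hence the angle) requires no moving parts, which is what enables the random-access beam steering listed above.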

Leishen Intelligent System presents its broad range of low-cost LiDARs. The automotive-grade hybrid LiDAR CH16 3D is priced at $599 in quantities of 10,000:

Update: LeiShen kindly sent me their price list for small quantity purchases:

CoreDAR presents its tiny LiDAR concept:

Hitachi presents its view on LiDAR's role in smart city applications:

Wednesday, September 04, 2019

Samsung Exynos 980 Supports 108MP Camera

Samsung 5G 8nm Exynos 980 mobile processor supports up to 108MP camera:

"For advanced photography, the Exynos 980 delivers compelling camera performances with resolution support for up to 108-megapixels (Mp). The advanced image signal processor (ISP) supports up to five individual sensors and is able to process three concurrently for richer multi-camera experiences. Along with the NPU, the AI-powered camera is able to detect and understand scenes or objects, according to which the camera will then make optimal adjustments to its settings.

For an immersive multimedia experience, the Exynos 980’s multi-format codec (MFC) supports encoding and decoding of 4K UHD video at 120 frames per second (fps). HDR10+ support with dynamic mapping also offers more detailed and illuminant colors in video content.

sCMOS Sensors: Fairchild Imaging vs GPixel

The paper "Evaluation of scientific CMOS sensors for sky survey applications" by S. Karpov, A. Bajat, A. Christov, and M. Prouza from the Czech Academy of Sciences compares Andor cameras based on the Fairchild Imaging CIS2051 (Neo camera) and GPixel GSense400BSI (Marana camera) sCMOS sensors:

"Scientific CMOS image sensors are a modern alternative to typical CCD detectors, as they offer low read-out noise, a large sensitive area, and high frame rates. All of this makes them promising devices for modern wide-field sky surveys. However, the peculiarities of CMOS technology have to be properly taken into account when analyzing the data. In order to characterize these, we performed extensive laboratory testing of the Andor Marana sCMOS camera. Here we report its results, especially on temporal stability and linearity, and compare it to previous versions of Andor sCMOS cameras. We also present the results of on-sky testing of this sensor connected to a wide-field lens, and discuss its applications for astronomical sky surveys."
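One standard lab technique behind this kind of characterization is the photon transfer (mean-variance) method: for a shot-noise-limited sensor, variance vs. mean signal is linear with slope 1/gain and intercept equal to the read-noise variance. A sketch on synthetic data; this is a generic illustration, not the paper's actual procedure or numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
GAIN = 2.0          # true conversion gain (e-/DN) used to synthesize data
READ_DN = 5.0       # true read noise in DN

means, variances = [], []
for mean_e in [50, 100, 200, 400, 800]:            # flat-field levels, e-
    frames = (rng.poisson(mean_e, size=100_000) / GAIN
              + rng.normal(0, READ_DN, size=100_000))
    means.append(frames.mean())
    variances.append(frames.var())

# Linear fit: variance = mean / gain + read_var
slope, intercept = np.polyfit(means, variances, 1)
gain_est = 1.0 / slope              # conversion gain estimate, e-/DN
read_est = np.sqrt(intercept)       # read noise estimate, DN
```

On real sCMOS data the same fit is done per pixel, since the per-pixel gain and noise spread is one of the "peculiarities of CMOS technology" the abstract refers to.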

Tuesday, September 03, 2019

e2v Announces Fast Sensors

GlobeNewswire: Teledyne e2v announces its Flash CMOS sensor family, tailored for 3D laser profiling/displacement applications and high speed, high resolution inspection.

The new Flash sensors feature a 6μm CMOS global shutter pixel that combines high resolution with fast frame rates. They are available in 4k or 2k horizontal resolution, with respective frame rates of 1800fps and 1500fps (8 bits) and respective readout speeds of 61.4Gbps and 25.6Gbps (said to be the best Gbps/price ratio on the market). The sensors come in a µPGA ceramic package fitting standard optical formats: APS-like optics for the 4k and C-mount for the 2k.

Yoann Lochardet, Marketing Manager for 3D at Teledyne e2v said, “We are very pleased to announce the release of the new Flash family of CMOS sensors, which were developed after listening closely to the requirements of leading companies in the market. These new sensors feature a unique set of characteristics targeted at 3D laser triangulation applications, including high resolution, very high frame rate, very high readout speed, HDR capability and a large set of additional features. All these capabilities allow our customers to solve the most challenging application demands in 3D laser profiling/displacement, such as quality control and 3D measurement.”

Evaluation Kits and samples of Flash 2K and Flash 4K are now available.

Omnivision Announces 2.2um Global Shutter Pixel with 40% QE at 940nm

PRNewswire: OmniVision announces the smallest-ever pixel size of 2.2um for a BSI global shutter (GS) image sensor. The new OG01A sensor combines the PureCel Plus-S pixel and Nyxel NIR technology to achieve a QE of 40% at 940nm and 60% at 850nm.

The OG01A is well-suited to multiple machine-vision applications, including AR/VR headsets, drones, robots, and SLAM, as well as facial authentication in smartphones and other consumer electronics. This technology is also ideal for automotive in-cabin driver state monitoring and eye tracking.

"The OG01A has the industry's smallest global shutter pixel and provides the best NIR performance in a GS sensor," said Devang Patel, senior staff marketing manager for the security and emerging segments at OmniVision. "There is a growing need for global shutter technology to accurately capture images of moving objects, along with excellent NIR performance and small size, in camera applications such as AR/VR headsets, drones, robots and smartphones. The OG01A delivers the industry's best combination of features for these applications."

The 1.3MP OG01A sensor provides 1280x1024 resolution at 120 fps and 640x480 resolution at 240 fps in a compact 1/5 inch optical format. Samples are available now.