Sunday, September 22, 2019

RED Camera Sensor Conspiracy Theories

Jinni.Tech publishes its investigation of who supplies RED camera sensors:

Thanks to ED for the pointer!

Epticore and Opnous

There are a number of ToF startups in China. Opnous has been mentioned here a couple of months ago. Apparently, the company has licensed a ToF sensor from Brookman (Japan). A recent company presentation tells more about the company's technology and differentiation:

Epticore Microelectronics is another startup in China developing ToF technology. The company's presentation shows its products and plans:

Saturday, September 21, 2019

WLO for Next Generation Under-Display Fingerprint Modules

IFNews: A GF Securities report on the CIS industry shows the next-generation under-display fingerprint module that uses WLO to reduce thickness. GF Securities forecasts ultra-thin under-display fingerprint sensor shipments of 94M units in 2020, accounting for 7% of global smartphone shipments.

Espros pToF Presentation

Espros presentation at MEMS Consulting seminar in China gives interesting info on the company's long range solution for automotive LiDARs:

Friday, September 20, 2019

ToF Sensors in Mobile Devices

TheElec reports that Sony ToF sensors inside LG Innotek modules will be used in Apple's 2020 iPad and iPhone models. The Samsung Galaxy S10 5G and Galaxy Note 10+ use Sony ToF sensors too.

LG G8 ThinQ smartphones use ToF sensor from PMD-Infineon combined with ams VCSEL:

Autosens Brussels Awards

Autosens Brussels announces its awards in several categories, some of them image sensing-related:

Most Exciting Start-up – sponsored by Sony Semiconductor Solutions
  • Winner: TriEye
  • Silver: Outsight
  • Silver: WaveSense
Best in Class Perception System – sponsored by Varroc Lighting Systems
  • Winner: OmniVision Technologies – OAX4010 ISP ASIC
  • Silver: General Motors – Transparent Trailer for 2020 GMC Sierra and Silverado HD
  • Silver: Innoviz – InnovizOne LiDAR
Most Innovative In-Cabin Application
  • Winner: Daimler – MBUX Interior Assistant
  • Silver: Eyeris – In-vehicle Scene Understanding AI
  • Silver: Seeing Machines – FOVIO Driver Monitoring Technology

Thursday, September 19, 2019

Wednesday, September 18, 2019

Interview with Eric Fossum

Art19 publishes an hour-long interview with Eric Fossum:

OmniVision's Automotive SoC Claimed to Have Industry's Best Low-Light Performance, Lowest Power, and Smallest Size

PRNewswire: OmniVision announces the 1.3MP OX01F10 SoC, a 1/4", 3.0um pixel image sensor with an integrated ISP for automotive rear view camera (RVC) and surround view system (SVS) applications.

"Analysts predict that SVS and RVC will continue to hold the majority share in the automotive camera market, with over 50% of the total market volume through 2023. SVS, in particular, is expected to double its growth between now and 2023 due to increased customer adoption," said Andy Hanvey, director of automotive marketing at OmniVision. "Our OX01F10 SoC provides the best option for automotive designers responding to this growing consumer demand for better RVCs, along with the expansion of SVS into the mainstream market. Additionally, this SoC's functional safety features allow module providers to create a single platform for both the viewing cameras and the machine vision applications that require ASIL B."

OmniVision's dual conversion gain (DCG) technology is employed in this SoC to achieve a high dynamic range of 120dB with only two captures, as opposed to the three required by the competition, which minimizes motion artifacts while reducing power consumption and boosting low-light performance. The OX01F10 features typical power consumption of less than 300mW, which is said to be 30% lower than competing solutions.

Its integrated ISP features:
  • Lens chromatic aberration correction
  • Advanced noise reduction and local tone mapping
  • Optimizations for the on-chip image sensor's PureCel Plus technology

PureCel Plus technology provides what is said to be an industry-best SNR1 of 0.19 lux. As a result, the OX01F10 is claimed to outperform the competition across challenging lighting conditions. The OX01F10 SoC is AEC-Q100 Grade 2 certified, and samples are available now.
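For intuition on where a figure like 120dB comes from: dynamic range in dB is 20·log10 of the ratio between the largest signal a pixel can record and its noise floor, and a DCG pair widens it by combining the full well of the low-gain capture with the noise floor of the high-gain capture. A toy calculation with hypothetical values (not OmniVision's actual full-well or noise figures):

```python
import math

def dynamic_range_db(max_signal_e, noise_floor_e):
    """Dynamic range in dB: 20 * log10(largest signal / noise floor)."""
    return 20.0 * math.log10(max_signal_e / noise_floor_e)

# Hypothetical DCG pair: the low-gain capture sets the full-well capacity,
# the high-gain capture sets the effective noise floor.
low_gain_full_well_e = 100_000   # electrons (assumed)
high_gain_noise_e = 0.1          # electrons (assumed)

dr = dynamic_range_db(low_gain_full_well_e, high_gain_noise_e)  # about 120dB
```

A single-gain capture with, say, a 10,000e- full well and 2e- read noise would reach only about 74dB, which is why HDR schemes combine multiple captures or gains.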

Tuesday, September 17, 2019

Outsight Develops 3D LiDAR-Spectrometer-on-Wheels for Automotive Applications

BusinessWire: Outsight launches its 3D Semantic Camera for autonomous driving and other industries. Outsight was founded by Raul Bravo, co-founder and CEO of Dibotics, and Cedric Hutchings, co-founder of Withings and former VP of Nokia Technologies, who joined forces to create a new entity combining the software assets of Dibotics with 3D sensor technology. Together with Dibotics' other co-founder Oliver Garcia and Scott Buchter, co-founder of Lasersec, the four have assembled a global team of top talent in San Francisco, Paris, and Helsinki to turn their vision into reality.

“We are excited to unveil our 3D Semantic Camera that brings an unprecedented solution for a vehicle to detect road hazards and prevent accidents.” - Cedric Hutchings, CEO and Co-founder of Outsight.

Outsight's 3D Semantic Camera is said to bring full situation awareness and new levels of safety and reliability to currently human-controlled machines such as Level 1-3 ADAS, construction/mining equipment, and helicopters, and also to accelerate the emergence of fully automated smart machines such as Level 4-5 self-driving cars, robots, drones, and autonomous flying taxis.

"Our 3D Semantic Camera is not only able to tackle current driving safety problems, but brings driving safety to new levels. By unveiling the full reality of the world and providing information that was previously invisible, we at Outsight are convinced that a whole new world of applications will be unleashed. This is just the beginning." - Raul Bravo, President and Co-founder of Outsight.

The technology is said to be the first of its kind to provide full situation awareness in a single device. It’s a mass-producible, “all in one solution” technology with the ability to simultaneously perceive and comprehend the environment from hundreds of meters away, including the key chemical composition of objects (skin, cotton, ice, snow, plastic, metal, wood...).

This is partly made possible by the development of a low-power, long-range, eye-safe broadband laser that allows material composition to be identified through active hyperspectral analysis. Combined with its 3D SLAM-on-chip capability (Simultaneous Localization and Mapping), Outsight's technology can unveil the full reality of the world in real time. The 3D Semantic Camera provides actionable information and object classification through an onboard SoC that does not rely on machine learning, reducing the power consumption and bandwidth needed. This approach eliminates the need for massive training data sets and replaces guesswork with actual measurement of the objects. Being able to determine an object's material adds a new level of confidence in determining what the camera is actually seeing.

It is able not only to see and measure, but to comprehend the world: it provides the position, size, and full velocity of all moving objects in its surroundings, valuable information for path planning and decision making. The 3D Semantic Camera can also report important information about road conditions, for example identifying black ice and other hazardous surfaces, a feature that is vital for safety in ADAS applications. The system can also quickly identify pedestrians and bicyclists through its material identification capability.

Outsight has already started joint development programs with key OEMs and Tier1 providers in Automotive, Aeronautics and Security-Surveillance markets and will progressively open the technology to other partners in Q1-2020.

Thanks to JB for the info!

Synopsys, Himax Announce AI Vision Processor

GlobeNewswire: Himax announces the WiseEye WE-I Plus, an AI accelerator-embedded ASIC platform solution to develop and deploy CNN-based machine learning (ML) models on AIoT applications including smart home appliances and surveillance systems.

The WiseEye WE-I Plus ASIC adopts a programmable processor with enhanced DSP features and power-efficient CDM, HOG, and JPEG hardware accelerators for real-time motion detection, object detection, and image processing. To address the rising security risks surrounding AIoT applications, the WiseEye WE-I Plus is equipped with comprehensive, integrated hardware and software security solutions such as secure boot, secure OTA updates, and secure metadata output over TLS. To meet the demand for ultra-low power consumption and long battery life, in addition to the low-power-driven ASIC design, the embedded LDO and multi-state PMU have been purpose-built to support shutdown, AoS (always-on sensing), and CV-efficient operation modes. Furthermore, an associated software library with a comprehensive tool chain is provided for efficient implementation of ML technology when processing captured data from image, voice, and ambient sensors.

“The demand for battery-powered smart devices with AI-enabled intelligent sensing is rapidly growing, especially in markets such as home appliances, door locks, TVs, notebooks and building control or security. Our WiseEye WE-I Plus ASIC platform solution can be used with popular ML frameworks for the development of a wide range of applications in audio, video and signal processing where power is a strict constraint and on-device memory is limited. We are receiving positive feedback from our partners and leading industry players,” said Jordan Wu, President and CEO of Himax.

The chip is based on Synopsys ARC EV7x Vision Processor IP:

Sony Officially Rejects Call to Spin Off Image Sensor Business

PRNewswire: Sony publishes a "Letter from the CEO to Sony’s Shareholders and All Stakeholders" rejecting the possibility of spinning off its image sensor business:

"...On June 13, 2019, Third Point LLC (“Third Point”) issued a public letter to investors suggesting that Sony should consider spinning-off and publicly listing our semiconductor business, which would effectively separate Sony into an entertainment company and a semiconductor (technology) company. We appreciate Third Point’s strong interest in Sony and welcome the fact that many people have been reminded of the value and further growth opportunities of that business.

Sony’s Board and management team, along with external financial and legal advisors in Japan and the U.S., conducted an extensive analysis of Third Point’s recommendations. Following this review, Sony’s Board, which is comprised of a majority of independent outside directors with diverse experience in a variety of industries, unanimously concluded that retaining the semiconductor business (now called the Imaging & Sensing Solutions (“I&SS”) business) is the best strategy for enhancing Sony’s corporate value over the long term. This is based on the fact that the I&SS business is a crucial growth driver for Sony that is expected to create even more value going forward through its close collaboration with the other businesses and personnel within the Sony Group. The Board also reaffirmed that to maintain and further strengthen its own competitiveness, it would be best for the I&SS business to stay within the Sony Group.

In its letter, Third Point described our semiconductor business, which is centered on image sensors, as a “Japanese crown jewel and technology champion.” Sony’s Board and management team share this view and are excited about the immense potential the I&SS business brings Sony. We expect it to not only further expand its current global number one position in imaging applications, but also continue to grow in new and rapidly developing markets such as the Internet of Things (“IoT”) and autonomous driving. We also expect it will contribute to the creation of a safer and more reliable society through its innovative technology.

While Sony’s Board and management team do not agree with Third Point’s recommendation to spin-off and publicly list the I&SS business, we will continue to proactively evaluate Sony’s business portfolio, pursue asset optimization within each business, and supplement our public disclosures as we execute on our strategy to increase shareholder value over the long term.

...Our strategy for future growth of the I&SS business is to develop AI sensors which make our sensors more intelligent by embedding artificial intelligence (AI) into the sensors themselves. We envisage AI and sensing being used across a wide range of applications such as IoT, autonomous driving, games and advanced medicine, and believe there is a potential for image sensors to evolve from the hardware they are today, to a solutions and platforms business.

...Our analysis, which was carried out in collaboration with outside financial advisors, also identified multiple meaningful sources of dis-synergy if the I&SS business was to separate from Sony and operate as a publicly listed independent company. These dissynergies include increased patent licensing fees, reduced ability to attract talent, increased costs and management resources as a publicly listed company, and tax inefficiencies, in addition to the time required for making the public listing.

Monday, September 16, 2019

NHK Future TV Technology Relies on 3D Vision

NHK STRL presentation from May 2019 talks about the company's vision for 2030-40 TV technology where 3D imaging takes a central role:

CNN Processor in Every Pixel

Manchester and Bristol Universities, UK, publish paper "A Camera That CNNs: Towards Embedded Neural Networks on Pixel Processor Arrays" by Laurie Bose, Jianing Chen, Stephen J. Carey, Piotr Dudek, and Walterio Mayol-Cuevas (see the video presentation in an earlier post.)

"We present a convolutional neural network implementation for pixel processor array (PPA) sensors. PPA hardware consists of a fine-grained array of general-purpose processing elements, each capable of light capture, data storage, program execution, and communication with neighboring elements. This allows images to be stored and manipulated directly at the point of light capture, rather than having to transfer images to external processing hardware. Our CNN approach divides this array up into 4x4 blocks of processing elements, essentially trading-off image resolution for increased local memory capacity per 4x4 "pixel". We implement parallel operations for image addition, subtraction and bit-shifting images in this 4x4 block format. Using these components we formulate how to perform ternary weight convolutions upon these images, compactly store results of such convolutions, perform max-pooling, and transfer the resulting sub-sampled data to an attached micro-controller. We train ternary weight filter CNNs for digit recognition and a simple tracking task, and demonstrate inference of these networks upon the SCAMP5 PPA system. This work represents a first step towards embedding neural network processing capability directly onto the focal plane of a sensor."
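The core trick in the abstract, restricting weights to the ternary set {-1, 0, +1} so that convolution needs only the additions and subtractions a PPA processing element can perform, can be sketched in plain Python. This is a toy illustration of the idea, not the authors' SCAMP5 implementation:

```python
def ternary_conv2d(image, kernel):
    """Valid 2D correlation with ternary weights.

    Every weight is -1, 0, or +1, so each "multiply" reduces to a
    subtraction, a no-op, or an addition, matching the operations
    available to a pixel-processor-array element on locally stored data.
    """
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    w = kernel[i][j]
                    if w == 1:
                        acc += image[y + i][x + j]
                    elif w == -1:
                        acc -= image[y + i][x + j]
                    # w == 0: skip, no operation needed
            out[y][x] = acc
    return out

# A horizontal gradient kernel applied to a 1x3 strip: 1*1 + 0*2 - 1*3 = -2
result = ternary_conv2d([[1, 2, 3]], [[1, 0, -1]])
```

In the paper's scheme these accumulations happen in parallel across the array, with each 4x4 block of processing elements trading resolution for the per-"pixel" memory needed to store intermediate results.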

Omnivision Connects Arm ISP IP with its Automotive Sensor

PRNewswire: OmniVision has combined its OX03A1Y sensor with an FPGA-based Arm Mali-C71 ISP in a dual-mode automotive camera module.

"OmniVision's dual-mode image sensor showcases the Mali-C71's ability to process multiple real-time inputs with one pipeline, capturing both human display and computer vision images with a single image sensor, at the highest possible quality," said Tom Conway, director of product management, automotive and IoT Line of Business, Arm.

"Arm's ISP intellectual property is an important part of the automotive ecosystem, and they are a key partner for OmniVision," said Celine Baron, staff automotive product marketing manager at OmniVision. "This collaboration demonstrates the high performance that can be achieved by combining our premium 2.5MP image sensor with Arm's ISP, for automotive applications that need both computer vision and human displays from a single camera module."

OmniVision and Arm used an FPGA emulating the Mali-C71 ISP to simultaneously process images captured by the OX03A1Y sensor for both computer vision and human displays. This sensor uses an RCCB clear color filter pattern to capture high quality images in all lighting conditions. The Mali-C71 then processes the data concurrently, outputting two simultaneous image signals for both human viewing and machine vision.

The OX03A1Y is the industry's first image sensor to feature a 3.2um pixel with 120dB HDR, dual conversion gain (DCG), and an RCCB color filter. DCG provides motion-free HDR up to ~85dB for the best images when vehicles are in motion. The RCCB color filter lets in more light, which, in combination with the OmniBSI-2 pixel, delivers low-light performance with SNR1 at 0.09 lux while maintaining low power consumption. This is the first sensor to integrate all three capabilities, and the OX03A1Y is already shipping in volume to automotive customers.

The OX03A1Y is available in a small 8.0 x 7.2mm chip-scale package, which is 35% smaller than competing image sensors. Additionally, this image sensor's power consumption is 20% lower than the competition.

The 2.5MP OX03A1Y image sensor integrates advanced ISO 26262 ASIL B functional safety features.

Sunday, September 15, 2019

QIS Sensors to Help NASA Missions

EurekAlert: NASA is awarding a team of researchers from Rochester Institute of Technology and Dartmouth College a grant to develop a detector capable of sensing and counting single photons for future astrophysics missions. The detector leverages Quanta Image Sensor (QIS) technology and measures every photon that strikes the image sensor. While other sensors have been developed to see single photons, the QIS has several advantages, including room-temperature operation, radiation resistance, and low power consumption.
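For context on how photon counting works in a QIS: the sensor reads out vast arrays of single-bit "jots", each reporting only whether at least one photon arrived during a frame. Under Poisson statistics the probability of a 1 is 1 - exp(-H), where H is the mean photons per jot per frame, so the flux can be recovered from the fraction of ones across many bit planes. A minimal sketch of that inversion (illustrative only, not the RIT/Dartmouth design):

```python
import math

def estimate_exposure(bit_density):
    """Recover the quanta exposure H (mean photons per jot per frame)
    from the measured fraction D of jot readings that are '1'.

    A single-bit jot outputs 1 if one or more photons arrived, so
    P(bit = 1) = 1 - exp(-H); inverting gives H = -ln(1 - D).
    """
    return -math.log(1.0 - bit_density)

# If half of all jot readings are '1', the flux was ln(2), about 0.69
# photons per jot per frame, not 0.5, because multi-photon arrivals
# are clipped to a single '1' by the one-bit readout.
h = estimate_exposure(0.5)
```

The nonlinearity matters at high flux: as the bit density approaches 1, small changes in D correspond to large changes in H, which is one reason QIS imaging averages over many bit planes.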

"This will deliver critical technology to NASA, its partners and future instrument principal investigators," said Don Figer, director of RIT's Center for Detectors, the Future Photon Initiative and principal investigator for the grant. "The technology will have a significant impact for NASA space missions and ground-based facilities. Our detectors will provide several important benefits, including photon counting capability, large formats, relative immunity to radiation, low power dissipation, low noise radiation and pickup, lower mass and more robust electronics."

The project's co-investigators include RIT Assistant Professor Michael Zemcov and Dartmouth Professor Eric R. Fossum. Fossum has focused on inventing the QIS technology, while RIT is leading application-specific development that leverages its expertise in astrophysics.

"We're excited for this collaboration with RIT to build upon Dartmouth's proof-of-concept QIS technology to research and develop instrument-grade sensors that can detect single photons in the dimmest possible light," Fossum said. "This has tremendous implications for astrophysics and enables NASA scientists to collect light from extremely distant objects."

The researchers will develop the technology over the next two years. The Center for Detectors will publish results, reports and data processing and analysis software on their website at

Thursday, September 12, 2019

CCD vs CMOS in Display QC Application

Radiant Vision, a Konica Minolta company, publishes an interesting comparison of CCD and CMOS cameras in display quality control applications:

Wednesday, September 11, 2019

Harvest Imaging Forum is 75% Full

The Harvest Imaging Forum, to be held in December 2019 in Delft, the Netherlands, is quickly approaching fully booked status: more than 75% of the seats have been sold. This year's Forum topics are:

  • "On-Chip Feature Extraction for Range-Finding and Recognition Applications" by Makoto IKEDA (Tokyo University, Japan)
  • "Direct ToF 3D Imaging: from the Basics to the System" by Matteo PERENZONI (FBK, Trento, Italy)

Image Sensors for Machine Vision

ON Semi publishes a webinar "The Current State of Machine Vision Technology: Image Sensor Challenges and Selection."

BusinessWire: ON Semi also announces a 0.3MP machine vision sensor with 2.2um BSI pixels, the 1/10-inch ARX3A0. The new sensor has a 1:1 aspect ratio and features ON Semiconductor’s NIR+ technology.

The 560 x 560 pixel sensor can operate at up to 360fps. It consumes less than 19mW when capturing images at 30fps, and 2.5mW at 1fps.

Gianluca Colli, VP and GM, Consumer Solution Division of Image Sensor Group at ON Semiconductor said: “As we approach an era where Artificial Intelligence (AI) is becoming an integral part of vision-based systems, it becomes clear that we now share this world with a new kind of intelligence. The ARX3A0 has been designed for that new breed of machine, where vision is as integral to their operation as it is ours.”

Tuesday, September 10, 2019

MCT and Microbolometric Imagers in China

China has made a lot of advances in cooled MCT and microbolometric imagers, including resolutions up to 2.7K x 2.7K and pixel sizes down to 10um. These imagers from Norinco, CETC, iRay, GST, HikVision, and Dali were presented at the CIOE Show held in Shenzhen, China, last week:

-Norinco picture removed due to the absence of publishing permission-

Thanks to AB for the info!

Monday, September 09, 2019

Sony Unveils 61MP Full-Frame and 26MP APS-C Sensors for Security Applications

Sony unveils four new sensors for security and surveillance applications: the IMX415-AAMR, IMX455AQK-K, IMX533CQK-D, and IMX571BQR-J.

Sunday, September 08, 2019

UBS: Galaxy S10 5G Cameras Cost $73

IFNews: According to a UBS report, the Samsung Galaxy S10 5G cameras, including the ToF ones, cost $73. The cameras are the second most expensive component after the display:

Front cameras:
  • Selfie Camera
  • ToF Depth Camera

Rear cameras:
  • Telephoto Camera
  • Wide-angle Camera
  • Ultra Wide Camera
  • ToF Depth Camera