Tuesday, November 13, 2018

AIT Uses Dynamic Vision Sensor in Panoramic Scanner

The AIT Austrian Institute of Technology presents its version of the Dynamic Vision Sensor (DVS):

"Unlike conventional image sensors the chip has no pixel readout clock but signals the detected changes instantaneously. This information is signalled as so-called “events” that contain the information of the responding pixels x-y addresses (address-event) in the imager array and the associated timestamp via a synchronous timed addressevent-representation (TAER) interface. The sensor can produce two types of events for each pixel: “On”-events for a relative increase in light intensity and “Off”-events for a relative decrease (see diagram)."


AIT also makes a 360° 3D scanner with its DVS sensor:


Thanks to TL for the links!

White Light Interferometric 3D Imager

The Heliotis HeliInspect H8 3D camera uses the company's next-generation 3D image sensor, the HeliSens S4:


Thanks to TL for the flyer!

Monday, November 12, 2018

Forza on Image Sensor Verification Challenges

Forza CAD Manager Kevin Johnson presents "CMOS Image Sensor Verification Challenges for Safety Critical Applications" at the Mentor Graphics U2U conference:

Andanta SWIR to Green Photon Converter

Andanta presents an InGaAs PD array combined with a green LED array in a single module, effectively converting SWIR photons to green photons with 3% efficiency:


Thanks to AB for the pointer!

Quantum Dot SWIR Cameras

SWIR Vision Systems introduces the Acuros family of low-cost SWIR cameras featuring colloidal quantum dot (CQD) sensing technology:


Thanks to AB for the pointer!

ams Pre-Releases Endoscopic Imagers

BusinessWire: ams pre-releases the NanEyeM (already announced last week) and NanEyeXS for single-use endoscopes in minimally invasive surgery.

The new 1mm2 NanEyeM offers a 100kpixel readout over an LVDS digital interface at a maximum rate of 49 fps at 62MHz. The NanEyeM, which is supplied as a Micro Camera Module (MCM) including a cable up to 2m long, features a custom multi-element lens which improves the effective resolution of the sensor and reduces distortion. Compared to the earlier NanEye 2D sensor, which has a single-element lens, the new NanEyeM offers improved MTF of more than 50% in the corners, lower distortion of less than 15%, and lower color aberration of less than 1Px.
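As a rough sanity check of these figures, the short sketch below relates the 100kpixel frame, the 49 fps maximum rate and the 62MHz interface clock, assuming, purely for illustration, about 10 bits per pixel on the serial link; the actual NanEyeM bit depth and framing overhead may differ:

```python
# Hypothetical link-budget check for the quoted NanEyeM figures.
pixels_per_frame = 100_000   # "100kpixel readout"
frame_rate_fps = 49          # quoted maximum frame rate
link_clock_hz = 62_000_000   # quoted interface clock
bits_per_pixel = 10          # assumption only; not stated in the announcement

payload_bps = pixels_per_frame * frame_rate_fps * bits_per_pixel
utilization = payload_bps / link_clock_hz  # assumes roughly one bit per clock on the LVDS link

print(f"payload ~ {payload_bps / 1e6:.1f} Mbit/s, "
      f"~ {utilization:.0%} of a {link_clock_hz / 1e6:.0f} MHz link")
# ~ 49.0 Mbit/s, ~ 79% of a 62 MHz link, leaving margin for sync and framing overhead
```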

The new NanEyeXS from ams has a 0.46mm2 footprint, making it one of the world's smallest image sensors. It produces a digital output in 40kpixel resolution at a maximum rate of 55 fps at 28MHz. Like the NanEyeM, the NanEyeXS is supplied as an MCM.

The NanEyeM is also available in surface-mount chip form.

“Medical endoscopy is a rapidly growing market and the demand for single-use devices is expected to increase, creating a clear need for cost-effective imaging solutions that offer a level of performance and image quality equal to that seen in reusable endoscopes. The NanEyeM and NanEyeXS modules were designed to meet this market need by offering a full package approach with exceptional imaging capabilities while retaining a cost competitive edge in high volumes for single-use endoscopes and catheter-based applications,” said Dina Aguiar, marketing manager for the NanEye products at ams. “These new additions to the NanEye family will complement the award-winning NanEye 2D, which pioneered the technological evolution of medical endoscopy. ams thus reinforces its position in the rapidly growing market for disposable endoscopy with unique products that will help further revolutionize patient care.”

The NanEyeXS and NanEyeM image sensors will be available for sampling in January 2019.

Saturday, November 10, 2018

2018 Harvest Imaging Forum Agenda

Albert Theuwissen announces the agenda of the Harvest Imaging Forum, to be held on December 6-7 in Delft, the Netherlands.

Day 1 of the forum is devoted to "Efficient embedded deep learning for vision applications," presented by Marian VERHELST (KU Leuven, Belgium):
  1. Introduction into deep learning
    From neural networks (NN) to deep NN
    Benefits & applications
    Training and inference with deep NN
    Types of deep NN
    Sparse connectivity
    Residual networks
    Separable models (see the sketch after this outline)
    Key enablers & challenges
  2. Computer architectures for deep NN inference
    Benefits and limitations of CPUs and GPUs
    Exploiting NN structure in custom processors
      Architecture level exploitation: spatial reuse in efficient datapaths
      Architecture level exploitation: temporal reuse in efficient memory hierarchies
      Circuit level exploitation: near/in memory compute
    Exploiting NN precision in custom processors
      Architecture level exploitation: reduced and variable precision processors
      Circuit level exploitation: mixed signal neural network processors
    Exploiting NN sparsity
      Architecture level exploitation: computational and memory gating
      Architecture level exploitation: I/O compression
  3. HW and SW optimization for efficient inference
    Co-optimizing NN topology and precision with hardware architectures
    Hardware modeling
    Hardware-aware network optimization
    Network-aware hardware optimization
  4. Trends and outlook
    Dynamic application-pipelines
    Dynamic SoCs
    Beyond deep learning, explainable AI
    Outlook
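As a toy illustration of the "Separable models" item in the Day 1 outline above, the sketch below compares the multiply-accumulate (MAC) count of a standard convolution layer with that of a depthwise-separable one; the layer dimensions are arbitrary and chosen only to show the arithmetic, and the code is not part of the forum material:

```python
def conv_macs(h, w, cin, cout, k):
    """MACs for a standard k x k convolution over an h x w output feature map."""
    return h * w * cin * cout * k * k

def separable_conv_macs(h, w, cin, cout, k):
    """MACs for a depthwise (k x k per channel) plus pointwise (1 x 1) convolution."""
    depthwise = h * w * cin * k * k
    pointwise = h * w * cin * cout
    return depthwise + pointwise

# Arbitrary example layer: 56x56 feature map, 128 -> 128 channels, 3x3 kernel
h, w, cin, cout, k = 56, 56, 128, 128, 3
std = conv_macs(h, w, cin, cout, k)
sep = separable_conv_macs(h, w, cin, cout, k)
print(f"standard: {std / 1e6:.1f} MMACs, separable: {sep / 1e6:.1f} MMACs, "
      f"reduction: {std / sep:.1f}x")
```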
Day 2 is devoted to "Image and Data Fusion," presented by Wilfried PHILIPS (imec and Ghent University, Belgium):
  1. Data fusion: principles and theory
    Bayesian estimation
    Priors and likelihood
    Information content, redundancy, correlation
    Application to image processing: recursive maximum likelihood tracking, pixel fusion (see the sketch after this outline)
  2. Pixel level fusion
    Sampling grids and spatio-temporal aliasing
    Multi-modal sensors, interpolation
    Temporal fusion and superresolution
    Multi-focal fusion
  3. Multi-camera image fusion
    Occlusion and inpainting
    Uni- and multimodal inter-camera pixel fusion
    Fusion of heterogeneous sources: camera, lidar, radar
    Applications: time-of-flight, hyperspectral, HDR, multiview imaging
    Fusion of heterogeneous sources: radar, video, lidar
  4. Geometric fusion
    Multi-view geometry
    Fusion of point clouds
    Image stitching
    Simultaneous localization and mapping
    Applications: remote sensing from drones and vehicles
  5. Inference fusion in camera networks
    Multi-camera calibration
    Occlusion reasoning for multiple cameras with overlapping viewpoints
    Multi-camera tracking
    Cooperative fusion and distributed processing
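As a minimal illustration of the Bayesian fusion principle listed under item 1 of the Day 2 outline, the sketch below fuses two noisy measurements of the same quantity by inverse-variance weighting, the maximum-likelihood estimate under independent Gaussian noise; the numbers are arbitrary and the code is not part of the forum material:

```python
def fuse(measurements, variances):
    """Inverse-variance (maximum-likelihood) fusion of independent Gaussian measurements."""
    weights = [1.0 / v for v in variances]
    fused = sum(w * m for w, m in zip(weights, measurements)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused, fused_variance

# Example: the same depth value seen by two sensors with different noise levels,
# e.g. a noisier stereo estimate and a cleaner ToF estimate (values are made up)
z, var = fuse([2.10, 1.90], [0.04, 0.01])
print(f"fused value = {z:.3f}, fused variance = {var:.4f}")
# The fused variance (0.008) is lower than either input, reflecting the information gain.
```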

Pyxalis Presents GS HDR Sensor

Pyxalis appears to be expanding its activity beyond custom image sensors into standard products. At the Vision Show in Stuttgart, Germany, the company presented a flyer for its Robin chips with 3.2um global shutter pixels, said to provide "artifact-free in-pixel HDR." The new sensor outputs ASIL data with every frame, making it suitable for automotive applications:


Thanks to AB for the photo from Pyxalis booth!

Friday, November 09, 2018

Ouster Discusses its LiDAR Principles

PRNewswire: Ouster unveils the details of its LiDAR technology. Several breakthroughs covered by recently granted patents have enabled Ouster's move toward state-of-the-art, high-volume, silicon-based sensors and lasers that operate in the near-infrared spectrum.

Ouster's multi-beam LiDAR is said to offer significant advantages over traditional approaches:

True solid state - Ouster's core technology is a two chip (one monolithic laser array, one monolithic receiver ASIC) solid state lidar core, which is integrated in the mechanically scanning product lines (the OS-1 and OS-2) and will be configured as a standalone device in a future solid state product. Unlike competing solid state technologies, Ouster's two chip lidar core contains no moving parts on the macro or micro scale while retaining the performance advantages of scanning systems through its multi-beam flash lidar technology.

Lower cost at higher resolution - Ouster's OS-1 64 sensor costs nearly 85% less than competing sensors, making it the most economical sensor on the market. In an industry first, Ouster has decoupled cost from increases in resolution by placing all critical functionality on scalable semiconductor dies.

Simplified architecture - Ouster's multi-beam flash lidar sensor contains a vastly simpler architecture than other systems. The OS-1 64 contains just two custom semiconductor chips capable of firing lasers and sensing the light that reflects back to the sensor. This approach replaces the thousands of discrete, delicately positioned components in a traditional lidar with just two.

Smaller size and weight - Because of the sensor's simpler architecture, Ouster's devices are significantly smaller, lighter weight and more power efficient, making them a perfect fit for unmanned aerial vehicles (UAVs), handheld and backpack-based mapping applications, and small robotic platforms. With lower power and more resolution, drone and handheld systems can run longer and scan faster for significant increases in system productivity.

In an article on the company's website, CEO Angus Pacala wrote, "I'm excited to announce that Ouster has been granted foundational patents for our unique multi-beam flash lidar technology which allows me to talk more openly about the incredible technology we've developed over the last three years and why we're going to lead the market with a portfolio of low-cost, compact, semiconductor-based lidar sensors in both scanning and solid state configurations."

US10063849 "Optical system for collecting distance information within a field" and US9989406 "Systems and methods for calibrating an optical distance sensor" disclose a LiDAR whose Tx side consists of an array of VCSEL lasers and whose Rx side is an array of SPADs. The VCSELs project a set of points onto the scene, while each SPAD has a small FOV aligned with its projection point in order to cut the ambient light. In addition, the Rx optics has a 2nm-narrow spectral filter to reject more of the ambient illumination. All of this is placed on a rotating platform:


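As a simplified illustration of how a SPAD-based direct time-of-flight receiver such as the one described in these patents could turn photon timestamps into a range estimate, the sketch below histograms SPAD detections and converts the peak bin into a distance; the bin width, the peak picking, and the numbers are illustrative assumptions, not Ouster's actual processing:

```python
from collections import Counter

C = 299_792_458.0  # speed of light, m/s

def range_from_timestamps(timestamps_ns, bin_ns=1.0):
    """Estimate target distance from SPAD photon-arrival timestamps (direct ToF).

    Ambient photons spread roughly uniformly over the histogram bins, while laser
    returns pile up in one bin; the round-trip time of that bin gives d = c * t / 2.
    """
    bins = Counter(int(t / bin_ns) for t in timestamps_ns)
    peak_bin, _ = max(bins.items(), key=lambda kv: kv[1])
    round_trip_s = (peak_bin + 0.5) * bin_ns * 1e-9
    return C * round_trip_s / 2.0

# Example: mostly ambient detections plus a cluster of returns near 666.5 ns (~100 m)
detections_ns = [13.0, 95.2, 240.7, 666.2, 666.5, 666.8, 810.5, 901.3]
print(f"estimated range ~ {range_from_timestamps(detections_ns):.1f} m")
```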
Angus Pacala also publishes an explanatory article on the company's blog on Medium and gives an interview to ArsTechnica. A few quotes:

"While our technology is applicable to a wide range of wavelengths, one of the more unique aspects of our sensors is their 850 nm operating wavelength. The lasers in a lidar sensor must overcome the ambient sunlight in the environment in order to see obstacles. As a result lidar engineers often choose operating wavelengths in regions of low solar flux to ease system design. Our decision to operate at 850 nm runs counter to this trend.

A plot of solar photon flux versus wavelength at ground level (the amount of sunlight hitting the earth versus wavelength) shows that at 850 nm there is almost 2x more sunlight than at 905 nm, up to 10x more sunlight than at 940nm, and up to 3x more sunlight than at 1550 nm — all operating wavelengths of legacy lidar systems.



We’ve gotten plenty of strange looks for our choice given that it runs counter to the rest of the industry. However, one of our patented breakthroughs is exceptional ambient light rejection which makes the effective ambient flux that our sensor sees far lower than the effective flux of other lidar sensors at other wavelengths, even accounting for the differences in solar spectrum. Our IP turns what would ordinarily be a disadvantage into a number of critical advantages:

  • Better performance in humidity
  • Improved sensitivity in CMOS: Silicon CMOS detectors are far more sensitive at 850 nm than at longer wavelengths. There is as much as a 2x reduction in sensitivity just between 850 and 905 nm. Designing our system at 850 nm allows us to detect more of the laser light reflected back towards our sensor which equates to longer range and higher resolution.
  • High quality ambient imagery
  • Access to lower power, higher efficiency technologies

...the flood illumination in a conventional flash lidar, while simpler to develop, wastes laser power on locations the detectors aren’t looking. By sending out precision beams only where our detectors are looking, we achieve a major efficiency improvement over a conventional flash lidar.

Our single VCSEL die has the added advantage of massively reducing system complexity and cost. Where other lidar sensors have tens or even hundreds of expensive laser chips and laser driver circuits painstakingly placed on a circuit board, Ouster sensors use a single laser driver and a single laser die. A sliver of glass no bigger than a grain of rice is all that’s needed for an OS-1–64 to see 140 meters in every direction. It’s an incredible achievement of micro-fabrication that our team has gotten this to work at all, let alone so well.

The second chip in our flash lidar is our custom designed CMOS detector ASIC that incorporates an advanced single photon avalanche diode (SPAD) array. Developing our own ASICs is key to our breakthrough performance and cost, but the approach is not without risk. Ouster’s ASIC team has distinguished themselves time and again and they’ve now delivered seven successful ASICs — each more powerful, more reliable, and more refined than the previous."
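As a back-of-the-envelope reading of the wavelength argument above, the sketch below combines the quoted relative solar flux (about 2x more at 850nm than at 905nm) with the quoted relative silicon sensitivity (about 2x higher at 850nm); the simple signal-over-ambient model is an illustration, not Ouster's analysis:

```python
# Relative figures quoted above, with 905 nm taken as the reference (= 1.0)
solar_flux = {850: 2.0, 905: 1.0}      # ~2x more sunlight at 850 nm than at 905 nm
si_sensitivity = {850: 2.0, 905: 1.0}  # silicon ~2x more sensitive at 850 nm

for wl in (850, 905):
    signal = si_sensitivity[wl]                    # detected laser return scales with sensitivity
    ambient = solar_flux[wl] * si_sensitivity[wl]  # detected sunlight scales with flux * sensitivity
    print(f"{wl} nm: relative signal {signal:.1f}, relative ambient {ambient:.1f}, "
          f"signal/ambient {signal / ambient:.2f}")

# Under this crude model the raw signal doubles at 850 nm while signal-to-ambient halves,
# which is why the article leans on per-pixel FOV matching and the narrow spectral filter
# to reject ambient light and recover the advantage.
```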

Thursday, November 08, 2018

Photoneo 3D Camera Wins Vision Show Award

Optics.org reports: "The winner of this year’s VISION Award, presented by Imaging & Machine Vision magazine, was named at the conference as Photoneo. Its new PhoXi 3D Camera is said to be the highest resolution and highest accuracy area based 3D camera available. It is based on Photoneo’s patented technology called Parallel Structured Light implemented by a custom CMOS image sensor.

The developer says this “novel approach” makes it the most efficient technology for high resolution scanning in motion. The key features of Parallel Structured Light include: scanning in a rapid motion – one frame acquisition, 40 m/s motion possible; 10x higher resolution and accuracy with more efficient depth coding technique with per pixel measurement possible; no motion blur resulting from its 10 µs per pixel exposure time; and rapid acquisition of 1068x800 point-clouds and texture up to 60 fps."

Photoneo claims that its custom designed image sensor is the key to the high performance of its 3D camera:

"Photoneo has developed a new technique of one frame 3D sensing that can offer high resolution common for multiple frame structured light systems, with fast, one frame acquisition of TOF systems. We call it Parallel Structured Light and it runs thanks to our exotic image sensor."


The company's patent application US20180139398 gives an update on the "exotic image sensor" relative to the earlier version circa 2014:


The 3D camera offers a nice trade-off between resolution and speed:




Update: IMVE also publishes an article on the Photoneo technology.