
Thursday, September 15, 2016

InVisage Launches Spark4K "Micro-LiDAR" for Drones

BusinessWire: InVisage says it achieves LiDAR-like performance with its Spark Micro-LiDAR (SML20) structured light module. The previously announced Spark4K 13MP, 1.1um NIR sensor enables the SML20 module to sense structured light patterns at a range of 20m, even in direct sunlight. With a sensor module measuring 8.5 x 8.5 x 4.5 mm, the SML20 fits drones and other mobile autonomous devices that require a lighter, more power-efficient alternative to conventional LiDAR without the limitations of ultrasonic and stereo-camera depth sensing systems.

“In order to perform autonomously at a high flight speed of 20 meters per second, drones and other unmanned vehicles require at least half a second to recognize an upcoming obstacle and another half a second to change trajectory or decelerate in order to avoid it. This means accurate ranging at 20 meters is crucial,” said Jess Lee, InVisage President and CEO. “SML20 is the only solution enabling obstacle avoidance at that distance without being weighed down by a traditional bulky LiDAR.”
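
As a quick sanity check of the figures in the quote, the arithmetic works out as follows (a minimal Python sketch; the 20 m/s speed and the two half-second intervals come from the quote above, nothing here is InVisage's actual flight-control logic):

```python
# Back-of-the-envelope check of the ranging requirement quoted above.
# Speed and reaction times are taken from the article; this is plain arithmetic,
# not InVisage's obstacle-avoidance implementation.

def required_detection_range(speed_m_s, recognition_time_s=0.5, maneuver_time_s=0.5):
    """Distance covered while the drone recognizes an obstacle and then reacts."""
    return speed_m_s * (recognition_time_s + maneuver_time_s)

print(required_detection_range(20.0))  # 20.0 -> matches the 20 m ranging claim
```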

Many obstacle avoidance systems have turned to mechanical and solid state LiDAR for depth sensing, but conventional LiDARs place high demands on weight, size and power budgets, making them unsuitable for drones. The cost of LiDAR can range from hundreds to tens of thousands of dollars per unit. Ultrasonic sensors and stereo cameras do offer more compact form factors than LiDAR, but ultrasonic systems offer only a sub-five-meter range and stereo cameras have high CPU demands and ranging capabilities limited by camera disparity. The SML20 eliminates the need to compromise, delivering effective collision avoidance with small size, minimal weight, and all-inclusive power consumption between 200 and 500 mW on average, depending on the range requirements of the application.

Single-camera obstacle avoidance systems use structured light to map their environments in 3D: Spark NIR sensors are paired with lasers that emit a specific pattern of light, and depth maps are captured by detecting modifications to that pattern. The SML20 delivers QuantumFilm’s increased sensitivity to 940nm NIR light (said to be five times that of silicon) at a 1.1um pixel size. This allows autonomous devices to perceive their surroundings with an accurate depth map fused with the sharpness of 4K 30fps video previously reserved for cinema cameras, in contrast to the limited information in the series of dotted outlines offered by LiDAR.
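
For readers unfamiliar with how a single camera plus a pattern projector yields depth, the underlying relation is the same triangulation used in stereo vision. Here is a hedged illustrative sketch; the focal length, baseline and disparity values are hypothetical example numbers, not SML20 parameters:

```python
import numpy as np

# Illustrative projector + single-camera structured-light triangulation.
# Focal length and baseline are made-up example values, not SML20 specs.
FOCAL_LENGTH_PX = 1400.0   # camera focal length in pixels (hypothetical)
BASELINE_M = 0.05          # projector-to-camera baseline in meters (hypothetical)

def depth_from_pattern_shift(disparity_px):
    """Depth from the shift between where a pattern feature is projected and
    where the camera observes it: Z = f * B / disparity."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return np.where(disparity_px > 0,
                    FOCAL_LENGTH_PX * BASELINE_M / disparity_px,
                    np.inf)

# Example: a feature shifted by 3.5 px corresponds to a point roughly 20 m away.
print(depth_from_pattern_shift([3.5, 7.0, 70.0]))  # [20. 10.  1.]
```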

Conventional structured light cameras have struggled to perform accurately outdoors or in bright sunlight because more than half of sunlight is in the infrared spectrum. In the resulting wash of infrared, silicon-based camera sensors easily saturate and fail to detect the structured light patterns their devices emit. Optimized for the invisible NIR 940-nanometer wavelength, SML20 takes advantage of the fact that water in the atmosphere absorbs most of the 940nm IR light in sunlight, minimizing solar interference with structured light systems.

In combination with this wavelength optimization, SML20’s 1.1um pixels have a global electronic shutter — said to be the only one of its kind at this pixel size. With global shutter, a structured light source can be pulsed in sync with a fast exposure, allowing for 20m ranging with high solar irradiance rejection while remaining eye-safe and low power.
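
A rough way to see why the pulsed-source / global-shutter combination helps: the ambient (solar) charge a pixel collects scales with exposure time, while the synchronized laser pulse delivers its energy regardless of how short the exposure is. The numbers in this sketch are invented for illustration, not measured SML20 figures:

```python
# Why syncing a pulsed emitter with a short global-shutter exposure rejects
# sunlight: ambient electrons scale with exposure time, pulse electrons do not.
# All rates below are hypothetical.

AMBIENT_RATE_E_PER_S = 1.0e6   # ambient photo-electrons per pixel per second (assumed)
PULSE_SIGNAL_E = 200.0         # photo-electrons per pixel from one laser pulse (assumed)

def signal_to_ambient_ratio(exposure_s):
    """Ratio of pulsed-laser signal to ambient light collected in one exposure."""
    return PULSE_SIGNAL_E / (AMBIENT_RATE_E_PER_S * exposure_s)

for exposure_s in (10e-3, 1e-3, 100e-6):
    print(f"{exposure_s * 1e3:6.2f} ms exposure -> signal/ambient = "
          f"{signal_to_ambient_ratio(exposure_s):.2f}")
```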

The SML20 is said to be only the beginning — extended range options at 100 meters and beyond are promised in the coming quarters.

4 comments:

  1. How can a sensor with a normal F-number, say F2.0, not be saturated in sunshine with a 10ms exposure time?

    1. They might average a series of short exposure frames

    2. Per the article:
      "SML20 takes advantage of the fact that water in the atmosphere absorbs most of the 940nm IR light in sunlight, minimizing solar interference with structured light systems."

  2. It seems the novel part is the structured light.

    "stereo cameras have high CPU demands and ranging capabilities limited by camera disparity"
    This is incorrect, as stereo with two cameras and a setup with one camera and one calibrated light source are equivalent, with only minor differences in data density.

    Stereo cameras combined with uncalibrated structured light, like the one mentioned above, would handle occlusion better, but at additional cost.

