Friday, December 21, 2018

AEye Raises $40M, Unveils iDAR Product

BusinessWire: AEye, the developer of iDAR, announces the second close of its Series B financing, bringing the company's total funding to over $60 million. AEye's Series B round includes Hella Ventures, SUBARU-SBI Innovation Fund, LG Electronics, and SK Hynix. AEye previously announced that the round was led by Taiwania Capital along with existing investors Kleiner Perkins, Intel Capital, Airbus Ventures, R7 Partners, and an undisclosed OEM.

AEye's iDAR physically fuses 1550nm solid-state LiDAR with a high-resolution camera to create a new data type called Dynamic Vixels. This real-time integration occurs inside the iDAR sensor, rather than fusing separate camera and LiDAR data after the scan. By capturing both geometric and true color (x,y,z and r,g,b) data, Dynamic Vixels are said to mimic the data structure of the human visual cortex, capturing better data for superior performance and accuracy.
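AEye's actual Dynamic Vixel format is proprietary; the names and fields below are purely illustrative. Still, the basic idea of one record carrying both a LiDAR return's geometry and its co-registered camera color can be sketched like this:

```python
from dataclasses import dataclass

# Hypothetical sketch only - not AEye's real data structure.
@dataclass
class DynamicVixel:
    x: float  # LiDAR position in meters
    y: float
    z: float
    r: int    # camera color, 0-255
    g: int
    b: int

def fuse(lidar_point, camera_pixel):
    """Pair one LiDAR return with its co-registered camera pixel."""
    (x, y, z), (r, g, b) = lidar_point, camera_pixel
    return DynamicVixel(x, y, z, r, g, b)

vixel = fuse((12.4, -0.8, 1.5), (200, 180, 160))
```

The point of fusing at the sensor, rather than downstream, is that every record already carries both modalities, so later perception stages never have to re-align two separate streams.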

“This funding marks an inflection point for AEye, as we scale our staff, partnerships and investments to align with our customers’ roadmap to commercialization,” said Luis Dussan, AEye founder and CEO. “The support we have received from major players in the automotive industry validates that we are taking the right approach to addressing the challenges of artificial perception. Their confidence in AEye and iDAR will be borne out by the automotive-specific products we'll be bringing to market at scale in Q2 of 2019. These products will help OEMs and Tier 1s accelerate their products and services by delivering market-leading performance at the lowest cost.”

AEye's AE110 iDAR fuses 1550nm solid-state agile MOEMS LiDAR, a low-light HD camera, and embedded AI to intelligently capture data at the sensor level. The AE110's pseudo-random beam distribution search option makes the system eight times more efficient than fixed-pattern LiDARs. The AE110 is said to achieve 16 times greater coverage of the entire FOV at 10 times the frame rate (up to 100 Hz) due to its ability to support multiple regions of interest for both LiDAR and camera.
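The efficiency claim rests on the agile scanner spending its per-frame laser shot budget on camera-detected regions of interest instead of a uniform raster. A minimal sketch of that idea, with purely illustrative names and numbers (not AEye's implementation):

```python
# Hypothetical ROI-driven shot allocation. An agile LiDAR that can
# steer its beam arbitrarily can skip empty sky and static buildings
# and concentrate shots on high-priority regions of interest.

def allocate_shots(rois, shot_budget):
    """Split a per-frame laser shot budget across ROIs by priority weight."""
    total = sum(weight for _, weight in rois)
    return {name: round(shot_budget * weight / total) for name, weight in rois}

# ROIs as (label, priority weight); anything not listed gets no shots.
frame_rois = [("pedestrian", 5), ("vehicle", 3), ("road_edge", 2)]
plan = allocate_shots(frame_rois, shot_budget=10_000)
# -> {'pedestrian': 5000, 'vehicle': 3000, 'road_edge': 2000}
```

Because the shot budget per frame is fixed, concentrating it on a small fraction of the FOV is what enables higher effective resolution and frame rate inside those regions.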

2 comments:

  1. What is the physical scanning rate please?

    1. They claim they never need to scan the full frame. Rather, they only scan the "interesting parts" that are identified by a camera, possibly complemented by maps and GPS data. For example, nobody needs to send laser beams to the sky, which often occupies half the frame. Also, when driving around a city, nobody needs to scan buildings - they are in Google Maps anyway. The "interesting parts" for scanning are roads, cars, pedestrians, possible obstacles, etc. The RGB camera identifies them, and the LiDAR scans mostly these areas.

      So, there is no such thing as a fixed "scan rate"; it changes from one scene to another.

      It remains to be seen whether this approach actually works or not.

