On the imaging side, Ford announced four key investments and collaborations in advanced algorithms, 3D mapping, LiDAR, and radar and camera sensors:
- Velodyne: Ford has invested in Velodyne, the Silicon Valley-based company that develops light detection and ranging (LiDAR) sensors. The aim is to quickly mass-produce a more affordable automotive LiDAR sensor. Ford has a longstanding relationship with Velodyne, and was among the first to use LiDAR for both high-resolution mapping and autonomous driving, beginning more than 10 years ago.
- SAIPS: Ford has acquired the Israel-based computer vision and machine learning company to further strengthen its expertise in artificial intelligence and enhance computer vision. SAIPS has developed algorithmic solutions in image and video processing, deep learning, signal processing, and classification. This expertise will help Ford autonomous vehicles learn and adapt to their surroundings.
- Nirenberg Neuroscience LLC: Ford has an exclusive licensing agreement with Nirenberg Neuroscience, a machine vision company founded by neuroscientist Dr. Sheila Nirenberg, who cracked the neural code the eye uses to transmit visual information to the brain. This has led to a powerful machine vision platform for performing navigation, object recognition, facial recognition and other functions, with many potential applications. For example, it is already being applied by Dr. Nirenberg to develop a device for restoring sight to patients with degenerative diseases of the retina. Ford's partnership with Nirenberg Neuroscience will help bring humanlike intelligence to the machine learning modules of its autonomous vehicle virtual driver system.
- Civil Maps: Ford has invested in Berkeley, California-based Civil Maps to further develop high-resolution 3D mapping capabilities. Civil Maps has pioneered an innovative 3D mapping technique that is scalable and more efficient than existing processes. This gives Ford another way to develop high-resolution 3D maps of autonomous vehicle environments.
I ain't risking my life by trusting such cars; they need to be proven before I get near one.
True, this has to be proven many times over to address safety concerns.
What happens if two Velodyne LiDAR cars meet each other? Has this case been tested?
I'm fairly sure it won't be an issue most of the time. The bad case is two cars right next to each other with their rotations synced, so that the beam spots land next to each other and confuse the system. But the sensor rotates very fast, so the chance of an error in a huge cloud of data points is small, the impact is small, and the time window for this event is also short.
What exactly is the concern there?
I don't know the exact FOV for their autonomous application, but I'll assume they use the HDL-64 (their latest technology). The FOV of the HDL-64 is 26.8°, so the probability of FOV overlap for two nearby LiDAR cars is about 7%, which is acceptable. But if the number of surrounding LiDAR cars is 4, 5, or 7, the impact goes up pretty fast.
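The scaling argument above can be sketched in a few lines of Python. This is only a toy model under the commenter's own assumptions (per-car overlap probability taken as 26.8/360, roughly 7%, and cars treated as independent); real interference depends on geometry, timing, rotation phase, and the sensor's filtering, none of which this captures.

```python
# Toy model of FOV-overlap probability for N nearby LiDAR cars.
# Assumption (from the comment above, not from Velodyne specs):
# the chance one neighboring car's beam overlaps our FOV is
# p = 26.8 / 360, and cars interfere independently.
FOV_DEG = 26.8
P_SINGLE = FOV_DEG / 360.0  # ~0.074

def p_any_overlap(n_cars: int, p: float = P_SINGLE) -> float:
    """Probability that at least one of n_cars neighbors overlaps,
    i.e. the complement of none of them overlapping."""
    return 1.0 - (1.0 - p) ** n_cars

if __name__ == "__main__":
    for n in (1, 2, 4, 7):
        print(f"{n} nearby cars: P(overlap) = {p_any_overlap(n):.3f}")
```

With these assumptions the probability climbs from about 7% for one neighbor to over 40% for seven, which matches the commenter's point that the error rate grows quickly with the number of cars.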
This is similar to cell phone communication if everybody uses the same frequency. (Most LiDAR sensors use NIR light.)
It seems I made a mistake regarding the FOV, but my point stands: the error rate will go up with the number of cars.