Intel publishes a 1-pager on its autonomous vehicle platform:
"There are 12 cameras in a 360-degree configuration. Eight cameras support self-driving and four short-range cameras support near-field sensing for self-driving as well as self-parking. The camera is the highest resolution sensor (hundreds of millions of samples per second) and is the only sensor capable of detecting both shape (vehicles, pedestrians, etc.) and texture (road markings, traffic sign text, traffic light color, etc.). Advanced artificial intelligence and vision capabilities are able to build a full-sensing state from the cameras. This end-to-end capability is critical to achieve "true redundancy" in combination with other sensor types.
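The "hundreds of millions of samples per second" figure checks out with simple arithmetic. A minimal sketch, assuming typical automotive camera specs (the 12-camera count is from the quote; the resolution and frame rate are assumed, not stated by Intel):

```python
# Back-of-envelope check of the camera throughput claim.
CAMERAS = 12            # from the quoted one-pager
PIXELS = 1_920 * 1_280  # assumed ~2.5 MP automotive image sensor
FPS = 30                # assumed frame rate

samples_per_second = CAMERAS * PIXELS * FPS
print(f"{samples_per_second / 1e6:.0f} million samples/s")
```

Under these assumptions the suite produces roughly 885 million pixel samples per second, comfortably in the "hundreds of millions" range the quote claims.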
"There are six total "sector" lidars; three in front and three in rear. Lidar sensors are useful in detecting objects by measuring reflected laser light pulses. Lidar, in combination with radar, is used by the system to provide a fully independent source of shape detection. It works in addition to the camera system. Given our camera-centric approach, lidar only needs to be used for very specific tasks, primarily long-distance ranging and road contour. Limiting the workload for lidar results in much lower cost compared to lidar-centric systems; it also provides easier manufacturing and volume at scale."
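The ranging principle the quote relies on is time-of-flight: distance is half the round-trip time of a reflected pulse multiplied by the speed of light. A minimal illustrative sketch (the function name and the example pulse delay are my own, not from the one-pager):

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_pulse(round_trip_s: float) -> float:
    """Distance in meters to the object that reflected the pulse."""
    return C * round_trip_s / 2.0

# A pulse returning after ~1.33 microseconds implies roughly 200 m of range,
# the kind of long-distance ranging the quote assigns to lidar.
print(f"{range_from_pulse(1.334e-6):.1f} m")
```

The microsecond-scale delays are why lidar can resolve range directly and independently of the cameras, which infer depth rather than measure it.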