EETimes publishes a nice summary of image and vision processor papers at the upcoming ISSCC:
"Image processing in fact is one of the most popular ISSCC topics, appearing in the session on “Digital Processors” and again in the session on “Next-Generation Processing.” While dramatic new application areas for image processors include gesture-recognition and augmented reality, automotive driving assistance systems (ADAS) are among the most popular. With research and development on autonomous vehicles increasing, the need for faster detection of obstacles in a vehicle’s path becomes acute. Head mounted displays with augmented reality (HMD/AR) systems are intended to calculate the scenario for what’s going to happen on the roadway ahead of a speeding car. Processors in the imaging sessions will describe the impact of deep learning algorithms (like convolutional neural networks, CNN or K-nearest-neighbors, KNN). These processers support a range of machine learning applications, including computer vision, object detection (apart from what’s on the roadway), and handwriting recognition."
"One paper from KAIST (in the “Next-Generation” session) will present a low-power natural user interface processor with an embedded deep learning engine. The device is fabricated in 65nm CMOS. It claims a higher recognition rate over the best-in-class pattern recognition. Another paper from KAIST presents a dedicated high-performance advanced driver assistance SoC, capable of identifying potentially “risky objects” in automotive systems. This chip is also implemented in 65nm CMOS, and, Kaist claims, was successfully tested in an autonomous vehicle."
"In the “Digital Processors” session, Renesas will present a 12-channel video-processing chip for ADAS — implemented in 16nm FinFET CMOS."