Wednesday, October 09, 2019

Nature Journal on Fooling Deep-Learning AI

The Nature article "Why deep-learning AIs are so easy to fool" by Douglas Heaven says:

"A self-driving car approaches a stop sign, but instead of slowing down, it accelerates into the busy intersection. An accident report later reveals that four small rectangles had been stuck to the face of the sign. These fooled the car’s onboard artificial intelligence (AI) into misreading the word ‘stop’ as ‘speed limit 45’.

Such an event hasn’t actually happened, but the potential for sabotaging AI is very real. Researchers have already demonstrated how to fool an AI system into misreading a stop sign, by carefully positioning stickers on it. They have deceived facial-recognition systems by sticking a printed pattern on glasses or hats.

These are just some examples of how easy it is to break the leading pattern-recognition technology in AI, known as deep neural networks (DNNs).

“There are no fixes for the fundamental brittleness of deep neural networks,” argues François Chollet, an AI engineer at Google in Mountain View, California. To move beyond the flaws, he and others say, researchers need to augment pattern-matching DNNs with extra abilities: for instance, making AIs that can explore the world for themselves, write their own code and retain memories. These kinds of system will, some experts think, form the story of the coming decade in AI research."

1 comment:

  1. Human (organic) neural nets do this as well: "I saw a glimpse of ..." in my peripheral vision, and closer examination then shows it is something else.

