How to tell whether machine-learning systems are robust enough for the real world
MIT researchers have devised a method that detects “adversarial examples” — no, rather: MIT researchers have devised a method that detects “adversarial examples,” subtly altered inputs that cause neural networks to misclassify them, in order to better measure how robust the models are for various real-world tasks.
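For readers unfamiliar with the term, here is a minimal sketch of how an adversarial example can be constructed with the fast gradient sign method (FGSM), one standard technique from the literature. The toy linear classifier, weights, and step size below are illustrative assumptions for demonstration only; they are not the MIT method described in the article.

```python
import numpy as np

# Toy linear classifier (illustrative assumption): score = w @ x,
# positive score -> class 1, otherwise class 0.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.4])  # an input the model classifies as class 1

def classify(v):
    return int(w @ v > 0)

# FGSM idea: nudge the input a small step in the direction that most
# increases the loss. For this linear model, the gradient of the class-1
# score with respect to x is simply w, so subtracting eps * sign(w)
# pushes the score toward a misclassification.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(classify(x))      # prediction on the clean input
print(classify(x_adv))  # prediction on the adversarial input
```

The perturbation is small per coordinate, yet it flips the model's decision — the behavior that makes adversarial examples a useful probe of robustness.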
Sourced through Scoop.it from: news.mit.edu