May 10, 2019

How to tell whether machine-learning systems are robust enough for the real world


MIT researchers have devised a method for detecting "adversarial examples" — inputs subtly perturbed so that a neural network misclassifies them — in order to better measure how robust models are on various real-world tasks.
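To make the idea concrete, here is a minimal sketch of how an adversarial example can be crafted with the fast gradient sign method (FGSM), a standard attack from the literature. The toy logistic-regression model, its weights, and the inputs below are illustrative assumptions, not the MIT method described in the article.

```python
import numpy as np

# Toy model and data for illustration only (assumed, not from the article).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps):
    # For logistic regression, the gradient of the loss w.r.t. the
    # input x is (p - y) * w.
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    # Step in the direction that increases the loss.
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])
x = np.array([0.5, 0.2])   # w @ x = 0.8 > 0: correctly classified as class 1
y = 1.0

x_adv = fgsm(w, x, y, eps=0.6)
print(w @ x_adv)           # now negative: the perturbed input is misclassified
```

A small signed perturbation is enough to flip the model's decision, which is exactly the failure mode that robustness evaluations like the one described here try to detect and quantify.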

Sourced through Scoop.it from: news.mit.edu