Somesh Jha

Date: 14:00, Friday, October 6, 2017
Speaker: Somesh Jha
Venue: TU Wien, HS Zemanek, Favoritenstr. 9-11

Abstract:

Machine learning (ML) models, e.g., deep neural networks
(DNNs), are vulnerable to adversarial examples: malicious inputs
modified to yield erroneous model outputs, while appearing unmodified
to human observers. Potential attacks include having malicious content
like malware identified as legitimate or controlling vehicle
behavior. Yet most existing adversarial example attacks require
knowledge of either the model internals or its training data. We will
describe a black-box attack on ML models that requires neither. The
attack yields adversarial examples misclassified by models hosted by
Amazon and Google at rates of 96.19% and 88.94%, respectively. We also
find that this black-box attack strategy is capable of evading defense
strategies previously found to make adversarial example crafting
harder.
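The abstract does not spell out the attack, but the common black-box recipe in this line of work is: query the target model as an oracle to label synthetic inputs, train a local substitute on those labels, craft adversarial examples against the substitute, and rely on transferability to fool the target. A minimal sketch of that idea, using a toy linear "remote" classifier and a logistic-regression substitute (the target weights, data distribution, and step sizes here are illustrative assumptions, not the speaker's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box target: we may only query its labels,
# never inspect its weights or gradients.
_w_target = np.array([2.0, -1.0, 0.5])

def query_target(X):
    """Oracle access to the remote model: labels only."""
    return (X @ _w_target > 0).astype(int)

# Step 1: label synthetic inputs via the oracle and fit a local
# substitute (logistic regression, plain gradient descent).
X_sub = rng.normal(size=(500, 3))
y_sub = query_target(X_sub)

w = np.zeros(3)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X_sub @ w)))       # substitute's predictions
    w -= 0.1 * X_sub.T @ (p - y_sub) / len(y_sub)  # logistic-loss gradient step

# Step 2: craft an adversarial example against the substitute with a
# fast-gradient-sign-style perturbation, then transfer it to the target.
x = np.array([1.0, 0.2, 0.1])            # input the target classifies as 1
y = query_target(x[None])[0]
p_x = 1.0 / (1.0 + np.exp(-(x @ w)))
grad = (p_x - y) * w                     # d(logistic loss)/dx for the substitute
x_adv = x + 1.0 * np.sign(grad)          # perturb to increase the substitute's loss
```

At no point does the attacker see the target's weights; the adversarial example is computed entirely from the substitute, which is what makes the strategy black-box.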

Posted in RiSE Seminar