
Education and Learning
Published 30 Mordad 1398 (August 21, 2019)

Nicholas Carlini, Research Scientist, Google


Despite significant successes, machine learning has serious security and privacy concerns. This talk examines two of them: first, how adversarial examples can be used to fool state-of-the-art vision classifiers (for example, making a self-driving car misclassify a road sign); second, how private training data can be extracted from a trained neural network.

Learning Objectives:
1. Recognize the potential impact of adversarial examples for attacking neural network classifiers.
2. Understand how sensitive training data can be leaked by exposing APIs to pre-trained models.
3. Know when you need to deploy defenses to counter these new threats in the machine learning age.

Prerequisites: Understanding of threats against traditional classifiers (e.g., spam or malware systems), evasion attacks, and privacy, as well as the basics of machine learning.
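To make the first attack concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one standard way of constructing adversarial examples. It is illustrative only: the toy linear model, the random tensor standing in for a road-sign image, and the epsilon value are assumptions, not details from the talk.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, label, epsilon):
        # Perturb x one signed-gradient step in the direction that
        # increases the classification loss.
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), label)
        loss.backward()
        # Small per-pixel change, potentially large effect on the prediction.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
    x = torch.rand(1, 3, 32, 32)   # stand-in for a road-sign image
    label = torch.tensor([0])      # its true class
    x_adv = fgsm_attack(model, x, label, epsilon=0.03)
    print(model(x).argmax(1), model(x_adv).argmax(1))  # predictions may differ

The perturbation is bounded by epsilon per pixel, so the adversarial image can look unchanged to a human while the classifier's prediction flips.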
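For the second attack, a hedged sketch of the simplest form of membership inference: because models tend to fit their training data unusually well, a low loss on a candidate record is weak evidence that the record was in the training set. The threshold test below is a simplified stand-in for the more sophisticated extraction techniques the talk covers, and the threshold value is an assumption.

    import torch
    import torch.nn as nn

    def is_likely_member(model, x, label, threshold=0.5):
        # Flag x as a probable training-set member if the model's loss on it
        # is below a cutoff. The cutoff is hypothetical; in practice it would
        # be calibrated against records known not to be in the training set.
        with torch.no_grad():
            loss = nn.functional.cross_entropy(model(x), label)
        return loss.item() < threshold

An attacker with only API access to a trained model can run this test over many candidate records, which is why exposing prediction APIs to models trained on sensitive data is itself a privacy risk.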
