Abstract

Adversarial Examples: Using AI to Cheat AI - Mengyun Tang and Xiangqi Huang, Tencent Security Platform Department
Recent deep neural networks have proven very effective on several important practical problems, e.g. object detection, speech recognition, language translation, and autonomous driving. However, most existing deep learning algorithms are highly vulnerable to adversarial examples. An adversarial example is an input that has been modified very slightly with the intent of causing a machine learning classifier to misclassify it; in many cases, the modification cannot be noticed by human eyes. This exposes a potential security problem in artificial intelligence (AI) applications that use these deep learning algorithms. Once the vulnerability is exploited by hackers, it can lead to real-life security issues. This talk will explain how adversarial examples are generated and share recent research progress published at ECCV 2018. It will also present experimental results on several tasks, including image recognition, object detection, porn identification, and others. Some real-world attack cases on online systems and how to defend against adversarial examples will be discussed as well.
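The core idea of "slight modifications that flip a classifier's decision" can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard generation technique. This is not the method from the ECCV 2018 work referenced above; it is a toy example on a linear classifier, and the names `fgsm`, `w`, `b` are our own illustrative choices.

```python
import numpy as np

# Toy illustration of FGSM (Fast Gradient Sign Method) on a
# logistic-regression classifier. Real attacks target deep networks;
# this sketch only shows the mechanism: step the input in the
# direction of the loss gradient's sign, bounded by eps per pixel.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x, y, w, b):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, w, b, eps=0.5):
    """Perturb x to increase the loss; each coordinate moves by at most eps."""
    return x + eps * np.sign(loss_grad_x(x, y, w, b))

# Hypothetical classifier weights and a clean input of true class 1
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.5, 0.5])
y = 1.0

p_clean = sigmoid(w @ x + b)          # high confidence in the true class
x_adv = fgsm(x, y, w, b, eps=0.5)     # small, bounded perturbation
p_adv = sigmoid(w @ x_adv + b)        # confidence drops after the attack
print(p_clean, p_adv)
```

Note the perturbation budget `eps` caps how far each input coordinate can move, which is what keeps the modification small enough to be hard to notice.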