
Abstract

This talk presents threats to AI applications caused by a set of vulnerabilities in deep learning frameworks.  In contrast to the small code size of deep learning models, these frameworks are complex and carry heavy dependencies on numerous open-source packages.  By exploiting these framework implementations, this presentation demonstrates attacks on common deep learning applications such as voice recognition and image classification.
The talk will present the details of exploiting software vulnerabilities to cause image recognition systems to produce attacker-controlled, arbitrary classification results.  The goal of this presentation is to draw attention to software implementations and to call for a collaborative effort to improve the security of deep learning frameworks.
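To illustrate the attack surface described above, the following is a minimal sketch (not taken from the talk) of a typical image classification pipeline. The point it makes is that attacker-supplied input is first parsed by a framework dependency's native C/C++ code (here OpenCV's image decoder is used as an example) before any model logic runs, so a memory-corruption bug in that parsing layer can be triggered by a crafted file alone. The function name, file path, and model object are placeholders, not anything from the original abstract.

```python
import numpy as np
import cv2  # native dependency: image decoding happens in C/C++ code


def classify(image_path, model):
    """Classify an image; untrusted bytes hit native parsing code first."""
    # A malformed file reaches this complex native decoder before the
    # (comparatively small) model code ever sees the data -- this is the
    # dependency-level exploit surface the abstract refers to.
    img = cv2.imread(image_path)
    if img is None:
        raise ValueError("failed to decode image")

    # Standard preprocessing for a 224x224 RGB classifier (placeholder sizes).
    img = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0
    batch = np.expand_dims(img, axis=0)  # shape (1, 224, 224, 3)

    scores = model.predict(batch)  # the deep learning model itself
    return int(np.argmax(scores))
```

The sketch is deliberately ordinary: the security-relevant observation is not in the model, but in how much third-party parsing and preprocessing code untrusted input traverses on its way to the model.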