Attacks on Machine Learning: Lurking Danger for Accountability

Abstract

It is well known that there is no safety without security. Accordingly, a sound investigation of security breaches in Machine Learning (ML) systems is a prerequisite for addressing any safety concerns. Since attacks on ML systems threaten their safety through the impact on security goals, we discuss how attacks affect the security goals of ML models, an aspect that is rarely considered in published scientific papers. The contribution of this paper is a non-exhaustive list of published attacks on ML models and a categorization of these attacks according to their phase (training, after-training) and their impact on security goals. Based on our categorization, we show that not all security goals have yet been considered in the literature, either because they have been ignored or because no publications target them specifically, and that some goals, such as accountability, are difficult to assess. This is likely because some ML models operate as black boxes.

Publication
Proceedings of the AAAI Workshop on Artificial Intelligence Safety 2019, co-located with the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019)