De Montfort University

D1.3 Cyberthreats and countermeasures

Version 3 2020-04-29, 14:29
Version 2 2019-04-17, 13:53
Version 1 2019-04-09, 10:46
online resource
posted on 2020-04-29, 14:29 authored by Andrew Patel, Tally Hatzakis, Kevin Macnish, Mark Ryan, Alexey Kirichenko

While recent innovations in the machine learning domain have enabled significant improvements in a variety of computer-aided tasks, machine learning systems also present new challenges, new risks, and new avenues for attackers. The arrival of new technologies can change society and create new risks (Zwetsloot and Dafoe, 2019; Shushman et al., 2019), even when those technologies are not deliberately misused. In some areas, artificial intelligence has become powerful enough that trained models have been withheld from the public over concerns of potential malicious use. This situation parallels vulnerability disclosure, where researchers must often weigh disclosing a vulnerability publicly (opening it up for potential abuse) against withholding it (risking that attackers find it before it is fixed). Researchers should therefore consider how machine learning may shape our environment in ways that could be harmful.

Funding

European Union’s Horizon 2020 Research and Innovation Programme Under Grant Agreement no. 786641
