UWE Bristol researchers develop new defense against adversarial machine learning attacks on cybersecurity intrusion detection systems



As cyberattacks become more sophisticated, intrusion detection systems (IDS) are often seen as a way to mitigate threats to computer networks.

Yet attackers continue to evade detection and cause disruption by delivering malware and launching other common attacks. Increasingly, adversaries are also able to evade the machine learning systems themselves, effectively compromising their intended functionality.

Recent work by Andrew McCarthy, a PhD student in cybersecurity analytics at UWE Bristol, demonstrated the feasibility of carrying out such attacks against intrusion detection systems and proposed a new approach to combat the vulnerabilities that machine learning classifiers can expose.

While the field of adversarial machine learning often concerns computer vision systems, this cutting-edge research applies these concepts to cybersecurity, to understand what future threats might look like and how best to develop intrusion detection systems that avoid such vulnerabilities.

The results of Andrew’s recent doctoral work have just been published in the Journal of Information Security and Applications (Elsevier). Andrew is about to complete his PhD, working with Professor Phil Legg (Director of Studies) and supported by industry partner Techmodal as part of the UWE Partnership PhD programme.

The full article is available online.
