Our research group conducts fundamental research at the intersection of computer security and machine learning. On the one hand, we develop intelligent systems that learn to protect computers from attacks and to identify security problems automatically. On the other hand, we explore the security and privacy of machine learning itself by developing novel attacks and defenses.
Adversarial Machine Learning
As our first research topic, we investigate the security and privacy of machine learning systems. Our objective is to create learning-based systems that are resilient to different forms of attacks, including adversarial examples, poisoning, and backdoors. To achieve this goal, we approach the research jointly from the perspectives of the attacker and the defender. Following is a selection of related publications from our group:
No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning.
Proc. of the 32nd USENIX Security Symposium, 2023.
Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning.
Proc. of the 29th USENIX Security Symposium, 2020.
Machine Unlearning of Features and Labels.
Proc. of the 30th Network and Distributed System Security Symposium (NDSS), 2023.
Intelligent Security Systems
Our group has extensive experience in the development of intelligent security systems. We have devised learning-based systems for detecting attacks, analyzing malware, and discovering vulnerabilities. Our goal is to establish security solutions that adapt to changing conditions and provide automatic protection from different forms of threats. Following is a selection of related publications from this research topic:
Dos and Don'ts of Machine Learning in Computer Security.
Proc. of the 31st USENIX Security Symposium, 2022.
ZOE: Content-based Anomaly Detection for Industrial Control Systems.
Proc. of the 48th Conference on Dependable Systems and Networks (DSN), 127–138, 2018.
Drebin: Efficient and Explainable Detection of Android Malware in Your Pocket.
Proc. of the 21st Network and Distributed System Security Symposium (NDSS), 2014.
Novel Attacks and Defenses
We believe that defensive and offensive security techniques must go hand in hand to improve practical protection. Consequently, we research methods to identify unknown security and privacy vulnerabilities. To complement this research, we develop approaches and solutions to defend against these novel threats. Following is a selection of publications from this branch of our research:
Misleading Authorship Attribution of Source Code using Adversarial Learning.
Proc. of the 28th USENIX Security Symposium, 2019.
Automatically Inferring Malware Signatures for Anti-Virus Assisted Attacks.
Proc. of the ACM Asia Conference on Computer and Communications Security (ASIACCS), 587–598, 2017.
Automatic Inference of Search Patterns for Taint-Style Vulnerabilities.
Proc. of the 36th IEEE Symposium on Security and Privacy (S&P), 2015.
See all our publications.
The ERC Consolidator Grant MALFOY explores the application of machine learning in offensive computer security. It is an effort to understand how learning algorithms can be used by attackers and how this threat can be mitigated effectively.
ALISON — Attacks against Machine Learning in Structured Domains
The goal of this project is to investigate the security of learning algorithms in structured domains. That is, the project develops a better understanding of attacks and defenses that operate in the problem space of learning algorithms rather than the feature space.
TELLY — Testing the Limits of Machine Learning in Vulnerability Discovery
The project aims to open the black box of machine learning in vulnerability discovery. Its goal is to systematically assess the limits of learning-based discovery approaches and derive a better understanding of their role in security. The project is part of the excellence cluster CASA.
See all our research projects.