I am a researcher

  • at the International Computer Science Institute (ICSI) at UC Berkeley, and
  • at the Ruhr University Bochum, Germany, working in the Jump.Start postdoc program of the DFG Cluster of Excellence “Cyber Security in the Age of Large-Scale Adversaries” (CASA).

My research focuses on the secure, reliable, and trustworthy application of machine learning and AI. I also study applications of machine learning to computer security, such as malware and fraud detection.

Feel free to contact me for questions or collaborations.

Interests
  • Trustworthy AI
  • Security & Safety of Machine Learning
  • Machine Learning for Computer Security
  • Malware Analysis
Education
  • Ph.D. in Computer Science (With distinction), 2021

    Technische Universität Braunschweig, Germany

  • MSc in Computer Science (With distinction), 2013-2015

    Universität Münster, Germany

  • BSc in Information Systems (Best graduate), 2010-2013

    Universität Münster, Germany

About My Research

Machine learning (ML) and artificial intelligence (AI) represent an inflection point in human history. The capability to solve even highly complex tasks opens up incredible opportunities in fields ranging from medicine and environmental protection to autonomous driving and computer security. Judging by the ad billboards in San Francisco, AI is everywhere.

In light of these far-reaching societal impacts, it is essential that we apply ML/AI in a secure, reliable, and trustworthy way.

Unfortunately, these key aspects are not guaranteed per se. Applying ML/AI involves a highly delicate workflow in which incorrect assumptions and subtle pitfalls can undermine reliability and trustworthiness. Moreover, ML/AI systems show considerable vulnerabilities in the face of attackers, who can, for example, hide backdoors in a model or control the classification through input modifications alone. These attacks raise the question of how we can build secure systems that withstand a malicious actor. If left unaddressed, such pitfalls and attacks will cause harm and delay advancements in academia and industry.

I have dedicated my research to secure, reliable, and trustworthy ML/AI systems. I study attacks and defenses from a holistic viewpoint, that is, I consider the whole data workflow and varying data domains (images, text, code, malware, …). Furthermore, my research vision is to improve reliability by better understanding possible pitfalls and problems in the application of machine learning. Finally, I use this knowledge to apply ML to computer security, for example, for malware, fraud, or deepfake detection.

Recent Publications

(2023). No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning. Proc. of USENIX Security Symposium.

(2023). On the Detection of Image-Scaling Attacks in Machine Learning. Proc. of Annual Computer Security Applications Conference (ACSAC).

(2022). Dos and Don'ts of Machine Learning in Computer Security. Proc. of USENIX Security Symposium.

(2022). Misleading Deep-Fake Detection with GAN Fingerprints. Deep Learning and Security Workshop (DLS).

Selected Honors, Scholarships, and Awards

  • Dissertation Award, AI Talent of Lower Saxony, Germany, 2022
  • Distinguished Paper Award at USENIX Security Symposium 2022
  • Winner of the defender challenge at the Machine Learning Security Evasion Competition by Microsoft, 2020
  • Best Student Paper Award at IEEE International Workshop on Information Forensics and Security (WIFS), 2015
  • Scholarship by German Academic Exchange Service (DAAD), 2015
  • Deutschlandstipendium (Scholarship), sponsored by BASF SE, 2013-2015
  • Winner of Special Award of Münster’s Society for Applied Informatics for the best empirical thesis, 2013
  • Winner of Hays-AlumniUM-Bachelor-Award as the best BSc. student in graduating class, 2013

Service

Membership in Program Committees / Conference Reviewing:

  • 2021-2023 - ACM Workshop on Artificial Intelligence and Security (AISec)
  • 2023 - IEEE European Symposium on Security and Privacy (EuroS&P)
  • 2023 - Workshop on Robust Malware Analysis (WoRMA)
  • 2019-2023 - IEEE Workshop on Deep Learning and Security (DLS)
  • 2022 - USENIX Security Artifact Evaluation Committee
  • 2022 - European Symposium on Research in Computer Security (ESORICS) Program Committee
  • 2021 - European Symposium on Research in Computer Security (ESORICS) Posters Program Committee

Journal Reviewing:

  • IEEE Transactions on Information Forensics & Security (TIFS)
  • IEEE Intelligent Systems
  • Computer Science Review

Sub-Reviewing:

  • IEEE Symposium on Security and Privacy (S&P)
  • Network and Distributed System Security Symposium (NDSS)
  • ACM Conference on Computer and Communications Security (CCS)
  • USENIX Security Symposium
  • Annual Computer Security Applications Conference (ACSAC)
  • Symposium on Research in Attacks, Intrusions, and Defenses (RAID)
  • IEEE European Symposium on Security and Privacy (EuroS&P)
  • ACM Asia Conference on Computer and Communications Security (ASIA CCS)

Teaching

In recent years, I have given several seminars, team projects, and lecture units on security and machine learning, privacy and machine learning, and multimedia security. I have also supervised several bachelor's and master's theses on these topics.

I am currently open to supervising theses and projects. If you are a student at RUB or at Berkeley and are interested in one of my research topics, feel free to contact me.

Contact

Contact me by email.