The Department of Computing at the Faculty of Technology invites applications for a full-time Doctoral Researcher position. The role focuses on privacy protection, security hardening, and vulnerability research for Large Language Models (LLMs). The fixed-term position starts on a mutually agreed date and ends 30.6.2029.
Job description
This position is part of a high-impact European defense research project. As a PhD candidate, you will conduct in-depth research into the security of LLMs in adversarial environments. You will join an interdisciplinary team, collaborating closely with leading academic experts and industrial security architects across Europe.
The successful candidate will focus on one of the following pillars spanning the LLM lifecycle:
Privacy Preservation: Researching the application of Federated Learning, Differential Privacy (DP), or Homomorphic Encryption (HE) to LLM training and inference to prevent sensitive data leakage.
Security Hardening: Developing robust defense mechanisms against adversarial threats such as prompt injection and backdoor attacks.
Vulnerability Attack & Defense: Conducting automated "red-blue teaming" exercises to uncover latent model vulnerabilities and developing automated patching or filtering technologies.
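To give a flavor of the Privacy Preservation pillar, the sketch below shows the core per-step operation of DP-SGD (per-example gradient clipping followed by Gaussian noise), one standard way DP is applied during model training. This is an illustrative toy in pure Python, not code from the project; the function name and parameters (`clip_norm`, `noise_mult`) are our own, and a real system would use a vetted DP library rather than this sketch.

```python
import math
import random

def clip_and_noise(grad, clip_norm=1.0, noise_mult=1.0, seed=0):
    """Illustrative DP-SGD step: clip one per-example gradient to
    L2 norm <= clip_norm, then add Gaussian noise scaled by
    noise_mult * clip_norm. Toy code; parameter names are ours."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    clipped = [g * scale for g in grad]
    rng = random.Random(seed)          # seeded only for reproducibility
    sigma = noise_mult * clip_norm
    return [g + rng.gauss(0.0, sigma) for g in clipped]

grad = [3.0, 4.0]                      # L2 norm 5.0, will be clipped
private_grad = clip_and_noise(grad, clip_norm=1.0, noise_mult=0.5)
```

The clipping bounds any single example's influence on the update, and the noise masks what remains; together they yield the (epsilon, delta) guarantees that DP training analyses rely on.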