Giorgio Severi
Boston, MA
I am a Senior AI Safety Researcher at Microsoft, on the AI Red Team. I received my PhD from Northeastern University, where I worked in the NDS2 lab, advised by Professor Alina Oprea.
My primary interest lies in machine learning security and safety. I have worked across several areas of adversarial machine learning, focusing on the analysis and exploitation of AI systems, particularly in security-sensitive applications. My work also explores the intersection of adversarial robustness and model interpretability.
news
| Date | News |
|---|---|
| Feb 10, 2026 | Our paper GRP-Obliteration: Unaligning LLMs With a Single Unlabeled Prompt is now available on arXiv. |
| Feb 5, 2026 | Our paper The Trigger in the Haystack: Extracting and Reconstructing LLM Backdoor Triggers is now available on arXiv. |
| Jul 8, 2025 | Our paper A Systematization of Security Vulnerabilities in Computer Use Agents is now available on arXiv. |
| Jun 4, 2025 | Our paper A Representation Engineering Perspective on the Effectiveness of Multi-Turn Jailbreaks is now available on arXiv. |
| Jun 1, 2025 | Our paper Weathering the CUA Storm: Mapping Security Threats in the Rapid Rise of Computer Use Agents was accepted at the ICML 2025 Workshop on Computer Use Agents! |
latest posts
| Date | Post |
|---|---|
| Mar 13, 2019 | Visualizing wine data using Choropleths and Linking |
| Mar 10, 2019 | Installing VizDoom emulator on CentOS |