Lingzhi Yuan
University of Maryland, Department of Computer Science.

lingzhiyxp [at] gmail.com
lingzhiy [at] umd.edu
Maryland, USA
I’m currently a first-year PhD student in Computer Science at the University of Maryland (UMD), advised by Professor Furong Huang. Before that, I obtained my BEng degree in Automation at Zhejiang University. During the summer of 2024, I was a research intern at the Secure Learning Lab at the University of Chicago, advised by Professor Bo Li. Earlier, I was a research student at USSLAB, advised by Professor Yanjiao Chen and Professor Xiaoyu Ji.
During my undergraduate studies, I was also a member of the Intensive Training Honors Program of Innovation and Entrepreneurship (ITP) at Chu Kochen Honors College.
My research focuses on Trustworthy Machine Learning, with particular emphasis on the safety and robustness of advanced models. I explore the vulnerabilities of cutting-edge ML models, especially multimodal models (text-to-image models, VLMs, audio LLMs, etc.), and develop reliable defense mechanisms to safeguard their widespread deployment. By tackling these challenges, I aim to contribute to AI technologies that are not only high-performing but also secure, transparent, and aligned with ethical standards.
You can refer to my resume for more details.
Find other interesting things about me on this page 🧩!
news
Aug 20, 2025 | I became a PhD student at the University of Maryland, advised by Prof. Furong Huang!
Jan 23, 2025 | A paper was accepted to ICLR 2025.
Nov 15, 2024 | A paper was submitted for CVPR 2025 review.
Oct 02, 2024 | A paper was submitted for ICLR 2025 review.
Mar 15, 2024 | I became a research intern at the Secure Learning Lab, University of Chicago, supervised by Prof. Bo Li.
selected publications
- PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for Text-to-Image Models. arXiv preprint, 2025.
- MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models. ICLR 2025.