Lingzhi Yuan
Zhejiang University, College of Electrical Engineering.

lingzhiyxp [at] gmail.com
yuan_lz [at] zju.edu.cn
Hangzhou, China
I am currently a final-year undergraduate student majoring in Automation at Zhejiang University (ZJU). I am also a research intern at the Secure Learning Lab, University of Chicago (UChicago), advised by Professor Bo Li. Before that, I was a research student at USSLAB, advised by Professor Yanjiao Chen and Professor Xiaoyu Ji.
I am also a member of the Intensive Training Honors Program of Innovation and Entrepreneurship (ITP) at Chu Kochen Honors College.
My research focuses on Trustworthy Machine Learning, with a particular emphasis on the safety and robustness of advanced models. I explore the vulnerabilities of cutting-edge ML models, especially multi-modal models (text-to-image models, VLMs, audio LLMs, etc.), and develop reliable defense mechanisms to safeguard their widespread deployment. By tackling these challenges, I aim to contribute to AI technologies that are not only high-performing but also secure, transparent, and aligned with ethical standards.
Please refer to my resume for more details.
Find other interesting things about me on this page 🧩!
news
Jan 23, 2025: A paper accepted to ICLR 2025.
Nov 15, 2024: A paper submitted for CVPR 2025 review.
Oct 02, 2024: A paper submitted for ICLR 2025 review.
Mar 15, 2024: I became a research intern at the Secure Learning Lab, University of Chicago, supervised by Prof. Bo Li.
Apr 01, 2023: I became a research student at USSLAB, supervised by Prof. Xiaoyu Ji and Prof. Yanjiao Chen.
selected publications
- PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for Text-to-Image Models. Submitted to CVPR 2025.
- MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models. ICLR 2025.