Changsheng Wang

Room 3210

428 S Shaw LN

East Lansing, Michigan

United States of America

Changsheng Wang (王昌盛) is a first-year Ph.D. student in Computer Science at Michigan State University, working in the OPTML Group under the supervision of Prof. Sijia Liu. His research centers on trustworthy AI and AI safety, with a focus on large language models (LLMs). He is particularly interested in building machine learning systems that are robust, efficient, and secure in adversarial and real-world settings. Before joining MSU, Changsheng received his B.S. in Data Science and Big Data Technology from the University of Science and Technology of China (USTC), where he was advised by Prof. Xiangnan He. He has completed research internships at Intel and has collaborated with IBM Research on trustworthy and safe AI.

Research Keywords: Machine Unlearning, AI Safety, Adversarial Training, Fine-Tuning Efficiency, Watermarking, LLM Robustness, Diffusion Models, Recommender System Security, Optimization.

Looking for Collaboration!

I am currently seeking a 2026 Summer Internship position in industrial or academic research labs working on AI safety, LLM robustness, or foundation model alignment. Feel free to reach out, add me on WeChat, or connect with me on LinkedIn.

News

Jul 7, 2025 :tada: My first-author Unlearning Coreset paper was accepted at COLM 2025; check out our paper here!
Jun 12, 2025 :tada: Our latest research on reasoning model unlearning has been released on arXiv!
May 19, 2025 :tada: I will start working as a research scientist intern at Intel!
Apr 17, 2025 :tada: My first-author Unlearning Robustness paper was accepted at ICML 2025; check out our paper here!
Dec 1, 2024 :jack_o_lantern: Call for papers for the 2nd New Frontiers in Adversarial Machine Learning, where I will serve as the student chair!

First-Authored Publications

See the full publication list here.

  1. COLM’25
    LLM Unlearning Reveals a Stronger-Than-Expected Coreset Effect in Current Benchmarks
    Soumyadeep Pal*, Changsheng Wang*, James Diffenderfer, Bhavya Kailkhura, and Sijia Liu
    In The Conference on Language Modeling 2025
  2. ICML’25
    Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning
    Changsheng Wang, Yihua Zhang, Jinghan Jia, Parikshit Ram, Dennis Wei, Yuguang Yao, Soumyadeep Pal, Nathalie Baracaldo, and Sijia Liu
    In Proceedings of the 42nd International Conference on Machine Learning, 2025
  3. arXiv’25
    Reasoning Model Unlearning: Forgetting Traces, Not Just Answers, While Preserving Reasoning Skills
    Changsheng Wang*, Chongyu Fan*, Yihua Zhang, Jinghan Jia, Dennis Wei, Parikshit Ram, Nathalie Baracaldo, and Sijia Liu
    arXiv preprint arXiv:2506.12963, 2025
  4. WWW’23
    Uplift Modeling for Target User Attacks on Recommender Systems
    Wenjie Wang*, Changsheng Wang*, Fuli Feng, Wentao Shi, Daizong Ding, and Tat-Seng Chua
    In Proceedings of the ACM Web Conference 2023