I am a PhD student at the Institute of Information Engineering, Chinese Academy of Sciences (CAS), specializing in research on large language models (LLMs). I am expected to graduate in July 2025.

My research focuses on model compression and acceleration, specifically for LLMs. I am interested in techniques such as quantization, knowledge distillation, and pruning that improve the computational efficiency and performance of LLMs. My long-term goal is to build efficient language models.

🔥 News

  • 2024.03.30:  🎉🎉 Feeling tired and sleepy? It’s time for a relaxing moment. ☞FunAI

📝 Selected Papers [Full List] [Google Scholar]

  • Data-Efficient Knowledge Distillation with Teacher Assistant-Based Dynamic Objective Alignment. [pdf] [code]
    International Conference on Computational Science (ICCS), 2024. Preprint. IIE-B Conference.

  • A Contrastive Self-distillation BERT with Kernel Alignment-Based Inference. [pdf] [code]
    International Conference on Computational Science (ICCS), 2023. IIE-B Conference.

  • MetaBERT: Collaborative Meta-Learning for Accelerating BERT Inference. [pdf] [code]
    International Conference on Computer Supported Cooperative Work in Design (CSCWD), 2023. CCF-C Conference.

🎖 Honors and Awards

  • First Prize, PhD Graduate Scholarship, UCAS, 2022.
  • Merit Student, UCAS, 2022-2023.
  • Merit Student, UCAS, 2021-2022.

📖 Teaching

  • Machine Learning, Teaching Assistant, UCAS.
  • Pattern Matching and Information Filtering, Teaching Assistant, UCAS.

🤝 Academic Service

  • Conference Reviewer: ICASSP 2024.