Tianxin Wei

I am a third-year Ph.D. student advised by Prof. Jingrui He at the University of Illinois Urbana-Champaign. Prior to joining UIUC in 2021, I earned my B.S. degree in Computer Science from the University of Science and Technology of China, School of the Gifted Young.

My research primarily centers on enhancing the trustworthiness and efficiency of machine learning algorithms across various modalities and disciplines, with the ultimate goal of making ML models more accessible and inclusive. My research interests cover a wide range of topics:

  • Trustworthiness: bias, fairness, robustness, and transferability;
  • Efficiency: sampling efficiency and model efficiency (vanilla and MoE Transformers);
  • Multi-modality: modal interaction and fusion;
  • Knowledge-enhanced LLM: content/KG retrieval and knowledge fusion;
  • LLM Governance/Policy: technical strategies for implementing regulatory principles effectively (currently exploring!);
  • Applications: agriculture, recsys, and more scientific domains to come.
Feel free to drop me an e-mail if you are interested in my research and would like to discuss relevant research topics or potential collaborations!

CV  /  Email  /  Google Scholar  /  Github  /  Twitter  /  Linkedin  

Education

Ph.D.          2021 - 2026 (expected)
                       University of Illinois Urbana-Champaign (UIUC)
                       Advisor: Prof. Jingrui He

B.S.              2016 - 2020
                       University of Science and Technology of China (USTC)
                       School of the Gifted Young
                       Advisor: Prof. Xiangnan He

Work Experience

Amazon Search, Palo Alto, CA
Research Intern • Aug.-Dec. 2023
Multi-modal Large Language Models for Personalization
Two papers published at ICLR’24 and WWW’24 (Oral); two more in submission.
Main Advisors: Dr. Xianfeng Tang at Amazon; Prof. Suhang Wang at PSU

News
Jan, 2024 One paper RIPOR (Scalable and effective generative retrieval) accepted @ WWW’24.
Jan, 2024 One paper UniMP (multi-modal personalization including recommendation and search, etc.) accepted @ ICLR’24.
Dec, 2023 One paper accepted @ AAAI’24. Will attend NeurIPS’23 Dec. 9-16. See you there!
Oct, 2023 Awarded the NeurIPS’23 Scholar Award. Thanks to NeurIPS!
Sep, 2023 Two papers (Test-time personalization and bandit scheduler for meta-learning) accepted @ NeurIPS’23.
Aug, 2023 One paper BNCL accepted @ CIKM’23.
May, 2023 Received the ICML’23 Grant Award. Thanks to ICML!
Apr, 2023 MLP Fusion (NTK-approximating MLP Fusion for efficient learning) accepted @ ICML’23.
Mar, 2023 Will join the Amazon Search team as an applied scientist intern this summer. See you in Palo Alto!
Dec, 2022 Collected a curated list of papers on distribution shift. Check it out at awesome-distribution-shift!
Oct, 2022 Awarded the NeurIPS’22 Scholar Award. Thanks to NeurIPS!
Oct, 2022 HyperGCL (contrastive learning on hypergraphs) accepted @ NeurIPS’22.
May, 2022 CLOVER (comprehensive fairness of cold-start recsys) accepted @ KDD’22.
Jul, 2021 Awarded the SIGIR 2021 Best Paper Honorable Mention. Thanks to SIGIR!
May, 2021 MACR (popularity debias with causal counterfactual reasoning) accepted @ KDD’21.
Apr, 2021 PDA (popularity adjusted deconfounded training with causal intervention) accepted @ SIGIR’21.
Aug, 2020 MetaCF (graph meta-learning for cold-start recsys) accepted @ ICDM’20. Internship work at UCLA.

Publications (*equal contribution)

Conference  /  Journal  /  Preprint

Services & Awards

University Nomination (Top 3) for Apple Scholar in AI/ML 2024
Conference Presentation Award, UIUC 2023
SIGIR 2021 Best Paper Honorable Mention
NeurIPS 2023 Scholar Award
NeurIPS 2022 Scholar Award
ICML 2023 Grant Award
Program Committee/Reviewer: CIKM (2021-2023), ICML (2022-2024), NeurIPS (2022-2023), ICLR (2023-2024), KDD (2023-2024), AAAI (2023-2024), WSDM 2023, ACL 2023, EMNLP 2023, LOG 2022
Journal Reviewer: TOIS, TKDE, DMKD, Machine Learning, TMLR
Pets