Zhongkai Xue
New Haven, U.S. | email | website | scholar | github | twitter | résumé
"A springtide banquet, green wine in hand; and lo, the song is sung but once, then lost to time."

Introduction

👋 Hi! I'm currently an undergraduate student looking for graduate research opportunities starting in Fall 2026.

🤔 My current research interests span Multimodal Foundation Models and Reinforcement Learning for LLMs.

🚀 I'm passionate about learning, building, and collaborating!

Education

🎓 Bachelor's in Financial Engineering @ The Chinese University of Hong Kong, Shenzhen
Sep 2022 – Jun 2026
🎓 Exchange Student in Math & Statistics @ University of Oxford, St Hilda's College
Oct 2024 – Mar 2025

Publications

VIS-GNN
📄 The Underappreciated Power of Vision Models for Graph Structural Understanding
Under review '25 – Xinjian Zhao*, Wei Pang*, Zhongkai Xue*, Xiangru Jian, Lei Zhang, Yaoyao Xu, Xiaozhuang Song, Shu Wu, Tianshu Yu
Abstract: We conduct a systematic analysis that uncovers how visual perception and message‑passing offer complementary strengths in graph understanding, and introduce a novel benchmark to showcase these insights. Our findings reveal that vision models can significantly enhance graph structural understanding, outperforming traditional GNNs in various tasks.
[arxiv] [code] (To be released soon)
Political-LLM
📄 Political-LLM: Large Language Models in Political Science
Under review '25 – Lincan Li, Jiaqi Li, Catherine Chen, Fred Gui, Hongjia Yang, Chenxiao Yu, Zhengguang Wang, Jianing Cai, Junlong Aaron Zhou, Bolin Shen, Alex Qian, Weixin Chen, Zhongkai Xue ... Yue Zhao, Yushun Dong et al.
Abstract: We propose Political-LLM, a framework that bridges large language models with political science. It provides a dual-perspective taxonomy, political tasks and computational methods, while outlining key challenges and future directions, aiming to guide ethical and effective AI use in political research.
[arxiv] [site]
MJ-VIDEO
📄 MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation
Under review '25 – Haibo Tong, Zhaoyang Wang, Zhaorun Chen, Haonian Ji, Shi Qiu, Siwei Han, Kexin Geng, Zhongkai Xue, Yiyang Zhou, Peng Xia, Mingyu Ding, Rafael Rafailov, Chelsea Finn, Huaxiu Yao
Abstract: We present MJ-VIDEO, a Mixture-of-Experts reward model for fine-grained video preference evaluation, which is built upon MJ-BENCH-VIDEO, a large-scale benchmark covering alignment, safety, coherence, and bias. Our model achieves significant improvements in preference judgment and enhances alignment in video generation.
[arxiv] [code] [site]

Research Experience

🔬 Visiting Research Assistant @ Graph and Geometric Learning Lab, Yale University
Apr 2025 – Sep 2025

Industry Experience

💼 Quantitative Researcher @ Jupyter Investment, Shenzhen
Jun 2024 – Oct 2024

Awards & Scholarships

πŸŽ–οΈ Lambda Research GPU Grant
Lambda.ai, 2025
πŸŽ–οΈ Undergraduate Research Award
School of Data Science, CUHK-SZ, 2024
🥈 Silver Medal (Top 2%) @ Trading at the Close
Kaggle Challenge, Optiver, 2023

Miscellaneous

πŸ‘¨β€πŸ« Teaching: I served as a TA for Financial Management and Introduction to C++ at CUHK-SZ.
📷 Interests: In my free time I enjoy analog photography, shooting with a Canon A-1 and trying out different film stocks.