Publications
See a full list on Google Scholar.
2025
Enhance-A-Video: Better Generated Video for Free
Yang Luo, Xuanlei Zhao, Mengzhao Chen, Kaipeng Zhang, Wenqi Shao, Kai Wang, Zhangyang Wang, Yang You
| arXiv | paper | code |
Concerto: Automatic Communication Optimization and Scheduling for Large-Scale Deep Learning
Shenggan Cheng, Shengjie Lin, Lansong Diao, Hao Wu, Siyu Wang, Chang Si, Ziming Liu, Xuanlei Zhao, Jiangsu Du, Wei Lin, Yang You
| ASPLOS 2025 | paper |
2024
Faster Vision Mamba is Rebuilt in Minutes via Merged Token Re-training
Mingjia Shi, Yuhao Zhou, Ruiji Yu, Zekai Li, Zhiyuan Liang, Xuanlei Zhao, Xiaojiang Peng, Tanmay Rajpurohit, Shanmukha Ramakrishna Vedantam, Wangbo Zhao, Kai Wang, Yang You
| arXiv | paper | code |
Training Variable Sequences with Data-Centric Parallel
Geng Zhang*, Xuanlei Zhao*, Kai Wang†, Yang You†
| arXiv | code | blog |
Real-Time Video Generation with Pyramid Attention Broadcast
Xuanlei Zhao*, Xiaolong Jin*, Kai Wang*, Yang You
| ICLR 2025 | paper | code | blog |
DSP: Dynamic Sequence Parallelism for Multi-Dimensional Transformers
Xuanlei Zhao, Shenggan Cheng, Chang Chen, Zangwei Zheng, Ziming Liu, Zheming Yang, Yang You
| arXiv | paper | code |
HeteGen: Heterogeneous Parallel Inference for Large Language Models on Resource-Constrained Devices
Xuanlei Zhao*, Bin Jia*, Haotian Zhou*, Ziming Liu, Shenggan Cheng, Yang You
| MLSys 2024 | paper |
FastFold: Optimizing AlphaFold Training and Inference on GPU Clusters
Shenggan Cheng, Xuanlei Zhao, Guangyang Lu, Jiarui Fang, Tian Zheng, Ruidong Wu, Xiwen Zhang, Jian Peng, Yang You
| PPoPP 2024 | paper | code |
AutoChunk: Automated Activation Chunk for Memory-Efficient Long Sequence Inference
Xuanlei Zhao, Shenggan Cheng, Guangyang Lu, Jiarui Fang, Haotian Zhou, Bin Jia, Ziming Liu, Yang You
| ICLR 2024 | paper | code |
WallFacer: Guiding Transformer Model Training Out of the Long-Context Dark Forest with N-Body Problem
Ziming Liu, Shaoyu Wang, Shenggan Cheng, Zhongkai Zhao, Kai Wang, Xuanlei Zhao, James Demmel, Yang You
| arXiv | paper |
* indicates equal contribution; † indicates equal corresponding authorship.