Zhouqi Hua
Ph.D. Student
Fudan University & Shanghai AI Lab
About Me

Hi šŸ‘‹ I am Zhouqi Hua (åŽę“²ē¦).

I am a first-year Ph.D. student at Fudan University, enrolled in the joint program with the Large Model Center of Shanghai AI Laboratory. I am fortunate to be advised by Dr. Wenwei Zhang, Dr. Kai Chen, and Prof. Dahua Lin. Before that, I received my bachelor's degree from Tongji University in 2025, under the supervision of Prof. Yufei Chen and Prof. Zhangkai Ni.

My research focuses on generalization in LLMs, including length generalization and compositional generalization. I am currently interested in investigating the mathematical abilities of LLMs.

Education
  • Shanghai AI Lab
    Research Intern @ Large Model Center
    Joint Ph.D. Student
    Sep. 2025 - present
  • Fudan University
    Ph.D. Student in Computer Science
    Sep. 2025 - present
  • Tongji University
    B.S. in Computer Science
    Sep. 2021 - Jul. 2025
Honors & Awards
  • šŸ„‡First Prize, National Intelligent Car Competition
    2024
  • 🄈Second Prize, CCCC-MAIC (Hosted by Apple Inc.)
    2024
  • šŸ„‡Tongji Excellent Student Scholarship (First Prize)
    2024
  • šŸ„‡Tongji Excellent Student Scholarship (First Prize)
    2023
News
2026
One paper (TAIL) accepted at ICLR 2026. See you in Rio!
Jan 27
2025
We released Intern-S1, an advanced open-source scientific multimodal reasoning model. Try it!
Aug 21
Selected Publications
Intern-S1: A Scientific Multimodal Foundation Model

Lei Bai, Zhongrui Cai, ..., Zhouqi Hua, ..., Yu Qiao et al.

Technical Report

Intern-S1 is a large multimodal MoE foundation model trained with massive scientific data and mixture-of-rewards reinforcement learning, achieving SOTA performance in scientific reasoning and professional tasks while remaining competitive in general reasoning among open-source models.

The Imitation Game: Turing Machine Imitator is Length Generalizable Reasoner

Zhouqi Hua, Wenwei Zhang, Chengqi Lyu, Yuzhe Gu, Songyang Gao, Kuikun Liu, Dahua Lin, Kai Chen

International Conference on Learning Representations (ICLR) 2026

Turing Machine Imitation Learning (TAIL) is a synthetic chain-of-thought framework that instills Turing machine–like execution in LLMs, enabling robust length generalization for computable reasoning. On 18 challenging tasks, a 7B TAIL model outperforms the 671B DeepSeek-R1, establishing a new state of the art.