Hi! I am Zhouqi Hua.
I am a first-year PhD student at Fudan University, in a joint program with the Large Model Center of Shanghai AI Laboratory. I am fortunate to be advised by Dr. Wenwei Zhang, Dr. Kai Chen, and Prof. Dahua Lin. Before that, I received my bachelor's degree from Tongji University in 2025, under the supervision of Prof. Yufei Chen and Prof. Zhangkai Ni.
My research focuses on generalization in LLMs, including length generalization and compositional generalization. Currently, I am interested in investigating the mathematical abilities of LLMs.

Lei Bai, Zhongrui Cai, ..., Zhouqi Hua, ..., Yu Qiao et al.
Technical Report
Intern-S1 is a large multimodal MoE foundation model trained with massive scientific data and mixture-of-rewards reinforcement learning, achieving SOTA performance in scientific reasoning and professional tasks while remaining competitive in general reasoning among open-source models.

Zhouqi Hua, Wenwei Zhang, Chengqi Lyu, Yuzhe Gu, Songyang Gao, Kuikun Liu, Dahua Lin, Kai Chen
International Conference on Learning Representations (ICLR) 2026
Turing Machine Imitation Learning (TAIL) is a synthetic chain-of-thought framework that instills Turing machine-like execution in LLMs, enabling robust length generalization for computable reasoning. On 18 challenging tasks, a 7B TAIL model outperforms the 671B DeepSeek-R1, establishing a new state of the art.