Hello there! I am a second-year PhD student at HKU (the University of Hong Kong), advised by Prof. Ping Luo.
From February 2024 to October 2025, I worked as a research intern with the Humanoid Research team at Shanghai AI Lab, where I was fortunate to work with Dr. Jiangmiao Pang and Dr. Jingbo Wang.
I received both my M.Eng. and B.Eng. degrees from Tsinghua University under the supervision of Prof. Guijin Wang.
Research interests: whole-body control, reinforcement learning, and human-object interaction.
If you are interested in any of these topics, or would simply like to chat, feel free to drop me an email.
Junli Ren「任峻立」
Projects
SMASH: Mastering Scalable Whole-Body Skills for Humanoid Ping-Pong with Egocentric Vision
Junli Ren†,*, Yinghui Li†,*, Kai Zhang*, Penglin Fu*, Haoran Jiang, Yixuan Pan, Guangjun Zeng, Tao Huang, Weizhong Guo, Peng Lu, Tianyu Li, Jingbo Wang, Li Chen, Hongyang Li, Ping Luo‡
Humanoid Goalkeeper learns a single end-to-end RL policy that executes agile, human-like motions to intercept flying balls, and also performs tasks such as evading a ball with jump and squat motions.
VB-Com: Learning Vision-Blind Composite Humanoid Locomotion Against Deficient Perception
Junli Ren, Tao Huang, Huayi Wang, Zirui Wang, Qingwei Ben, Junfeng Long, Yanchao Yang, Jiangmiao Pang†, Ping Luo†
International Conference on Robotics and Automation (ICRA), 2026
We propose VB-Com, a composite framework that enables humanoid robots to determine when to rely on the vision policy and when to switch to the blind policy under perceptual deficiency.
AdaMimic: Towards Adaptable Humanoid Control via Adaptive Motion Tracking
We present a physical-world humanoid-scene interaction system, PhysHSI, that enables humanoids to autonomously perform diverse interaction tasks while maintaining natural and lifelike behaviors.
Learning Humanoid Standing-up Control across Diverse Postures
We present HoST (Humanoid Standing-up Control), a reinforcement learning framework that learns standing-up control from scratch, enabling robust sim-to-real transfer across diverse postures.
BeamDojo: Learning Agile Humanoid Locomotion on Sparse Footholds
BeamDojo achieves efficient learning in simulation and enables agile locomotion with precise foot placement on sparse footholds in the real world, maintaining a high success rate even under significant external disturbances.
Learning Humanoid Locomotion with Perceptive Internal Model
We propose the Perceptive Internal Model (PIM), a method that estimates environmental disturbances from perceptive information, enabling agile and robust locomotion for various humanoid robots across diverse terrains.
TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation
We propose TOP-Nav, a novel legged navigation framework that integrates a comprehensive path planner with Terrain awareness, Obstacle avoidance, and closed-loop Proprioception.