The Next Motion Generation | 下一代人体动作生成

Image credit: Unsplash

Intro

Welcome to The Next Motion Generation, presented by Chris Xin Chen. This presentation surveys recent advances in motion generation and human agents across two key areas: motion generation, covering MotionGPT and Motion-Latent-Diffusion, and the autonomous human agent, grounded in foundational visual-language models (VLMs). Recent language models have demonstrated their adeptness at conducting multi-turn dialogues and retaining conversational context. By using multi-turn conversation to control continuous virtual human movements, generative human motion models can support an intuitive, step-by-step process of human task execution for humanoid robotics, game agents, and other embodied systems. If we discretize text, images, and motion into tokens, could we build an autonomous human agent on the foundational strengths of VLMs? This presentation explores that question. For more details, please visit motion-gpt.github.io.
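The core idea of discretizing motion into tokens can be sketched with a toy vector-quantization step: continuous pose features snap to their nearest entry in a learned codebook, yielding discrete token ids that a language model can treat like words. This is a minimal illustrative sketch, not the actual MotionGPT implementation; the function name, shapes, and toy codebook are assumptions for demonstration.

```python
import numpy as np

def quantize_motion(frames: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each motion frame to the id of its nearest codebook vector.

    frames:   (T, D) continuous pose features, one row per frame
    codebook: (K, D) learned code vectors; row index = motion token id
    returns:  (T,)  discrete motion token ids
    """
    # Squared Euclidean distance from every frame to every code vector
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Toy example: a random 8-entry codebook of 4-dim features (hypothetical sizes)
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))

# Frames built from codes 2, 5, 5, 0 plus small noise should snap back to them
frames = codebook[[2, 5, 5, 0]] + 0.01 * rng.normal(size=(4, 4))
tokens = quantize_motion(frames, codebook)
print(tokens.tolist())
```

Once motion is tokenized this way, motion "sentences" can share a vocabulary with text tokens, which is what lets a single VLM-style model read and generate both modalities.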

Xin Chen
陈欣 | Research Scientist

My research interests include generative AI, human agents, 3D content generation, and human motion generation.