Research
My research interests center on developing virtual environments and digital avatars that can interact seamlessly with humans. To this end, I am currently focusing on human motion generation, as well as human-scene and human-human interaction.
DisCoRD: Discrete Tokens to Continuous Motion via Rectified Flow Decoding
Jungbin Cho*, Junwan Kim*, Jisoo Kim, Minseo Kim, Mingu Kang, Sungeun Hong, Tae-Hyun Oh, Youngjae Yu
Preprint
Generating smooth and natural motion by decoding discrete motion tokens with rectified flow.
DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation
Jisoo Kim*, Jungbin Cho*, Joonho Park, Soonmin Hwang, Da Eun Kim, Geon Kim, Youngjae Yu
AAAI, 2025
Generating dynamic emotional talking faces using probabilistic embeddings and a temporally hierarchical motion tokenizer.
AVIN-Chat: An Audio-Visual Interactive Chatbot System with Emotional State Tuning
Chanhyuk Park*, Jungbin Cho*, Junwan Kim*, Seongmin Lee, Jungsu Kim, Sanghoon Lee
IJCAI Demo, 2024
A chatbot system designed for face-to-face interactions, featuring customizable virtual avatars for personalized conversations.
VSCHH 2023: A Benchmark for the View Synthesis Challenge of Human Heads
Youngkyoon Jang, ..., Hyeseong Kim, Jungbin Cho, Dosik Hwang, ..., Stefanos Zafeiriou
ICCV Workshop, 2023
Reconstructing high-resolution 3D human heads from sparse input views.