Imagine having a conversation with a socially intelligent agent. It can attentively listen to your words and offer visual and linguistic feedback promptly. This seamless interaction allows multiple rounds of conversation to flow smoothly and naturally. In pursuit of actualizing this vision, we propose INFP, a novel audio-driven head generation framework for dyadic interaction. Unlike previous head generation works that focus only on single-sided communication or require manual role assignment and explicit role switching, our model drives the agent portrait to dynamically alternate between speaking and listening states, guided by the input dyadic audio. Specifically, INFP comprises a Motion-Based Head Imitation stage and an Audio-Guided Motion Generation stage. The first stage learns to project facial communicative behaviors from real-life conversation videos into a low-dimensional motion latent space and uses the motion latent codes to animate a static image. The second stage learns the mapping from the input dyadic audio to motion latent codes through denoising, leading to audio-driven head generation in interactive scenarios. To facilitate this line of research, we introduce DyConv, a large-scale dataset of rich dyadic conversations collected from the Internet. Extensive experiments and visualizations demonstrate the superior performance and effectiveness of our method.
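To make the two-stage design concrete, below is a minimal, hypothetical Python/PyTorch sketch of the inference flow: a renderer animates a static portrait from a low-dimensional motion latent (Stage 1), and an audio-conditioned denoiser maps the dyadic audio of both conversation sides to that latent (Stage 2). All module names, dimensions, and the toy denoising sampler are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal, hypothetical sketch of INFP's two-stage inference pipeline.
# Module names, dimensions, and the toy sampler are assumptions for illustration.
import torch
import torch.nn as nn

MOTION_DIM = 64   # assumed size of the low-dimensional motion latent space
AUDIO_DIM = 128   # assumed size of per-frame dyadic audio features


class MotionRenderer(nn.Module):
    """Stage 1 (Motion-Based Head Imitation): animate a static image with motion latents."""

    def __init__(self):
        super().__init__()
        # Stand-in for a real warping/generator network.
        self.warp = nn.Linear(MOTION_DIM, 3 * 64 * 64)

    def forward(self, ref_image: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        # ref_image: (B, 3, 64, 64), motion: (B, MOTION_DIM)
        delta = self.warp(motion).view(-1, 3, 64, 64)
        return ref_image + delta  # animated frame


class AudioToMotionDenoiser(nn.Module):
    """Stage 2 (Audio-Guided Motion Generation): predict motion latents from dyadic audio by denoising."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(MOTION_DIM + 2 * AUDIO_DIM + 1, 256),
            nn.SiLU(),
            nn.Linear(256, MOTION_DIM),
        )

    def forward(self, noisy_motion, agent_audio, partner_audio, t):
        x = torch.cat([noisy_motion, agent_audio, partner_audio, t], dim=-1)
        return self.net(x)  # predicted denoised motion latent


@torch.no_grad()
def sample_motion(denoiser, agent_audio, partner_audio, steps: int = 10):
    """Toy iterative denoising loop (the paper's exact sampler may differ)."""
    motion = torch.randn(agent_audio.shape[0], MOTION_DIM)
    for step in reversed(range(steps)):
        t = torch.full((agent_audio.shape[0], 1), step / steps)
        motion = denoiser(motion, agent_audio, partner_audio, t)
    return motion


if __name__ == "__main__":
    renderer, denoiser = MotionRenderer(), AudioToMotionDenoiser()
    ref = torch.rand(1, 3, 64, 64)            # static agent portrait
    agent_audio = torch.randn(1, AUDIO_DIM)   # agent-side audio feature for one frame
    partner_audio = torch.randn(1, AUDIO_DIM) # partner-side audio feature for one frame
    motion = sample_motion(denoiser, agent_audio, partner_audio)
    frame = renderer(ref, motion)             # (1, 3, 64, 64) animated frame
    print(frame.shape)
```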
Existing interactive head generation methods (left) rely on manual role assignment and explicit role switching. Our proposed INFP (right) is a unified framework that can dynamically and naturally adapt to various conversational states.
Schematic illustration of INFP. The first stage (Motion-Based Head Imitation) learns to project facial communicative behaviors from real-life conversation videos into a low-dimensional motion latent space and uses the latent codes to animate a static image. The second stage (Audio-Guided Motion Generation) learns the mapping from the input dyadic audio to motion latent codes through denoising, achieving audio-driven interactive head generation.
Our method can generate motion-adapted synthesis results for the same reference image based on different audio inputs.
Our method can support non-human realistic portraits as well as side-face images.
Thanks to the fast inference speed of INFP (over 40 fps on an Nvidia Tesla A10), our method achieves real-time agent-agent communication. Human-agent interaction is also enabled.
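As a rough illustration of what this real-time budget implies, here is a minimal, self-contained Python sketch of a fixed-frame-rate generation loop; the model call is a stub and every timing value is an assumption, not a measurement of INFP.

```python
# Hypothetical real-time loop for a ~40 fps frame budget; the model call is stubbed.
import time

TARGET_FPS = 40
FRAME_BUDGET = 1.0 / TARGET_FPS   # ~25 ms per frame


def generate_frame(audio_chunk: bytes) -> bytes:
    """Stub standing in for audio-to-motion denoising plus rendering."""
    time.sleep(0.005)             # pretend inference takes ~5 ms (assumed value)
    return b"frame"


def run_realtime(audio_stream, max_frames: int = 100):
    for i, chunk in enumerate(audio_stream):
        if i >= max_frames:
            break
        start = time.perf_counter()
        frame = generate_frame(chunk)
        elapsed = time.perf_counter() - start
        # Sleep off any leftover budget so output stays locked to the target frame rate.
        time.sleep(max(0.0, FRAME_BUDGET - elapsed))
        yield frame


if __name__ == "__main__":
    fake_stream = (b"\x00" * 640 for _ in range(50))   # stand-in for dyadic audio chunks
    frames = list(run_realtime(fake_stream))
    print(f"rendered {len(frames)} frames at <= {TARGET_FPS} fps")
```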
Unlike existing methods, which must explicitly and manually switch roles between Listener and Speaker, our method can dynamically adapt to various states, leading to smoother and more natural results.
INFP can naturally adapt to related tasks such as talking-head or listening-head generation without any modification.
Our method generates results with high lip-sync accuracy, expressive facial expressions, and rhythmic head movements.
Our method can support singing and different languages.
Our method generates results with high fidelity, natural facial behaviors, and diverse head motions. Note that all results except ours are taken from the homepage of DIM.