📍DisPose is a controllable human image animation method that improves video generation using motion field guidance and keypoint correspondence.
Controllable human image animation aims to generate videos from reference images using driving videos. Due to the limited control signals provided by sparse guidance (e.g., skeleton pose), recent works have attempted to introduce additional dense conditions (e.g., depth map) to ensure motion alignment. However, such strict dense guidance impairs the quality of the generated video when the body shape of the reference character differs significantly from that of the driving video. In this paper, we present DisPose to mine more generalizable and effective control signals without additional dense input, which disentangles the sparse skeleton pose in human image animation into motion field guidance and keypoint correspondence. Specifically, we generate a dense motion field from a sparse motion field and the reference image, which provides region-level dense guidance while maintaining the generalization of the sparse pose control. We also extract diffusion features corresponding to pose keypoints from the reference image, and then these point features are transferred to the target pose to provide distinct identity information. To seamlessly integrate into existing models, we propose a plug-and-play hybrid ControlNet that improves the quality and consistency of generated videos while freezing the existing model parameters. Extensive qualitative and quantitative experiments demonstrate the superiority of DisPose compared to current methods.
We propose DisPose, a plug-and-play guidance module that disentangles pose guidance and extracts robust control signals from only the skeleton pose map and the reference image, without additional dense inputs. Specifically, we disentangle pose guidance into motion field estimation and keypoint correspondence. First, we compute a sparse motion field from the skeleton pose. We then introduce a reference-based dense motion field that provides region-level motion signals through conditional motion propagation on the reference image. To enhance appearance consistency, we extract diffusion features corresponding to keypoints in the reference image; these point features are transferred to the target pose by computing multi-scale point correspondences from the motion trajectory. Architecturally, we implement these disentangled control signals in a ControlNet-like manner so they integrate into existing methods. Finally, the motion fields and point embeddings are injected into the latent video diffusion model, resulting in accurate human image animation.
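To make the two disentangled signals concrete, the sketch below builds a sparse motion field from paired keypoints and transfers reference diffusion features to the driving pose. This is a minimal single-scale illustration: the tensor layouts and the helper names (`sparse_motion_field`, `transfer_point_features`) are assumptions for exposition, not the paper's implementation, which additionally applies conditional motion propagation for the dense field and multi-scale correspondences for the point features.

```python
import torch
import torch.nn.functional as F

def sparse_motion_field(ref_kpts, drv_kpts, h, w):
    """Sparse motion field: keypoint displacement vectors rasterized
    onto an (h, w, 2) grid, zero elsewhere.
    ref_kpts, drv_kpts: (K, 2) normalized (x, y) coordinates in [0, 1]."""
    field = torch.zeros(h, w, 2)
    disp = drv_kpts - ref_kpts                               # (K, 2) displacements
    xs = (ref_kpts[:, 0] * (w - 1)).round().long().clamp(0, w - 1)
    ys = (ref_kpts[:, 1] * (h - 1)).round().long().clamp(0, h - 1)
    field[ys, xs] = disp
    return field

def transfer_point_features(ref_feats, ref_kpts, drv_kpts):
    """Sample diffusion features at reference keypoints and place them
    at the driving-pose keypoint locations (a simple point embedding).
    ref_feats: (1, C, H, W) feature map extracted from the reference image."""
    _, c, h, w = ref_feats.shape
    # grid_sample expects coordinates in [-1, 1] with layout (1, K, 1, 2)
    grid = ref_kpts.view(1, -1, 1, 2) * 2.0 - 1.0
    sampled = F.grid_sample(ref_feats, grid, align_corners=True)  # (1, C, K, 1)
    sampled = sampled.squeeze(-1).squeeze(0).t()                  # (K, C)

    point_embed = torch.zeros(c, h, w)
    xs = (drv_kpts[:, 0] * (w - 1)).round().long().clamp(0, w - 1)
    ys = (drv_kpts[:, 1] * (h - 1)).round().long().clamp(0, h - 1)
    point_embed[:, ys, xs] = sampled.t()
    return point_embed.unsqueeze(0)  # (1, C, H, W), e.g. as input to a hybrid ControlNet
```

In this reading, the motion field carries region-level "where things move" guidance while the point embedding carries "what this region looks like" identity cues from the reference, which is why the two can be injected jointly without dense inputs such as depth maps.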