Audio-Synchronized Visual Animation

Abstract

Current visual generation methods can produce high-quality videos guided by text prompts. However, effectively controlling object dynamics remains a challenge. This work explores audio as a cue for generating temporally synchronized image animations. We introduce Audio-Synchronized Visual Animation (ASVA), a task that animates a static image to exhibit motion dynamics, temporally guided by audio clips across multiple classes. To this end, we present AVSync15, a dataset curated from VGGSound whose videos feature synchronized audio-visual events across 15 categories. We also present AVSyncD, a diffusion model capable of generating dynamic animations guided by audio. Extensive evaluations validate AVSync15 as a reliable benchmark for synchronized generation and demonstrate our model's superior performance. We further explore AVSyncD's potential in a variety of audio-synchronized generation tasks, from generating full videos without a base image to controlling object motions with various sounds. We hope our established benchmark can open new avenues for controllable visual generation.

Published at: arXiv preprint (arXiv:2403.05659), 2024.

Paper

Bibtex

@article{zhang2024asva,
  title={Audio-Synchronized Visual Animation},
  author={Lin Zhang and Shentong Mo and Yijing Zhang and Pedro Morgado},
  journal={arXiv preprint arXiv:2403.05659},
  year={2024}
}