Self-Supervised Generation of Spatial Audio for 360° Video

Overview


We introduce an approach to convert mono audio recorded by a 360° video camera into spatial audio, a representation of the distribution of sound over the full viewing sphere. Spatial audio is an important component of immersive 360° video viewing, but spatial audio microphones are still rare in current 360° video production. Our system consists of end-to-end trainable neural networks that separate individual sound sources and localize them on the viewing sphere, conditioned on multi-modal analysis of audio and 360° video frames. We introduce several datasets, including one filmed by ourselves and one collected in the wild from YouTube, consisting of 360° videos uploaded with spatial audio. During training, the ground-truth spatial audio serves as self-supervision, and a mixed-down mono track forms the input to our network. Using our approach, we show that it is possible to infer the spatial location of sound sources based only on 360° video and a mono audio track.
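To make the self-supervision setup concrete, here is a minimal sketch (our own illustration, not the authors' code) of how a first-order ambisonic (B-format) recording with channels (W, X, Y, Z) can be mixed down to the mono network input: the omnidirectional W channel already captures sound from all directions, while the full four-channel signal remains the training target. Channel ordering and normalization conventions vary (FuMa vs. ACN/SN3D); this sketch assumes an unnormalized (W, X, Y, Z) layout.

```python
import numpy as np

def mono_from_foa(foa):
    """Mix first-order ambisonics (B-format) down to mono.

    foa: array of shape (4, T) with channels (W, X, Y, Z).
    The omnidirectional W channel serves as the mono network
    input; the full 4-channel signal is the self-supervision
    target during training.
    """
    return foa[0]

# Toy example: a 440 Hz source encoded from azimuth +90° (left), elevation 0.
t = np.linspace(0, 0.1, 4800, endpoint=False)   # one 0.1 s sample at 48 kHz
s = np.sin(2 * np.pi * 440 * t)
az = np.pi / 2
foa = np.stack([s,                     # W (omnidirectional pressure)
                s * np.cos(az),        # X (front-back)
                s * np.sin(az),        # Y (left-right)
                np.zeros_like(s)])     # Z (up-down)
mono = mono_from_foa(foa)
```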

Supplement

Repository

Poster

Bibtex

Dataset


First 360° video dataset with spatial audio (1st-order ambisonics).

1146 Videos

Scraped from YouTube or recorded by ourselves.

113 Hours

More than 3M training samples (0.1s each).

4 Partitions

Organized by level of difficulty.

Download

YouTube links are available for download.
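A quick consistency check on the sample count (our own back-of-the-envelope arithmetic, not a figure from the dataset page):

113 h × 3600 s/h ÷ 0.1 s per sample ≈ 4.07 × 10⁶ clips,

which is consistent with "more than 3M training samples" once a portion of the videos is presumably held out for evaluation.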

Models



Architecture: Schematic representation of our network.

Code

Training, evaluation and deployment code available on GitHub.

GitHub

Trained models

Get model checkpoints trained on each dataset.

Predictions


Click the videos to watch our model's predictions.

NOTE: The first video is the input to our model. The network output can be seen either as a colormap overlay that localizes the most salient sounds (second video) or by using headphones with the 360° video player (third video).
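One simple way such a localization colormap can be rendered from first-order ambisonics is standard FOA beamforming: steer a first-order beam toward each direction on a sphere grid and integrate its energy over time. This is a hedged sketch of that generic technique, not necessarily the rendering the authors used.

```python
import numpy as np

def foa_energy_map(foa, n_az=72, n_el=36):
    """Directional energy map from first-order ambisonics.

    foa: array of shape (4, T) with channels (W, X, Y, Z).
    Returns an (n_el, n_az) map whose peaks indicate the
    loudest directions on the viewing sphere.
    """
    w, x, y, z = foa
    az = np.linspace(-np.pi, np.pi, n_az, endpoint=False)
    el = np.linspace(-np.pi / 2, np.pi / 2, n_el)
    az_g, el_g = np.meshgrid(az, el)                 # (n_el, n_az)
    # Unit direction vectors for each grid cell.
    ux = np.cos(el_g) * np.cos(az_g)
    uy = np.cos(el_g) * np.sin(az_g)
    uz = np.sin(el_g)
    # First-order beam toward each direction: W + ux*X + uy*Y + uz*Z.
    beams = (w[None, None, :]
             + ux[..., None] * x
             + uy[..., None] * y
             + uz[..., None] * z)
    return (beams ** 2).mean(axis=-1)                # time-averaged energy

# Toy check: a 440 Hz source encoded from azimuth +90° (left), elevation 0.
t = np.linspace(0, 0.01, 480, endpoint=False)
s = np.sin(2 * np.pi * 440 * t)
foa = np.stack([s, np.zeros_like(s), s, np.zeros_like(s)])  # W, X, Y, Z
emap = foa_energy_map(foa)
```

The map's peak column should line up with the encoded azimuth, which is the property the colormap overlay exploits to highlight salient sound sources.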

Authors



Pedro Morgado

UC San Diego

Timothy Langlois

Adobe Research

Oliver Wang

Adobe Research