Objects that sound
Relja Arandjelovic
(DeepMind, UK)
Abstract:
We consider the question: what can be learnt by looking at and
listening to a large number of unlabelled videos? The video itself
contains a valuable, but so far untapped, source of information: the
correspondence between the visual and the audio streams. We introduce
a novel "Audio-Visual Correspondence" (AVC) learning task that
exploits this correspondence. We show that visual and audio networks
trained from scratch, with no supervision other than the raw
unconstrained videos themselves, successfully solve this task and,
more interestingly, learn good visual and audio representations.
These features set the new state-of-the-art on
two sound classification benchmarks, and perform on par with the
state-of-the-art self-supervised approaches on ImageNet
classification. We also design a network that can learn to embed
audio and visual inputs into a common space that is suitable for
cross-modal retrieval, and a network that can localize the object
that sounds in an image, given the audio signal. We achieve all of
these objectives by training from unlabelled video, using only
audio-visual correspondence as the objective function.
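The abstract does not spell out the training setup, but the AVC task
can be sketched as binary classification over (frame, audio) pairs:
positives take the frame and audio from the same video, negatives pair
a frame with audio from a different video. Below is a minimal NumPy
sketch of this pairing and scoring; the encoders, embedding sizes and
the dot-product score are all assumed for illustration (the actual
networks are ConvNets whose embeddings are fused and classified by a
small head).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two sub-networks; in the real system these are
# a vision ConvNet on a frame and an audio ConvNet on a spectrogram.
D_IMG, D_AUD, D_EMB = 512, 257, 128
W_img = rng.normal(scale=0.01, size=(D_IMG, D_EMB))  # assumed image encoder
W_aud = rng.normal(scale=0.01, size=(D_AUD, D_EMB))  # assumed audio encoder

def embed_image(x):
    return x @ W_img

def embed_audio(a):
    return a @ W_aud

def avc_pairs(frames, audios):
    """Build AVC training pairs from unlabelled videos.

    Positives: frame and audio from the same video (label 1).
    Negatives: frame paired with audio from another video (label 0).
    """
    n = len(frames)
    pos = [(frames[i], audios[i], 1) for i in range(n)]
    neg = []
    for i in range(n):
        j = rng.integers(n - 1)
        j = j if j < i else j + 1          # any other video
        neg.append((frames[i], audios[j], 0))
    return pos + neg

def avc_score(frame, audio):
    """Correspondence score: a dot product of the two embeddings here;
    the real network fuses the embeddings and classifies with an MLP."""
    return float(embed_image(frame) @ embed_audio(audio))

# Fake features standing in for frames / audio from unlabelled videos.
frames = rng.normal(size=(8, D_IMG))
audios = rng.normal(size=(8, D_AUD))
for f, a, label in avc_pairs(frames, audios)[:4]:
    print(label, avc_score(f, a))
```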
Time permitting, I will also present our latest work on training
models that are verifiably robust to norm-bounded adversarial
perturbations. We show that a careful implementation of a simple
bounding technique, interval bound propagation, can be exploited to
train verifiably robust neural networks that beat the
state-of-the-art in verified accuracy.
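Interval bound propagation itself is easy to illustrate: given
elementwise lower and upper bounds on a layer's input, an affine layer
maps the interval's centre through the weights and its radius through
their absolute values, and a monotone activation such as ReLU is
applied to both bounds. A minimal NumPy sketch for a two-layer network
follows; the weights, sizes and perturbation radius are arbitrary
placeholders, not values from the work described above.

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate an axis-aligned box through z = W x + b."""
    centre = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    z_centre = W @ centre + b
    z_radius = np.abs(W) @ radius        # worst case over the input box
    return z_centre - z_radius, z_centre + z_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so apply it to both bounds directly."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                   # nominal input
eps = 0.1                                # L-infinity perturbation radius
lower, upper = x - eps, x + eps

W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
lower, upper = ibp_relu(*ibp_affine(lower, upper, W1, b1))

W2, b2 = rng.normal(size=(3, 5)), rng.normal(size=3)
lower, upper = ibp_affine(lower, upper, W2, b2)

# 'lower'/'upper' now bound the logits for every input within eps of x;
# training on a loss built from these worst-case bounds is what yields
# verifiable robustness.
print(lower, upper)
```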