Contrast and Mix: Temporal Contrastive
Video Domain Adaptation with Background Mixing
Aadarsh Sahoo1
Rutav Shah1
Rameswar Panda2
Kate Saenko2,3
Abir Das1
1 IIT Kharagpur
2 MIT-IBM Watson AI Lab
3 Boston University
NeurIPS 2021

Abstract

Unsupervised domain adaptation, which aims to adapt models trained on a labeled source domain to a completely unlabeled target domain, has attracted much attention in recent years. While many domain adaptation techniques have been proposed for images, the problem of unsupervised domain adaptation in videos remains largely underexplored. In this paper, we introduce Contrast and Mix (CoMix), a new contrastive learning framework that aims to learn discriminative invariant feature representations for unsupervised video domain adaptation. First, unlike existing methods that rely on adversarial learning for feature alignment, we utilize temporal contrastive learning to bridge the domain gap by maximizing the similarity between encoded representations of an unlabeled video played at two different speeds and minimizing the similarity between different videos played at different speeds. Second, we propose a novel extension to the temporal contrastive loss by using background mixing, which allows additional positives per anchor, thus adapting contrastive learning to leverage action semantics shared across both domains. Moreover, we integrate a supervised contrastive learning objective using target pseudo-labels to enhance the discriminability of the latent space for video domain adaptation. Extensive experiments on several benchmark datasets demonstrate the superiority of our proposed approach over state-of-the-art methods.
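
To make the two ingredients above concrete, the PyTorch-style sketch below illustrates (i) a cross-speed InfoNCE loss that pulls together two encodings of the same video sampled at different frame rates while pushing apart encodings of different videos, and (ii) a simple background-mixing step that blends a static background frame into a clip to form an extra positive view. The function names, tensor shapes, temperature, and mixing ratio here are illustrative assumptions, not the exact CoMix formulation; the paper and released code define the actual objective and sampling scheme.

import torch
import torch.nn.functional as F

def temporal_contrastive_loss(z_fast, z_slow, temperature=0.1):
    # Cross-speed InfoNCE sketch: for each video i, the fast-speed embedding
    # z_fast[i] should match its own slow-speed embedding z_slow[i] (positive)
    # and repel the slow-speed embeddings of the other videos in the batch
    # (negatives). z_fast and z_slow are (B, D) features of the same clips
    # sampled at two frame rates; the video encoder itself is omitted here.
    z_fast = F.normalize(z_fast, dim=1)
    z_slow = F.normalize(z_slow, dim=1)
    logits = z_fast @ z_slow.t() / temperature        # (B, B) cosine similarities
    targets = torch.arange(z_fast.size(0), device=z_fast.device)
    # Symmetrize over the fast->slow and slow->fast directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def background_mix(clips, backgrounds, lam=0.75):
    # Background-mixing sketch: blend each clip (B, C, T, H, W) with a static
    # background frame (B, C, H, W), producing an additional positive view that
    # shares the action but not the background. The mixing ratio lam is an
    # assumed illustrative value, not the paper's setting.
    bg = backgrounds.unsqueeze(2)                     # broadcast over time: (B, C, 1, H, W)
    return lam * clips + (1.0 - lam) * bg

Blending in a background frame keeps the action content of the clip intact while varying the scene context, which is what lets the mixed clip act as an additional positive for its anchor and steers the contrastive objective toward action semantics rather than background cues.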

Experimental Results Overview

Results on UCF-HMDB Dataset.

Results on Jester and Epic-Kitchens Datasets.


Paper & Code

Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das
Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing
Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS), 2021
[PDF] [Supp] [Poster] [Slides] [Code]