Recognizing actions from a limited set of labeled videos remains a challenge, as annotating visual data is not only tedious but can also be expensive owing to the classified nature of some data. Moreover, handling spatiotemporal data with deep 3D transformers introduces significant computational complexity. In this paper, our objective is to address
video action recognition in a semi-supervised setting by leveraging only
a handful of labeled videos along with a collection of unlabeled videos
in a compute-efficient manner. Specifically, we rearrange multiple frames
from each input video in a row-column grid to construct super images.
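The frame-to-grid rearrangement described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the near-square grid heuristic, and the zero-padding of unused cells are assumptions.

```python
import numpy as np

def make_super_image(frames):
    """Tile T video frames of shape (T, H, W, C) into a single
    near-square row-column grid, padding any unused cells with zeros."""
    t, h, w, c = frames.shape
    side = int(np.ceil(np.sqrt(t)))          # grid side length
    grid = np.zeros((side * h, side * w, c), dtype=frames.dtype)
    for i in range(t):
        row, col = divmod(i, side)           # fill left-to-right, top-to-bottom
        grid[row * h:(row + 1) * h, col * w:(col + 1) * w] = frames[i]
    return grid

# Example: 8 frames of 32x32 RGB become a 3x3 grid (96x96), last cell blank.
frames = np.random.rand(8, 32, 32, 3).astype(np.float32)
super_img = make_super_image(frames)
```

The resulting super image can then be fed to any standard 2D image backbone, which is what lets the method avoid 3D spatiotemporal attention.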
Subsequently, we capitalize on the vast pool of unlabeled samples and
employ contrastive learning on the encoded super images. Our proposed
approach employs two pathways to generate representations for temporally augmented super images originating from the same video. We utilize a 2D image transformer to produce these representations and apply a contrastive loss that minimizes the similarity between representations of different videos while maximizing the similarity between representations of the same video. Our method demonstrates superior performance compared to existing state-of-the-art approaches for semi-supervised action recognition across various benchmark datasets, all while significantly reducing the computational cost.
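A contrastive objective of the kind described, pulling together two augmented views of the same video and pushing apart views of different videos, can be sketched with an NT-Xent-style loss. This is an illustrative assumption; the paper's exact loss formulation, temperature, and pathway details may differ.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.1):
    """NT-Xent-style contrastive loss.
    z1, z2: (N, D) embeddings of two temporally augmented super images
    from the same N videos; row i of z1 and row i of z2 are a positive pair."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity space
    sim = z @ z.T / tau
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # positive for row i in [0, n) is row i + n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this loss increases the similarity of same-video pairs relative to all cross-video pairs in the batch, matching the objective stated above.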