Jigsaw Self-Supervised Visual Representation Learning: An Applied Comparative Analysis Study

Kawashti, Yomna A.; Khattab, Dina; Aref, Mostafa M.

Abstract


Self-supervised learning has been gaining momentum in the computer vision community as a promising contender to replace supervised learning. It aims to leverage unlabeled data by training a network on a proxy task and then using transfer learning for a downstream task. Jigsaw is one of the proxy tasks used to learn better feature representations in self-supervised learning. In this work, we comparatively evaluated the transferability of jigsaw using different architectures and a different dataset for jigsaw training. The features extracted from each convolutional block were evaluated using a unified downstream task. The best performance was achieved by the shallower AlexNet architecture, whose second block achieved the best transferability with a mean average precision of 36.17. We conclude that this behavior could be attributed to the smaller scale of the dataset we used, so features extracted from earlier, shallower blocks transferred better to a dataset of a different domain.
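
For concreteness, the sketch below illustrates the jigsaw pretext task the abstract refers to, assuming a PyTorch-style setup: an image is split into a 3x3 grid of patches, the patches are shuffled according to one of a fixed set of permutations, and a network is trained to predict which permutation was applied. The permutation set size (100), the toy backbone, and names such as JigsawNet and make_jigsaw_sample are illustrative assumptions, not the authors' implementation; the standard jigsaw formulation selects permutations by maximal Hamming distance, while plain random sampling is used here for brevity.

import random

import torch
import torch.nn as nn

N_PERMUTATIONS = 100  # assumed size of the fixed permutation set (hypothetical)
GRID = 3              # 3x3 jigsaw grid, as in the standard jigsaw pretext task

# Fixed permutation set; the pretext label is the index of the permutation
# applied to the patches. (The standard jigsaw task picks permutations by
# maximal Hamming distance; random sampling is used here for brevity.)
rng = random.Random(0)
PERMUTATIONS = [tuple(rng.sample(range(GRID * GRID), GRID * GRID))
                for _ in range(N_PERMUTATIONS)]

def make_jigsaw_sample(image: torch.Tensor):
    """Split a CxHxW image into a 3x3 grid of patches and shuffle them.

    Returns the shuffled patch stack (9, C, H/3, W/3) and the index of the
    applied permutation, which serves as the pretext-task label.
    """
    c, h, w = image.shape
    ph, pw = h // GRID, w // GRID
    patches = [image[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(GRID) for j in range(GRID)]
    label = rng.randrange(N_PERMUTATIONS)
    shuffled = torch.stack([patches[p] for p in PERMUTATIONS[label]])
    return shuffled, label

class JigsawNet(nn.Module):
    """Shared conv backbone encodes each patch; a linear head predicts
    which permutation was applied (a classification over the fixed set)."""

    def __init__(self, n_classes: int = N_PERMUTATIONS):
        super().__init__()
        # Toy backbone standing in for the AlexNet-style conv blocks
        # whose per-block transferability the paper evaluates.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(GRID * GRID * 128, n_classes)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (B, 9, C, ph, pw) -> encode each patch independently,
        # then concatenate the nine patch embeddings for classification.
        b = patches.shape[0]
        feats = self.backbone(patches.flatten(0, 1))  # (B*9, 128)
        return self.head(feats.reshape(b, -1))        # (B, n_classes)

# One pretext training step on a dummy image.
image = torch.rand(3, 96, 96)
patches, label = make_jigsaw_sample(image)
model = JigsawNet()
logits = model(patches.unsqueeze(0))
loss = nn.functional.cross_entropy(logits, torch.tensor([label]))
loss.backward()

In the transfer step the abstract describes, the permutation head would be discarded and features would instead be read out of an intermediate convolutional block (e.g. the second block, which transferred best in the paper's experiments) to feed the unified downstream task.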


Other data

Keywords downstream task; feature extraction; representation learning; pretext task; self-supervised learning; jigsaw
Issue Date 1-Jan-2022
Journal MIUCC 2022 - 2nd International Mobile, Intelligent, and Ubiquitous Computing Conference 
Conference 2nd International Mobile, Intelligent, and Ubiquitous Computing Conference
ISBN 9781665466776
DOI 10.1109/MIUCC55081.2022.9781725
Scopus ID 2-s2.0-85132418259


