Tianyuan Mathematical Center in Central China, High Performance Computing Seminar Series | Weiran Huang, Researcher, Huawei Noah's Ark Lab

Posted: 2021-11-24 16:02

Title: The Generalization of Contrastive Self-Supervised Learning

Time: 2021-12-03, 10:00-11:00

Speaker: Weiran Huang, Researcher, Huawei Noah's Ark Lab

Zoom ID: 962 7874 3226  Password: 20211202

Abstract: Recently, self-supervised learning has attracted great attention since it requires only unlabeled data for training. Contrastive learning is a popular approach to self-supervised learning and performs well empirically. However, the theoretical understanding of its generalization ability on downstream tasks remains limited. To this end, we present a theoretical explanation of how contrastive self-supervised pre-trained models generalize to downstream tasks. Concretely, we quantitatively show that a self-supervised model generalizes well on downstream classification tasks if it embeds input data into a feature space with distinguishable class centers and closely clustered intra-class samples. Based on this conclusion, we further analyze SimCLR and Barlow Twins, two canonical contrastive self-supervised methods. We prove that either method can produce such a feature space, which explains their success in generalizing to downstream classification tasks. Finally, we conduct various experiments to verify our theoretical findings.
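The abstract's key condition, a feature space with distinguishable class centers and tightly clustered intra-class samples, can be made concrete with a small sketch. The function below is illustrative only (it is not taken from the talk and the function name is my own): it computes the average intra-class spread and the minimum distance between class centers, so that a small spread relative to the center separation indicates the kind of feature geometry the abstract describes.

```python
import numpy as np

def clustering_quality(features, labels):
    """Illustrative metric (not from the talk): quantify how well a
    feature space has distinguishable class centers and tightly
    clustered intra-class samples.

    features: (n, d) array of embedded samples
    labels:   (n,) array of class labels
    Returns (intra, inter): mean distance of samples to their class
    center, and minimum pairwise distance between class centers.
    """
    classes = np.unique(labels)
    # Class centers: mean embedding of each class.
    centers = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Intra-class spread: average distance of samples to their own center.
    intra = np.mean([
        np.linalg.norm(features[labels == c] - centers[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    # Inter-class separation: smallest distance between distinct centers.
    dists = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    inter = dists[~np.eye(len(classes), dtype=bool)].min()
    return intra, inter
```

Under the abstract's claim, features produced by a well-trained contrastive model should yield a small `intra` relative to `inter`, and this gap is what drives downstream classification accuracy.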