Self-supervised Learning: Theories (Part 2)
#Self-supervised Learning
We will dive deep into Section 6 of the paper arXiv:2006.08218 (Liu et al., Self-supervised Learning: Generative or Contrastive). Here are a few topics to be explored:
- InfoGAN objective;
- Positive and negative samples in the loss function (InfoNCE); see the sketch after this list;
- Uniformity in contrastive loss;
- JS-divergence.
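As a concrete anchor for the discussion of InfoNCE and uniformity, here is a minimal PyTorch sketch. The function names, the temperature, and the cosine-similarity critic are illustrative assumptions, not taken from the papers; the uniformity term follows the definition in Wang & Isola (arXiv:2005.10242).

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query, positive, negatives, temperature=0.1):
    """InfoNCE sketch: score one positive pair against n negative pairs."""
    # l2-normalize so the dot product acts as a cosine-similarity critic.
    query = F.normalize(query, dim=-1)          # (d,)
    positive = F.normalize(positive, dim=-1)    # (d,)
    negatives = F.normalize(negatives, dim=-1)  # (n, d)

    l_pos = (query @ positive).unsqueeze(0) / temperature  # (1,)
    l_neg = (negatives @ query) / temperature              # (n,)
    logits = torch.cat([l_pos, l_neg])                     # (n + 1,)

    # Cross-entropy with the positive at index 0:
    # -log( exp(l_pos) / (exp(l_pos) + sum_i exp(l_neg_i)) )
    return -F.log_softmax(logits, dim=0)[0]

def uniformity_loss(embeddings, t=2.0):
    """Wang & Isola's uniformity term: log of the mean pairwise
    Gaussian potential on the hypersphere (lower = more uniform)."""
    embeddings = F.normalize(embeddings, dim=-1)  # (m, d)
    sq_dists = torch.pdist(embeddings, p=2).pow(2)
    return sq_dists.mul(-t).exp().mean().log()

# Illustrative usage with random embeddings:
q, pos, negs = torch.randn(128), torch.randn(128), torch.randn(64, 128)
print(info_nce_loss(q, pos, negs), uniformity_loss(negs))
```

Pulling the positive toward the query handles alignment; the uniformity term spreads all embeddings over the hypersphere, which is the decomposition Wang & Isola use to analyze contrastive losses.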
Planted by NeuronStar
- Time: 2021-09-11T14:30:00 - 2021-09-11T16:00:00 (CET)
- Platform: Lark
- More details on page
Conditional Probability Estimation
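A pointer for this topic, following van den Oord et al. (arXiv:1807.03748): the InfoNCE loss is a categorical cross-entropy for picking the one positive drawn from p(x|c) out of a set X of N candidates, its minimizer recovers a density ratio, and it lower-bounds the mutual information; this is why contrastive learning can be read as conditional probability estimation. A minimal statement in LaTeX (notation is ours, not the paper's):

```latex
\mathcal{L}_{\mathrm{InfoNCE}}
  = -\,\mathbb{E}_{X}\!\left[
      \log \frac{f(x, c)}{\sum_{x' \in X} f(x', c)}
    \right],
\qquad
f^{*}(x, c) \;\propto\; \frac{p(x \mid c)}{p(x)},
\qquad
I(x; c) \;\ge\; \log N - \mathcal{L}_{\mathrm{InfoNCE}}.
```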
References:
- Liu X, Zhang F, Hou Z, Wang Z, Mian L, Zhang J, et al. Self-supervised Learning: Generative or Contrastive. arXiv [cs.LG]. 2020. Available: http://arxiv.org/abs/2006.08218
- Wang T, Isola P. Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere. arXiv [cs.LG]. 2020. Available: http://arxiv.org/abs/2005.10242
- Newell A, Deng J. How Useful is Self-Supervised Pretraining for Visual Tasks? arXiv [cs.CV]. 2020. Available: http://arxiv.org/abs/2003.14323
- Tschannen M, Djolonga J, Rubenstein PK, Gelly S, Lucic M. On Mutual Information Maximization for Representation Learning. arXiv [cs.LG]. 2019. Available: http://arxiv.org/abs/1907.13625
- van den Oord A, Li Y, Vinyals O. Representation learning with Contrastive Predictive Coding. arXiv [cs.LG]. 2018. Available: http://arxiv.org/abs/1807.03748
- Chen X, Duan Y, Houthooft R, Schulman J, Sutskever I, Abbeel P. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. arXiv [cs.LG]. 2016. Available: http://arxiv.org/abs/1606.03657
- Nowozin S, Cseke B, Tomioka R. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization. arXiv [stat.ML]. 2016. Available: http://arxiv.org/abs/1606.00709
Current Ref:
- cpe/20.self-supervised-learning-theories-2.md