# Conditional Probability Estimation

An online discussion group on conditional probability estimation.

# References for Probability Estimation Club

Published: 2020-12-12


References:
- Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning. 2nd ed. Springer; 2009.
- Bishop CM. Pattern Recognition and Machine Learning. Springer; 2006.

Summary: A list of references for our online discussions.

Pages: 22

^{21} Graph Neural Networks: Basics

Published: 2021-09-11


Summary: This will be the beginning of a new topic: Graph Neural Networks. In this new series, we will use the textbook by Hamilton (Hamilton, 2020). For the first episode, we will discuss some basics about graphs to make sure we are all on the same page.
@Steven will lead the discussion.
References:
- Hamilton WL. Graph Representation Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. 2020;14: 1–159. doi:10.2200/s01045ed1v01y202009aim046


^{20} Self-supervised Learning: Theories (Part 2)

Published: 2021-08-26


References:
- Liu X, Zhang F, Hou Z, Wang Z, Mian L, Zhang J, et al. Self-supervised Learning: Generative or Contrastive. arXiv [cs.LG]. 2020. Available: http://arxiv.org/abs/2006.08218
- Wang T, Isola P. Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere. arXiv [cs.LG]. 2020. Available: http://arxiv.org/abs/2005.10242
- Newell A, Deng J. How Useful is Self-Supervised Pretraining for Visual Tasks? arXiv [cs.CV]. 2020. Available: http://arxiv.org/abs/2003.14323
- Tschannen M, Djolonga J, Rubenstein PK, Gelly S, Lucic M. On Mutual Information Maximization for Representation Learning. arXiv [cs.LG]. 2019. Available: http://arxiv.org/abs/1907.13625
- van den Oord A, Li Y, Vinyals O. Representation learning with Contrastive Predictive Coding. arXiv [cs.LG]. 2018. Available: http://arxiv.org/abs/1807.03748
- Chen X, Duan Y, Houthooft R, Schulman J, Sutskever I, Abbeel P. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. arXiv [cs.LG]. 2016. Available: http://arxiv.org/abs/1606.03657
- Nowozin S, Cseke B, Tomioka R. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization. arXiv [stat.ML]. 2016. Available: http://arxiv.org/abs/1606.00709

Summary: Theories of self-supervised learning


^{19} Self-supervised Learning: Theories (Part 1)

Published: 2021-08-26


Summary: Theories of self-supervised learning


^{18} Self-supervised Learning: GAN

Published: 2021-08-01


References:
- Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative Adversarial Networks. arXiv [stat.ML]. 2014. Available: http://arxiv.org/abs/1406.2661
- Mirza M, Osindero S. Conditional Generative Adversarial Nets. arXiv [cs.LG]. 2014. Available: http://arxiv.org/abs/1411.1784
- Goodfellow I. NIPS 2016 Tutorial: Generative Adversarial Networks. arXiv [cs.LG]. 2016. Available: http://arxiv.org/abs/1701.00160
- GAN Course Introduction - Intuitive Intro To Generative Adversarial Networks. [cited 1 Aug 2021]. Available: https://deeplizard.com/lesson/gaa1ilrazd
- Liu X, Zhang F, Hou Z, Wang Z, Mian L, Zhang J, et al. Self-supervised Learning: Generative or Contrastive. arXiv [cs.LG]. 2020. Available: http://arxiv.org/abs/2006.08218

Summary: Let's talk about GAN this time.


^{17} Self-supervised Learning: Generative or Contrastive

Published: 2021-07-03


Summary: Some biological neural network knowledge


^{15} Predictive Coding Approximates Backprop along Arbitrary Computation Graphs

Published: 2021-06-21


Summary: In this meetup, we will discuss this paper: https://arxiv.org/abs/2006.04182
Why? In a feedforward network trained with backprop, the loss couples all the parameters at once: backprop requires a single global loss $\mathcal{L}(\{w_{ij}\})$. However, it is hard to imagine such a global loss computation taking place in the brain. One alternative is predictive coding, which uses only local connection information.
In this paper (arXiv:2006.04182), the authors prove the equivalence of backprop and predictive coding on arbitrary computation graphs.
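The core idea can be sketched in a few lines of NumPy for the simplest case: a two-layer linear chain where, under the fixed-prediction assumption used in the paper, the equilibrium prediction errors of a predictive coding network reproduce the backprop weight updates. All shapes, step sizes, and variable names below are illustrative, not taken from the paper's code; the paper's actual result covers arbitrary computation graphs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear chain x0 -> x1 -> x2 with weights W1, W2 (shapes are arbitrary).
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x0 = rng.normal(size=3)   # input node, clamped
y = rng.normal(size=2)    # target, clamps the output node

# Feedforward predictions, held fixed during inference
# (the "fixed prediction assumption").
mu1 = W1 @ x0
mu2 = W2 @ mu1

# Inference phase: relax the hidden value node x1 so its prediction error
# balances the error fed back from the layer above. Each update uses only
# quantities local to x1 and its neighbours -- no global loss is evaluated.
x1 = mu1.copy()
e2 = y - mu2              # output-layer error, fixed while x2 = y is clamped
for _ in range(300):
    e1 = x1 - mu1                  # local prediction error at the hidden layer
    x1 += 0.1 * (-e1 + W2.T @ e2)  # local update toward equilibrium

# Learning phase: weight updates from purely local errors and activities.
e1 = x1 - mu1
dW1_pc = np.outer(e1, x0)
dW2_pc = np.outer(e2, mu1)

# Backprop updates for the global loss L = 0.5 * ||y - W2 @ W1 @ x0||^2.
delta2 = y - W2 @ (W1 @ x0)
dW2_bp = np.outer(delta2, W1 @ x0)
dW1_bp = np.outer(W2.T @ delta2, x0)
```

At equilibrium, `e1` converges to `W2.T @ e2`, so the local updates `dW1_pc` and `dW2_pc` coincide with the backprop updates `dW1_bp` and `dW2_bp`; the paper extends this equivalence beyond chains to arbitrary graphs.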


^{14} Energy-based Models 5

Published: 2021-06-02


References:
- PyTorch Deep Learning Lectures
- PyTorch Deep Learning Slides
- A high-bias, low-variance introduction to Machine Learning for physicists

Summary: We will discuss RBM and training in this session.


^{13} Energy-based Models 4

Published: 2021-05-26


References:
- PyTorch Deep Learning Lectures
- PyTorch Deep Learning Slides
- A high-bias, low-variance introduction to Machine Learning for physicists

Summary: We will discuss energy-based learning in this session.


^{12} Energy-based Models 3

Published: 2021-04-24


References:
- PyTorch Deep Learning Lectures
- PyTorch Deep Learning Slides
- A high-bias, low-variance introduction to Machine Learning for physicists

Summary: We will discuss energy-based learning in this session.


^{11} Energy-based Models 2

Published: 2021-02-27


Summary: We will discuss energy-based learning in this session.
