Conditional Probability Estimation
An online journal club on conditional probability estimation. We cover all sorts of probabilistic approaches, such as VAEs, normalizing flows, graph neural networks, and probabilistic time series forecasting.
Introduction: Conditional Probability Estimation
24 Graph Neural Networks: PyTorch
References:
- Tutorial 7: Graph Neural Networks — UvA DL Notebooks v1.1 documentation. [cited 2 Nov 2021]. Available: https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/tutorial7/GNN_overview.html
- Hamilton WL. Graph representation learning. Synth lect artif intell mach learn. 2020;14: 1–159. doi:10.2200/s01045ed1v01y202009aim046
Summary: PyTorch tutorials on GNN
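The tutorial centers on graph convolutions. As a warm-up, here is a minimal NumPy sketch of the GCN propagation rule $H' = \sigma(\hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} H W)$; the toy graph and variable names are our own, not the tutorial's code:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

# toy path graph: 0 - 1 - 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3)                                     # one-hot node features
W = np.random.default_rng(0).normal(size=(3, 4))  # stand-in for learned weights
out = gcn_layer(A, H, W)
print(out.shape)                                  # (3, 4)
```

The symmetric normalization keeps the scale of node features stable regardless of node degree.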
23 Graph Neural Networks
Summary: Chapter 5 of Hamilton1.
Hamilton2020 Hamilton WL. Graph representation learning. Synth lect artif intell mach learn. 2020;14: 1–159. doi:10.2200/s01045ed1v01y202009aim046 ↩︎
22 Graph Neural Networks: Basics (2)
Summary: We will continue the discussion on Graph Neural Networks.
Outline: problems of using graphs; graph neural networks. Textbook: Hamilton1
Hamilton2020 Hamilton WL. Graph representation learning. Synth lect artif intell mach learn. 2020;14: 1–159. doi:10.2200/s01045ed1v01y202009aim046 ↩︎
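One of the problems with graph-structured data is that node ordering is arbitrary, which is why GNN layers are built to be permutation-equivariant. A quick sanity check (our own toy example) that sum-style message passing commutes with a node permutation:

```python
import numpy as np

def message_pass(A, H):
    """Sum neighbor features and keep the node's own features."""
    return A @ H + H

A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
H = np.random.default_rng(0).normal(size=(3, 2))

perm = [2, 0, 1]
P = np.eye(3)[perm]                           # permutation matrix
out_of_permuted = message_pass(P @ A @ P.T, P @ H)
permuted_output = P @ message_pass(A, H)
print(np.allclose(out_of_permuted, permuted_output))  # True: equivariant
```

Relabeling the nodes before or after the layer gives the same result, so the layer never depends on an arbitrary node ordering.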
21 Graph Neural Networks: Basics
Summary: This will be the beginning of a new topic: Graph Neural Networks. In this new series, we will use the textbook by Hamilton1. For the first episode, we will discuss some basics about graphs to make sure we are all on the same page.
@Steven will lead the discussion.
Hamilton2020 Hamilton WL. Graph representation learning. Synth lect artif intell mach learn. 2020;14: 1–159. doi:10.2200/s01045ed1v01y202009aim046 ↩︎
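As a companion to the graph basics, a short sketch of the two standard representations (edge list and adjacency matrix) and node degrees, on a toy graph of our own:

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]      # undirected edge list
n = 4

A = np.zeros((n, n))                          # adjacency matrix
for u, v in edges:
    A[u, v] = A[v, u] = 1.0                   # symmetric for an undirected graph

degrees = A.sum(axis=1)                       # degree of each node
print(degrees.tolist())                       # [2.0, 2.0, 3.0, 1.0]
```

The edge list is compact for sparse graphs, while the adjacency matrix makes degree and neighborhood operations simple matrix algebra.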
20 Self-supervised Learning: Theories (Part 2)
References:
- Liu X, Zhang F, Hou Z, Wang Z, Mian L, Zhang J, et al. Self-supervised Learning: Generative or Contrastive. arXiv [cs.LG]. 2020. Available: http://arxiv.org/abs/2006.08218
- Wang T, Isola P. Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere. arXiv [cs.LG]. 2020. Available: http://arxiv.org/abs/2005.10242
- Newell A, Deng J. How Useful is Self-Supervised Pretraining for Visual Tasks? arXiv [cs.CV]. 2020. Available: http://arxiv.org/abs/2003.14323
- Tschannen M, Djolonga J, Rubenstein PK, Gelly S, Lucic M. On Mutual Information Maximization for Representation Learning. arXiv [cs.LG]. 2019. Available: http://arxiv.org/abs/1907.13625
- van den Oord A, Li Y, Vinyals O. Representation learning with Contrastive Predictive Coding. arXiv [cs.LG]. 2018. Available: http://arxiv.org/abs/1807.03748
- Chen X, Duan Y, Houthooft R, Schulman J, Sutskever I, Abbeel P. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. arXiv [cs.LG]. 2016. Available: http://arxiv.org/abs/1606.03657
- Nowozin S, Cseke B, Tomioka R. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization. arXiv [stat.ML]. 2016. Available: http://arxiv.org/abs/1606.00709
Summary: Theories of self-supervised learning
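Several of the listed papers (notably van den Oord et al.'s CPC, and Wang & Isola's analysis) revolve around the InfoNCE contrastive objective. A minimal NumPy sketch of the loss with positives on the diagonal; the temperature, shapes, and function name are illustrative choices of ours:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: z1[i] should match z2[i] against all other z2[j]."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # unit-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                      # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                   # positives on diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))  # two close "views"
unrelated = info_nce(z, rng.normal(size=(8, 16)))           # random pairing
print(aligned < unrelated)  # True: aligned views give a lower loss
```

Minimizing this loss pulls positive pairs together and pushes the rest apart, which is exactly the alignment/uniformity decomposition studied by Wang & Isola.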
19 Self-supervised Learning: Theories (Part 1)
Summary: Theories of self-supervised learning
18 Self-supervised Learning: GAN
References:
- Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative Adversarial Networks. arXiv [stat.ML]. 2014. Available: http://arxiv.org/abs/1406.2661
- Mirza M, Osindero S. Conditional Generative Adversarial Nets. arXiv [cs.LG]. 2014. Available: http://arxiv.org/abs/1411.1784
- Goodfellow I. NIPS 2016 Tutorial: Generative Adversarial Networks. arXiv [cs.LG]. 2016. Available: http://arxiv.org/abs/1701.00160
- GAN Course Introduction - Intuitive Intro To Generative Adversarial Networks. [cited 1 Aug 2021]. Available: https://deeplizard.com/lesson/gaa1ilrazd
- Liu X, Zhang F, Hou Z, Wang Z, Mian L, Zhang J, et al. Self-supervised Learning: Generative or Contrastive. arXiv [cs.LG]. 2020. Available: http://arxiv.org/abs/2006.08218
Summary: Let's talk about GAN this time.
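Goodfellow et al.'s value function $V(D, G) = \mathbb{E}[\log D(x)] + \mathbb{E}[\log(1 - D(G(z)))]$ can be illustrated in one dimension with a fixed sigmoid discriminator and a shift-only generator; this is our own toy simplification, not code from the listed references:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    """Toy discriminator: sigmoid of a fixed linear score."""
    return 1.0 / (1.0 + np.exp(-w * x))

def gan_value(w, g_shift, n=5000):
    """V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    real = rng.normal(loc=2.0, size=n)        # "data" distribution N(2, 1)
    fake = rng.normal(size=n) + g_shift       # generator: shifted noise
    return (np.mean(np.log(discriminator(real, w))) +
            np.mean(np.log(1.0 - discriminator(fake, w))))

# G minimizes V: moving its samples onto the data lowers the value
v_far = gan_value(1.0, 0.0)     # generator far from the data
v_close = gan_value(1.0, 2.0)   # generator matches the data mean
print(v_far > v_close)          # True
```

In the full algorithm, D and G alternate gradient steps on this same quantity, D ascending and G descending.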
17 Self-supervised Learning: Generative or Contrastive
Summary: Some biological neural network knowledge
15 Predictive Coding Approximates Backprop along Arbitrary Computation Graphs
Summary: In this meetup, we will discuss this paper: https://arxiv.org/abs/2006.04182
Why? In a feedforward network trained with backprop, the loss function involves all the parameters at once: backprop requires gradients of one huge global loss $\mathcal{L}(\{w_{ij}\})$. However, it is hard to imagine such a global loss calculation taking place in the brain. One alternative is predictive coding, which only uses local connection information.
In this paper (2006.04182), the authors prove the equivalence of backprop and predictive coding on arbitrary computation graphs.
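The local dynamics can be sketched for a two-layer chain (our own simplification of the paper's setup): each hidden layer keeps a value node and a prediction error, and relaxing the value nodes uses only errors from adjacent layers, never a global loss.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 3)), rng.normal(size=(3, 1))
x0 = rng.normal(size=(1, 2))       # input layer (clamped)
y = np.array([[1.0]])              # output layer (clamped to the target)

def energy(x1):
    """Sum of squared local prediction errors over the chain."""
    e1 = x1 - np.tanh(x0 @ W1)     # layer 1 predicted from layer 0
    e2 = y - x1 @ W2               # output predicted from layer 1
    return 0.5 * float(e1 @ e1.T + e2 @ e2.T)

x1 = np.tanh(x0 @ W1)              # initialize with a forward pass
f0 = energy(x1)
for _ in range(50):                # inference phase: relax the value node
    e1 = x1 - np.tanh(x0 @ W1)
    e2 = y - x1 @ W2
    x1 = x1 + 0.05 * (-e1 + e2 @ W2.T)   # only adjacent-layer errors appear
print(energy(x1) < f0)             # True: relaxation lowers the energy
```

After this relaxation converges, the paper's result is that the equilibrium errors match the backprop gradients, so purely local weight updates reproduce backprop.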
14 Energy-based Models 5
References:
- PyTorch Deep Learning Lectures
- PyTorch Deep Learning Slides
- A high-bias, low-variance introduction to Machine Learning for physicists
Summary: We will discuss RBM and training in this session.
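A minimal sketch of one CD-1 (contrastive divergence) weight update for a binary RBM, in the standard textbook form; biases are omitted for brevity and the variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.1):
    """One contrastive-divergence (CD-1) update of the RBM weights."""
    ph0 = sigmoid(v0 @ W)                      # P(h = 1 | v0), positive phase
    h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sample the hidden units
    pv1 = sigmoid(h0 @ W.T)                    # reconstruct the visible units
    ph1 = sigmoid(pv1 @ W)                     # hidden probs, negative phase
    return W + lr * (v0.T @ ph0 - pv1.T @ ph1)

v = np.array([[1.0, 0.0, 1.0, 0.0]])           # one binary training vector
W = 0.1 * rng.normal(size=(4, 2))              # 4 visible, 2 hidden units
for _ in range(100):
    W = cd1_step(v, W)
print(W.shape)  # (4, 2)
```

The positive phase uses the data, the negative phase a one-step reconstruction; CD-1 approximates the intractable model expectation in the log-likelihood gradient with that single Gibbs step.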
13 Energy-based Models 4
References:
- PyTorch Deep Learning Lectures
- PyTorch Deep Learning Slides
- A high-bias, low-variance introduction to Machine Learning for physicists
Summary: We will discuss energy-based learning in this session.