Week 8 – Practicum: Variational autoencoders
21 May 2020

Course website: http://bit.ly/pDL-home
Playlist: http://bit.ly/pDL-YouTube
Speaker: Alfredo Canziani
Week 8: http://bit.ly/pDL-en-08

0:00:00 – Week 8 – Practicum

PRACTICUM: http://bit.ly/pDL-en-08-3
In this section, we discussed a specific type of generative model called the variational autoencoder (VAE) and compared its functionality and advantages over the classic autoencoder. We explored the VAE objective function in detail, understanding how it enforces structure in the latent space. Finally, we implemented and trained a VAE on the MNIST dataset and used it to generate new samples.
0:02:35 – Autoencoders (AEs) vs. variational autoencoders (VAEs)
0:16:37 – Understanding the VAE objective function
0:31:33 – Notebook example for variational autoencoder

#Deep Learning #Yann LeCun #autoencoder #over-complete #generative #variational autoencoder #posterior #prior #KL divergence #relative entropy #PyTorch
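
The notebook linked above trains a VAE on MNIST in PyTorch. Below is a rough, self-contained sketch of the pieces the chapters walk through; the layer sizes, names, and loss weighting here are illustrative assumptions, not necessarily the notebook's exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal fully connected VAE for 28x28 MNIST digits (illustrative sizes)."""
    def __init__(self, d=20):
        super().__init__()
        self.d = d
        # Encoder outputs 2*d values per image: d means and d log-variances
        self.encoder = nn.Sequential(
            nn.Linear(784, 400), nn.ReLU(),
            nn.Linear(400, 2 * d),
        )
        # Decoder maps a latent code back to pixel space
        self.decoder = nn.Sequential(
            nn.Linear(d, 400), nn.ReLU(),
            nn.Linear(400, 784), nn.Sigmoid(),
        )

    def reparameterise(self, mu, logvar):
        # Training: sample z = mu + sigma * eps; testing: return mu deterministically
        if self.training:
            std = (logvar / 2).exp()
            eps = torch.randn_like(std)
            return mu + eps * std
        return mu

    def forward(self, x):
        mu, logvar = self.encoder(x.view(-1, 784)).chunk(2, dim=1)
        z = self.reparameterise(mu, logvar)
        return self.decoder(z), mu, logvar

def loss_function(x_hat, x, mu, logvar):
    # Reconstruction term: per-pixel binary cross-entropy ...
    bce = F.binary_cross_entropy(x_hat, x.view(-1, 784), reduction='sum')
    # ... plus the KL divergence between N(mu, sigma^2 I) and the prior N(0, I)
    kld = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1 - logvar)
    return bce + kld
```

Once trained, new samples are generated by drawing z from the prior N(0, I_d) and passing it through the decoder alone.
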
Comments

00:19:32 – "I loved the way you are using the concepts of linear algebra (@) – at the end it's all vectors and transformations :) You are a great mentor! Note that I did not say 'coach', because you are equipping each of us with skills that can solve most problems, not just one :) Huge fan of your lectures & advice :)"

00:24:48 – "This was an intuitive explanation, yet grounded in math. Such a delicate balance! Thanks for doing this! Also, @ I agree the bubble-of-bubbles is indeed cute. 😄"

00:28:04 – "I do not understand why N(0, I_d) has 0 as its mean. According to the picture, you said the KL loss is enforcing z to be in small bubbles with different centres, but each centre must be at some point, not at 0. So the KL loss should construct something like in the picture at https://www.youtube.com/watch?v=bbOFvxbMIV0, but actually there are no bubbles."

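For reference, a standard closed-form result (not a quote from the lecture): with a per-sample posterior N(mu(x), sigma^2(x) I_d) and the fixed prior N(0, I_d), the KL term of the VAE objective is

```latex
\ell_{\mathrm{KL}}(\mu,\sigma)
  = D_{\mathrm{KL}}\!\left(\mathcal{N}(\mu,\sigma^{2} I_d)\,\middle\|\,\mathcal{N}(0, I_d)\right)
  = \frac{1}{2}\sum_{i=1}^{d}\left(\sigma_i^{2} + \mu_i^{2} - 1 - \log\sigma_i^{2}\right).
```

Only the prior is centred at 0; each input keeps its own bubble centre mu(x). The KL term merely pulls those centres toward the origin and the radii toward 1, while the reconstruction term pushes the bubbles apart so that different inputs remain distinguishable.
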
00:52:00 – "This was a great explanation; however, I don't understand why we don't want to do the reparameterisation trick during testing and only return the mu. I would assume we would always want to sample from the latent distribution before passing it to the decoder. Making the encoder give a deterministic output (just the mu) during testing would defeat the purpose of variational autoencoders, right?"

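For context, a common convention (used in the sketch above; whether the course notebook does exactly the same is an assumption here) is to branch on self.training: sample z during training, return mu for a deterministic reconstruction at evaluation time, and generate genuinely new images by sampling from the prior instead of the posterior.

```python
import torch

model = VAE()                               # the sketch defined earlier
model.eval()                                # eval mode: reparameterise() returns mu only

x = torch.rand(16, 1, 28, 28)               # stand-in for a batch of MNIST images
with torch.no_grad():
    x_hat, mu, logvar = model(x)            # deterministic reconstruction of x
    z = torch.randn(16, model.d)            # generation: sample z from the prior N(0, I)
    samples = model.decoder(z)              # 16 new MNIST-like digits
```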
