Week 8 – Practicum: Variational autoencoders

Course website: http://bit.ly/DLSP20-web
Playlist: http://bit.ly/pDL-YouTube
Speaker: Alfredo Canziani
Week 8: http://bit.ly/DLSP20-08

0:00:00 – Week 8 – Practicum

PRACTICUM: http://bit.ly/DLSP20-08-3
In this section, we discussed a specific type of generative model called the variational autoencoder (VAE) and compared its functionality and advantages with those of the classic autoencoder. We explored the VAE objective function in detail, seeing how it enforces structure in the latent space. Finally, we implemented and trained a VAE on the MNIST dataset and used it to generate new samples.
0:02:35 – Autoencoders (AEs) vs. variational autoencoders (VAEs)
0:16:37 – Understanding the VAE objective function
0:31:33 – Notebook example for variational autoencoder
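
A minimal PyTorch sketch of the model covered in the notebook section above: an encoder that outputs the mean and log-variance of the posterior, the reparameterisation trick, and the two-term loss (reconstruction plus KL divergence to the prior). The layer sizes, the latent dimension d, and all names here are illustrative assumptions, not the exact notebook code (see http://bit.ly/DLSP20-08-3 for that).

import torch
from torch import nn

class VAE(nn.Module):
    def __init__(self, d=2):
        super().__init__()
        self.d = d
        # Encoder maps a flattened 28x28 image to (mu, logvar) of the posterior.
        self.encoder = nn.Sequential(
            nn.Linear(784, 256), nn.ReLU(),
            nn.Linear(256, 2 * d),
        )
        # Decoder maps a latent code back to pixel space.
        self.decoder = nn.Sequential(
            nn.Linear(d, 256), nn.ReLU(),
            nn.Linear(256, 784), nn.Sigmoid(),
        )

    def reparameterise(self, mu, logvar):
        if self.training:
            std = (logvar / 2).exp()
            return mu + std * torch.randn_like(std)   # z = mu + sigma * eps
        return mu                                     # deterministic at test time

    def forward(self, x):
        mu, logvar = self.encoder(x.view(-1, 784)).chunk(2, dim=1)
        z = self.reparameterise(mu, logvar)
        return self.decoder(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction term plus KL divergence to the prior N(0, I_d).
    bce = nn.functional.binary_cross_entropy(
        x_hat, x.view(-1, 784), reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld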

#Deep Learning #Yann LeCun #autoencoder #over-complete #generative #variational autoencoder #posterior #prior #KL divergence #relative entropy #PyTorch
– Week 8 – Practicum
May 21, 2020
00:00:00 - 00:02:35

– Autoencoders (AEs) vs. variational autoencoders (VAEs)
May 21, 2020
00:02:35 - 00:16:37

– Understanding the VAE objective function
May 21, 2020
00:16:37 - 00:31:33

I loved the way you are using the concepts of Linear Algebra (@) - in the end it's all vectors and transformations :) You are a great mentor! Note that I did not say "coach", because you are equipping each of us with skills that can solve most problems, not just one :) Huge fan of your lectures & advice :)
May 21, 2020
00:19:32 - 00:58:05

This was an intuitive explanation, yet grounded in math. Such a delicate balance! Thanks for doing this! Also, @ I agree the bubble-of-bubbles is indeed cute. 😄
May 21, 2020
00:24:48 - 00:58:05

I do not understand why N(0, I_d) has 0 as its mean E[z]. According to the picture, you said the KL loss is enforcing z to be in small bubbles with different centers, but each center must be at some point, not at 0. So your KL loss should construct something like in the picture https://www.youtube.com/watch?v=bbOFvxbMIV0 but actually there are no bubbles.
May 21, 2020
00:28:04 - 00:58:05

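On the question above: in the KL term D_KL(N(mu(x), sigma(x)^2 I) || N(0, I_d)), the 0 is the mean of the prior, not of each per-sample posterior. Each bubble keeps its own centre mu(x), which the reconstruction term pushes away from the other centres; the KL term only pulls the centres toward the origin and the radii toward 1, which is what packs the bubbles together. A quick numeric check of the closed-form KL for a diagonal Gaussian (a sketch; the function name is mine):

import torch

def kl_to_standard_normal(mu, logvar):
    # D_KL(N(mu, diag(exp(logvar))) || N(0, I)), summed over latent dimensions.
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)

mu = torch.tensor([[0.0, 0.0], [3.0, 0.0]])   # bubble at the origin vs. an offset bubble
logvar = torch.zeros(2, 2)                    # unit variance in both cases
print(kl_to_standard_normal(mu, logvar))      # tensor([0.0000, 4.5000]): the offset centre pays a penalty
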
– Notebook example for variational autoencoder
May 21, 2020
00:31:33 - 00:58:05

This was a great explanation; however, I don't understand why at we don't want to do the reparameterization trick during testing and only return the mu. I would assume we would always want to sample from the latent distribution before passing it to the decoder. Making the encoder give a deterministic output (just the mu) during testing would defeat the purpose of variational autoencoders, right?
May 21, 2020
00:52:00 - 00:58:05

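On the question above, one common reading: returning mu at test time gives the single most likely reconstruction for a given input, while the randomness that makes the VAE generative is recovered elsewhere, by sampling z from the prior and decoding it. Sampling inside the encoder is needed during training to estimate the expectation in the objective; at evaluation it would only add noise to the reconstruction. A sketch of both test-time uses, assuming the VAE class from the sketch above and a batch x of MNIST images:

model = VAE(d=2)
model.eval()                          # self.training is now False
with torch.no_grad():
    x_hat, mu, logvar = model(x)      # reconstruction: deterministic, z = mu
    z = torch.randn(16, model.d)      # generation: sample the prior N(0, I_d)
    samples = model.decoder(z)        # sixteen brand-new digit images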
