Week 8 – Lecture: Contrastive methods and regularised latent variable models

Course website: http://bit.ly/pDL-home
Playlist: http://bit.ly/pDL-YouTube
Speaker: Yann LeCun
Week 8: http://bit.ly/pDL-en-08

0:00:00 – Week 8 – Lecture

LECTURE Part A: http://bit.ly/pDL-en-08-1
In this section, we introduce contrastive methods for Energy-Based Models from several angles. First, we discuss the advantages that contrastive methods bring to self-supervised learning. Second, we discuss the architecture of denoising autoencoders and their weaknesses in image reconstruction tasks. We also cover other contrastive methods, such as contrastive divergence and persistent contrastive divergence. (A minimal denoising-autoencoder sketch follows the chapter list below.)
0:00:05 – Recap on EBM and Characteristics of Different Contrastive Methods
0:10:13 – Contrastive Methods in Self-Supervised Learning
0:23:04 – Denoising Autoencoder and other Contrastive methods
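
To make the denoising-autoencoder idea concrete, below is a minimal PyTorch sketch (an illustration under assumed dimensions and hyper-parameters, not the course's exact notebook). The network is trained to map a corrupted input back to its clean version; the reconstruction error then acts as an energy that is low near the data manifold and grows away from it.

import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    # Tiny fully-connected autoencoder; layer sizes are illustrative.
    def __init__(self, d_in=784, d_hidden=30):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)                  # stand-in batch of flattened images
x_noisy = x + 0.3 * torch.randn_like(x)  # corrupt the input
loss = nn.functional.mse_loss(model(x_noisy), x)  # reconstruct the CLEAN input
optimiser.zero_grad()
loss.backward()
optimiser.step()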

LECTURE Part B: http://bit.ly/pDL-en-08-2
In this section, we discuss regularized latent variable EBMs in detail, covering both conditional and unconditional versions of these models. We then discuss the ISTA, FISTA, and LISTA algorithms and look at examples of sparse coding and the filters learned by convolutional sparse encoders. Finally, we talk about Variational Auto-Encoders and the concepts underlying them. (See the ISTA and VAE sketches after the chapter list below.)
0:37:13 – Overview of Regularized Latent Variable Energy Based Models and Sparse Coding
1:07:46 – Convolutional Sparse Auto-Encoders
1:12:51 – Variational Auto-Encoders
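
As a concrete companion to the sparse-coding discussion, here is an ISTA sketch (the dictionary, dimensions, and step count are assumptions for illustration). Each iteration takes a gradient step on the reconstruction term and then applies soft-thresholding, which enforces the L1 sparsity penalty; FISTA adds momentum, and LISTA unrolls these iterations into a learned feed-forward network.

import torch
import torch.nn.functional as F

def ista(x, W, lam=0.1, n_steps=200):
    # Infer a sparse code z minimising 0.5 * ||x - W z||^2 + lam * ||z||_1
    # for a fixed dictionary W.
    eta = 1.0 / torch.linalg.matrix_norm(W.T @ W, ord=2).item()  # 1 / Lipschitz constant
    z = torch.zeros(W.shape[1])
    for _ in range(n_steps):
        grad = W.T @ (W @ z - x)                           # gradient of the quadratic term
        z = F.softshrink(z - eta * grad, lambd=eta * lam)  # shrinkage step enforces sparsity
    return z

W = torch.randn(64, 128)  # overcomplete dictionary: 128 atoms of dimension 64
x = torch.randn(64)
z = ista(x, W)            # most entries of z end up exactly zero

And for the VAE part, a sketch of the reparameterisation trick and the KL regulariser, reusing the torch import above (variable names are illustrative):

mu, log_var = torch.zeros(16), torch.zeros(16)  # stand-ins for encoder outputs
eps = torch.randn_like(mu)
z_sample = mu + torch.exp(0.5 * log_var) * eps  # z = mu + sigma * eps keeps sampling differentiable
kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())  # KL(q(z|x) || N(0, I))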

#Yann LeCun #Deep Learning #PyTorch #NYU #EBM #Energy Based Models #SSL #Semi Supervised Learning #LV #Latent Variable #contrastive methods #Regularised Latent Variables

Alfredo Canziani


Video timetable

Number of videos: 121

– Welcome to class
10P – Non-contrastive joint embedding methods (JEMs) for self-supervised learning (SSL)
June 7, 2022
00:00:00 - 01:05:28

– Welcome to class
09P – Contrastive joint embedding methods (JEMs) for self-supervised learning (SSL)
May 28, 2022
00:00:00 - 00:56:52

– Welcome to class
14L – Lagrangian backpropagation, final project winners, and Q&A session
August 18, 2021
00:00:00 - 02:12:36

– Welcome to class
13L – Optimisation for Deep Learning
August 18, 2021
00:00:00 - 01:51:32

– Welcome to class
07L – PCA, AE, K-means, Gaussian mixture model, sparse coding, and intuitive VAE
August 12, 2021
00:00:00 - 01:54:23

I am awake at 00:52:09 as well 🤣 Joke aside, the 2021 videos are quite different from 2020, which is a great treat! I am being introduced to VAE from EBM's R(z). Also, thanks for sharing the homework 3 questions, which help me to think about and understand EBM better. Thank you Professor Yann and Professor Alfredo! 🥰
07L – PCA, AE, K-means, Gaussian mixture model, sparse coding, and intuitive VAE
August 12, 2021
00:52:09 - 01:54:23

At 01:12:00, Yann LeCun says that the brain doesn't do reconstruction, that it doesn't reconstruct an input from an embedding. This seems very counterintuitive to me... Why not? What are dreams then? Aren't they reconstructions of input signals (images, sounds, etc.) from some sort of embeddings?
07L – PCA, AE, K-means, Gaussian mixture model, sparse coding, and intuitive VAE
August 12, 2021
01:12:00 - 01:54:23

– Summary
08L – Self-supervised learning and variational inference
August 12, 2021
00:00:00 - 00:01:00

– Welcome to class
08L – Self-supervised learning and variational inference
August 12, 2021
00:00:00 - 01:54:44

– GANs
08L – Self-supervised learning and variational inference
August 12, 2021
00:01:00 - 00:17:10

– How do humans and animals learn quickly
08L – Self-supervised learning and variational inference
August 12, 2021
00:17:10 - 00:28:05

– Self-Supervised Learning
08L – Self-supervised learning and variational inference
August 12, 2021
00:28:05 - 00:32:00

– Sparse Coding / Sparse Modeling
08L – Self-supervised learning and variational inference
August 12, 2021
00:32:00 - 01:07:45

@Alfredo Canziani Hi Alf, at 00:57:27 Yann mentioned there are datasets that the NYU students can use for their SSL project. I was wondering if it is possible to release those to students outside of NYU so that we can try them out as well? 🤔
08L – Self-supervised learning and variational inference
August 12, 2021
00:57:27 - 01:54:44

If this way of making features (00:58:55, 1:12:06) is so cool and more "natural" (kinda the same as how a brain works with visual features), why wasn't research turned in that direction starting from 2010, when it was proposed? 🤔 I suspect there are some limitations Yann didn't mention? Or is the reason that the topic is still more complex than the usual convolutions? Thanks for the vid, Alfredo and Yann 🤗
08L – Self-supervised learning and variational inference
August 12, 2021
00:58:55 - 01:54:44

– Regularization Through Temporal Consistency
08L – Self-supervised learning and variational inference
August 12, 2021
01:07:45 - 01:12:05

At 01:11:40, how do you know which parts of z to allow to vary, and which not, exactly? How do you know which parts represent the "objects", and which parts represent the things that are changing, like the location of the objects?
08L – Self-supervised learning and variational inference
August 12, 2021
01:11:40 - 01:54:44

– Variational AE
08L – Self-supervised learning and variational inference
August 12, 2021
01:12:05 - 01:54:44

– Welcome to class
09L – Differentiable associative memories, attention, and transformers
August 12, 2021
00:00:00 - 02:00:29

Yann gets sad at 01:26:09 while talking about how attention mechanisms might take the place of convolutions for images :/
09L – Differentiable associative memories, attention, and transformers
August 12, 2021
01:26:09 - 02:00:29

For masking, is there a strategy to remove words instead of random masking? If the object of interest, e.g. "curtain" @ 01:29:19, were to be removed from both English and French, wouldn't it make the prediction task much more difficult, as a lot of objects could be substituted in its place?
09L – Differentiable associative memories, attention, and transformers
August 12, 2021
01:29:19 - 02:00:29

– Welcome to class
14 – Prediction and Planning Under Uncertainty
August 3, 2021
00:00:00 - 01:14:45