Week 9 – Lecture: Group sparsity, world model, and generative adversarial networks (GANs)

Course website: http://bit.ly/pDL-home
Playlist: http://bit.ly/pDL-YouTube
Speaker: Yann LeCun
Week 9: http://bit.ly/pDL-en-09

0:00:00 – Week 9 – Lecture

LECTURE Part A: http://bit.ly/pDL-en-09-1
We discussed discriminative recurrent sparse auto-encoders and group sparsity. The main idea was how to combine sparse coding with discriminative training. We went through how to structure a network with a recurrent auto-encoder similar to LISTA and a decoder, and then discussed how to use group sparsity to extract invariant features (see the code sketch after the timestamps below).
0:00:35 – Discriminative Recurrent Sparse Auto-Encoder and Group Sparsity
0:15:18 – AE With Group Sparsity: Questions and Clarification
0:30:34 – Convolutional ReLU with Group Sparsity
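
Below is a minimal PyTorch sketch of the two ingredients discussed in Part A: an unrolled, LISTA-style recurrent sparse encoder and an L2,1 group-sparsity penalty. The dimensions, module names, and number of unrolled steps are illustrative assumptions, not the exact architecture from the lecture.

import torch

class LISTAEncoder(torch.nn.Module):
    # Unrolled recurrent sparse encoder in the spirit of LISTA: a feed-forward
    # filter plus a few lateral (recurrent) refinement steps, each followed by
    # a one-sided (ReLU) shrinkage with learned per-unit thresholds.
    def __init__(self, input_dim=256, code_dim=512, n_steps=3):
        super().__init__()
        self.W_e = torch.nn.Linear(input_dim, code_dim)                 # encoding filters
        self.S = torch.nn.Linear(code_dim, code_dim, bias=False)        # lateral weights
        self.theta = torch.nn.Parameter(torch.full((code_dim,), 0.1))   # shrinkage thresholds
        self.n_steps = n_steps

    def forward(self, x):
        b = self.W_e(x)
        z = torch.relu(b - self.theta)
        for _ in range(self.n_steps):
            z = torch.relu(b + self.S(z) - self.theta)
        return z

def group_sparsity(z, group_size=4):
    # L2 norm within each group of code units, L1 (sum) across groups: units
    # inside a group may co-activate while whole groups are pushed to zero,
    # which is what makes the pooled group response an invariant feature.
    groups = z.view(z.shape[0], -1, group_size)
    return groups.norm(dim=2).sum(dim=1).mean()

In training, group_sparsity(z) would be added to a reconstruction loss from a decoder and to a discriminative (classification) loss, which is the combination of sparse coding and discriminative training described above.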

LECTURE Part B: http://bit.ly/pDL-en-09-2
In this section, we talked about world models for autonomous control, including the neural network architecture and the training scheme. Then we discussed the difference between world models and reinforcement learning (RL). Finally, we studied generative adversarial networks (GANs) viewed as energy-based models trained with a contrastive method (see the code sketch after the timestamps below).
0:42:06 – Learning World Models for Autonomous Control
1:06:33 – Reinforcement Learning
1:30:30 – Generative Adversarial Network
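
Below is a minimal PyTorch sketch of the GAN-as-EBM view from Part B: the discriminator acts as an energy function that a contrastive update pushes down on real data and up on generated samples (here with a hinge, as in energy-based GANs), while the generator learns to produce low-energy samples. The MLPs, dimensions, and learning rates are illustrative assumptions.

import torch

z_dim, x_dim = 32, 64
G = torch.nn.Sequential(torch.nn.Linear(z_dim, 128), torch.nn.ReLU(),
                        torch.nn.Linear(128, x_dim))    # generator
E = torch.nn.Sequential(torch.nn.Linear(x_dim, 128), torch.nn.ReLU(),
                        torch.nn.Linear(128, 1))        # discriminator as an energy function
opt_e = torch.optim.Adam(E.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

def gan_step(x_real):
    n = x_real.shape[0]
    # Contrastive step: push the energy down on real samples and up (to a
    # margin of 1) on generated, "contrastive" samples.
    x_fake = G(torch.randn(n, z_dim)).detach()
    loss_e = E(x_real).mean() + torch.relu(1.0 - E(x_fake)).mean()
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()
    # Generator step: move generated samples toward the low-energy regions
    # that the discriminator has carved out around the data.
    loss_g = E(G(torch.randn(n, z_dim))).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_e.item(), loss_g.item()

Called once per minibatch of real data, this is the energy-based (hinge-loss) reading of GAN training rather than the original cross-entropy formulation; the contrastive structure (lower the energy of the data, raise it elsewhere) is the same.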

#Yann LeCun #Deep Learning #PyTorch #NYU #EBM #Energy Based Models #contrastive methods #Regularised Latent Variables #Generative Adversarial Network #GAN #World model #control #Reinforcement Learning #RL #sparsity

Alfredo Canziani

Timetable

121 videos

– Welcome to class

10P – Non-contrastive joint embedding methods (JEMs) for self-supervised learning (SSL)
2022-06-07
00:00:00 - 01:05:28
– Welcome to class

09P – Contrastive joint embedding methods (JEMs) for self-supervised learning (SSL)
2022-05-28
00:00:00 - 00:56:52
– Welcome to class

14L – Lagrangian backpropagation, final project winners, and Q&A session
2021-08-18
00:00:00 - 02:12:36
– Welcome to class

13L – Optimisation for Deep Learning
2021-08-18
00:00:00 - 01:51:32
– Welcome to class

07L – PCA, AE, K-means, Gaussian mixture model, sparse coding, and intuitive VAE
2021-08-12
00:00:00 - 01:54:23
I am awake at 00:52:09 as well 🤣 Jokes aside, the 2021 videos are quite different from the 2020 ones, which is a great treat! I am being introduced to VAE from EBM's R(z). Also, thanks for sharing the homework 3 questions, which helped me to think about and understand EBM better. Thank you Professor Yann and Professor Alfredo! 🥰

07L – PCA, AE, K-means, Gaussian mixture model, sparse coding, and intuitive VAE
2021-08-12
00:52:09 - 01:54:23
At 01:12:00, Yann LeCun says that the brain doesn't do reconstruction, that it doesn't reconstruct an input from an embedding. This seems very counter-intuitive to me... Why not? What are dreams, then? Aren't they reconstructions of input signals (images, sounds, etc.) from some sort of embeddings?

07L – PCA, AE, K-means, Gaussian mixture model, sparse coding, and intuitive VAE
2021-08-12
01:12:00 - 01:54:23
– Summary

08L – Self-supervised learning and variational inference
2021-08-12
00:00:00 - 00:01:00
– Welcome to class

08L – Self-supervised learning and variational inference
2021-08-12
00:00:00 - 01:54:44
– GANs

08L – Self-supervised learning and variational inference
2021-08-12
00:01:00 - 00:17:10
– How do Humans and Animals learn quickly

08L – Self-supervised learning and variational inference
2021-08-12
00:17:10 - 00:28:05
– Self-Supervised Learning

08L – Self-supervised learning and variational inference
2021-08-12
00:28:05 - 00:32:00
– Sparse Coding / Sparse Modeling

08L – Self-supervised learning and variational inference
2021-08-12
00:32:00 - 01:07:45
@Alfredo Canziani Hi Alf, at 00:57:27 Yann mentioned there are datasets that the NYU students can use for their SSL project. I was wondering if it is possible to release those to students outside of NYU, so that we can try them out as well? 🤔

08L – Self-supervised learning and variational inference
2021-08-12
00:57:27 - 01:54:44
If this way of making features (00:58:55, 1:12:06) is so cool and more "natural" (roughly the way the brain works with visual features), why wasn't research turned in that direction starting from 2010, when it was proposed? 🤔 I suspect there are some limitations Yann didn't mention? Or is the reason that the topic is still more complex than the usual convolutions? Thanks for the vid, Alfredo and Yann 🤗

08L – Self-supervised learning and variational inference
2021-08-12
00:58:55 - 01:54:44
– Regularization Through Temporal Consistency

08L – Self-supervised learning and variational inference
2021-08-12
01:07:45 - 01:12:05
At 01:11:40, how do you know which parts of z to allow to vary and which not, exactly? How do you know which parts represent the "objects", and which parts represent the things that are changing, like the locations of the objects?

08L – Self-supervised learning and variational inference
2021-08-12
01:11:40 - 01:54:44
– Variational AE

08L – Self-supervised learning and variational inference
2021-08-12
01:12:05 - 01:54:44
– Welcome to class

09L – Differentiable associative memories, attention, and transformers
2021-08-12
00:00:00 - 02:00:29
Yann gets sad at 01:26:09 while he is talking about how the attention mechanism might take the place of convolutions for images :/

09L – Differentiable associative memories, attention, and transformers
2021-08-12
01:26:09 - 02:00:29
For masking, is there a strategy to remove words instead of masking at random? If the object of interest, e.g. "curtain" @ 01:29:19, were removed from both the English and the French, wouldn't that make the prediction task much more difficult, since a lot of objects could be substituted in its place?

09L – Differentiable associative memories, attention, and transformers
2021-08-12
01:29:19 - 02:00:29
– Welcome to class

14 – Prediction and Planning Under Uncertainty
2021-08-03
00:00:00 - 01:14:45