- CS25 I Stanford Seminar - Audio Research: Transformers for Applications in Audio, Speech and Music

Transformers have touched many fields of research, and audio and music are no different. This talk presents three of my papers as case studies on how we can combine the power of Transformers with representation learning, signal processing, and clustering. In the first part, we discuss how we were able to beat the wildly popular WaveNet architecture, proposed by Google DeepMind, at raw audio synthesis. We also show how we overcame the quadratic constraint of Transformers by conditioning on the context itself. Second, we present a version of Audio Transformers for large-scale audio understanding, inspired by ViT, that operates on raw waveforms. It combines powerful ideas from traditional signal processing, namely wavelets, applied to intermediate Transformer embeddings to produce state-of-the-art results. Investigating the front end to see why these models do so well, we show that they learn an auditory filter bank that adapts the time-frequency representation to the task at hand, which makes machine listening really cool. Finally, in the third part, we discuss the power of operating on latent codes and language modeling of continuous audio signals using discrete tokens. We describe how simple unsupervised tasks can give us results competitive with end-to-end supervision. We also give an overview of recent trends in the field and of papers by Google, OpenAI, and others on the current "fashion". This work was done in collaboration with Prof. Chris Chafe, Prof. Jonathan Berger, and Prof. Julius Smith, all at the Center for Computer Research in Music and Acoustics at Stanford University. Thanks to Stanford's Institute for Human-Centered AI for supporting this work through a generous Google Cloud computing grant.
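The evaluation discussed in the talk treats raw audio synthesis as next-sample classification over 256 discrete states, scored by top-5 accuracy. A minimal sketch of that setup, assuming standard mu-law companding for the 8-bit quantization (function names and shapes here are illustrative, not the paper's code):

```python
import numpy as np

def mu_law_encode(x, mu=255):
    # Compress amplitude, then quantize each sample of a [-1, 1] waveform
    # into one of 256 discrete states (8-bit), WaveNet-style.
    x = np.clip(x, -1.0, 1.0)
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((compressed + 1) / 2 * mu + 0.5).astype(np.int64)

def top5_accuracy(logits, targets):
    # Fraction of samples whose true state is among the 5 highest-scoring
    # of the 256 possible states.
    top5 = np.argsort(logits, axis=-1)[:, -5:]
    return float(np.mean([t in row for t, row in zip(targets, top5)]))
```

This metric is application-agnostic and matches the training objective directly, which is exactly why the talk uses it to compare WaveNet and Transformer baselines.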

Prateek Verma is currently a research assistant working with Prof. Anshul Kundaje in the Departments of Computer Science and Genetics. He works on modeling genomic sequences using machine learning, tackling long sequences, and developing techniques to understand them. He also splits his time working on audio research at Stanford's Center for Computer Research in Music and Acoustics with Prof. Chris Chafe, Prof. Jonathan Berger, and Prof. Julius Smith. He received his Master's degree from Stanford, and before that he was at IIT Bombay. He loves biking, hiking, and playing sports.

A full list of guest lectures can be found here: https://www.youtube.com/playlist?list=PLoROMvodv4rNiJRchCzutFw5ItR_Z27CM

0:00 Introduction
0:06 Transformers for Music and Audio: Language Modelling to Understanding to Synthesis
1:35 The Transformer Revolution
5:02 Models getting bigger ...
7:43 What are spectrograms?
14:30 Raw Audio Synthesis: Difficulty; Classical FM Synthesis; Karplus-Strong
17:14 Baseline: Classic WaveNet
20:04 Improving the Transformer Baseline: Major Bottleneck of Transformers
21:02 Results & Unconditioned Setup: Evaluation criterion comparing WaveNet and Transformers on next-sample prediction, using top-5 accuracy out of 256 possible states as the error metric. Why this setup? 1. Application-agnostic 2. Suits the training setup
22:11 A Framework for Generative and Contrastive Learning of Audio Representations
22:38 Acoustic Scene Understanding
24:34 The Recipe
26:00 Turbocharging the Best of Two Worlds: Vector Quantization, a powerful and underutilized algorithm; combining VQ with autoencoders and Transformers
33:24 Turbocharging the Best of Two Worlds: Learning clusters from vector quantization; using long-term dependency learning with that cluster-based representation for the Markovian assumption. The better we become at prediction, the better the summarization.
37:06 Audio Transformers: Transformer Architectures for Large Scale Audio Understanding - Adieu Convolutions (Stanford University, March 2021)
38:45 Wavelets on Transformer Embeddings
41:20 Methodology + Results
44:04 What does it learn? The front end
47:18 Final Thoughts
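The vector-quantization step highlighted at 26:00 maps each continuous embedding to the index of its nearest codebook entry, yielding the discrete tokens used for language modeling on audio. A minimal sketch of that nearest-neighbor assignment, assuming a Euclidean-distance codebook lookup (array shapes and names here are illustrative, not the paper's code):

```python
import numpy as np

def vector_quantize(embeddings, codebook):
    # embeddings: (N, D) continuous vectors; codebook: (K, D) learned entries.
    # Returns, for each embedding, the index of the nearest codebook vector
    # under squared Euclidean distance -- its discrete "token".
    d = ((embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)
```

In a VQ autoencoder these indices replace the continuous latents, so a Transformer can model the audio as a sequence of discrete symbols, much like words in text.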

#Stanford #Stanford Online #Audio Research #Music #AI #Artificial Intelligence
Stanford Online

Stanford Online is Stanford’s online learning provider, offering learners access to Stanford’s extended education and lifelong learning opportunities. Our robust catalog of credit-bearing, professional, and free and open content provides a variety of ways to expand your learning, advance your career, and enhance your life. Stanford Online is operated and managed by the Stanford Center for Professional Development, a leader in online and extended education.

https://online.stanford.edu/
