[Andrej Karpathy] Building makemore Part 5: Building a WaveNet

We take the 2-layer MLP from the previous video and make it deeper with a tree-like structure, arriving at a convolutional neural network architecture similar to the WaveNet (2016) from DeepMind. In the WaveNet paper, the same hierarchical architecture is implemented more efficiently using causal dilated convolutions (not yet covered). Along the way we get a better sense of torch.nn, what it is and how it works under the hood, and what a typical deep learning development process looks like (a lot of reading of documentation, keeping track of multidimensional tensor shapes, moving between jupyter notebooks and repository code, ...).

Sketches of the tree-structured model and of the dilated-convolution variant follow the chapter list below.

Links:
- makemore on github:
- jupyter notebook I built in this video:
- colab notebook:
- my website:
- my twitter:
- our Discord channel:

Supplementary links:
- WaveNet 2016 from DeepMind
- Bengio et al. 2003 MLP LM

Chapters:
- intro
  - intro
  - starter code walkthrough
  - let's fix the learning rate plot
  - pytorchifying our code: layers, containers, torch.nn, fun bugs
- implementing wavenet
  - overview: WaveNet
  - dataset: bump the context size to 8
  - re-running baseline code on block_size 8
  - implementing WaveNet
  - training the WaveNet: first pass
  - fixing batchnorm1d bug
  - re-training WaveNet with bug fix
  - scaling up our WaveNet
- conclusions
  - experimental harness
  - WaveNet but with "dilated causal convolutions"
  - the development process of building deep neural nets
  - going forward
  - improve on my loss! how far can we improve a WaveNet on this data?
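As a rough illustration of the tree-like deepening described above, here is a minimal sketch in stock torch.nn, assuming the sizes used in the video (vocab_size=27, n_embd=24, n_hidden=128, block_size=8). The lecture builds its layer classes from scratch and also inserts BatchNorm1d between Linear and Tanh (the source of the "fixing batchnorm1d bug" chapter); the FlattenConsecutive helper and the dummy batch below are illustrative, not the notebook's exact code.

```python
import torch
import torch.nn as nn

class FlattenConsecutive(nn.Module):
    """Concatenate every group of n consecutive embeddings along the
    channel dim, shrinking the time dim: (B, T, C) -> (B, T//n, C*n)."""
    def __init__(self, n):
        super().__init__()
        self.n = n

    def forward(self, x):
        B, T, C = x.shape
        x = x.view(B, T // self.n, C * self.n)
        return x.squeeze(1) if x.shape[1] == 1 else x  # drop spurious time dim

vocab_size, n_embd, n_hidden, block_size = 27, 24, 128, 8

# The 8 context characters are fused pairwise: 8 -> 4 -> 2 -> 1 positions,
# a binary tree three layers deep instead of one wide concatenation.
model = nn.Sequential(
    nn.Embedding(vocab_size, n_embd),
    FlattenConsecutive(2), nn.Linear(2 * n_embd,   n_hidden), nn.Tanh(),
    FlattenConsecutive(2), nn.Linear(2 * n_hidden, n_hidden), nn.Tanh(),
    FlattenConsecutive(2), nn.Linear(2 * n_hidden, n_hidden), nn.Tanh(),
    nn.Linear(n_hidden, vocab_size),     # logits for the next character
)

x = torch.randint(0, vocab_size, (32, block_size))  # dummy batch of contexts
print(model(x).shape)                               # torch.Size([32, 27])
```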
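For the "dilated causal convolutions" the description mentions but the video leaves uncovered, the same 8 -> 4 -> 2 -> 1 fusion tree can be expressed with Conv1d layers whose dilation doubles at each level. This is a hypothetical sketch of the idea, not code from the lecture or from the WaveNet paper:

```python
import torch
import torch.nn as nn

# Each Conv1d with kernel_size=2 fuses two positions; doubling the dilation
# (1, 2, 4) reproduces the binary tree over a context of 8 characters.
n_embd, n_hidden, vocab_size = 24, 128, 27

net = nn.Sequential(
    nn.Conv1d(n_embd,   n_hidden, kernel_size=2, dilation=1), nn.Tanh(),
    nn.Conv1d(n_hidden, n_hidden, kernel_size=2, dilation=2), nn.Tanh(),
    nn.Conv1d(n_hidden, n_hidden, kernel_size=2, dilation=4), nn.Tanh(),
    nn.Conv1d(n_hidden, vocab_size, kernel_size=1),  # 1x1 conv to logits
)

emb = torch.randn(32, n_embd, 8)   # (B, C, T): dummy embeddings, 8 chars
print(net(emb).shape)              # torch.Size([32, 27, 1])
```

With no padding and kernel_size=2, every output position depends only on current and past inputs, which is what makes the convolution causal; unlike the Sequential tree above, it also reuses intermediate activations across all positions of a longer sequence in a single forward pass.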