Today we are going to talk about the Data-efficient image Transformers (DeiT) paper, of which Hugo is the primary author. One of the recipes for success for vision models since the deep learning revolution began has been the availability of large training sets. CNNs have been optimized for almost a decade now, including through extensive architecture search, which is prone to overfitting. Motivated by the success of transformer-based models
in Natural Language Processing, there has been increasing interest in applying these approaches to vision. Hugo and his collaborators used a different training strategy and a new distillation token to get a massive increase in sample efficiency with image transformers.
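To make the distillation-token idea concrete, here is a minimal PyTorch sketch. It is illustrative only, not the paper's code: the module names, the tiny encoder sizes, and the 50/50 loss weighting are assumptions made for brevity, and the patch embeddings are taken as a ready-made input tensor.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistilledViT(nn.Module):
    """Sketch of a ViT with DeiT-style distillation token (illustrative sizes)."""
    def __init__(self, dim=192, depth=4, heads=3, num_classes=1000, num_patches=196):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.dist_token = nn.Parameter(torch.zeros(1, 1, dim))   # the new token
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 2, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, num_classes)       # supervised by true labels
        self.head_dist = nn.Linear(dim, num_classes)  # supervised by the teacher

    def forward(self, patch_embeddings):              # (B, num_patches, dim)
        b = patch_embeddings.size(0)
        tokens = torch.cat([self.cls_token.expand(b, -1, -1),
                            self.dist_token.expand(b, -1, -1),
                            patch_embeddings], dim=1) + self.pos_embed
        x = self.encoder(tokens)
        # One prediction from the class token, one from the distillation token.
        return self.head(x[:, 0]), self.head_dist(x[:, 1])

def hard_distillation_loss(logits_cls, logits_dist, teacher_logits, labels):
    # Half the loss from the ground-truth labels, half from the
    # teacher's hard decisions (its argmax), as in hard distillation.
    return 0.5 * F.cross_entropy(logits_cls, labels) + \
           0.5 * F.cross_entropy(logits_dist, teacher_logits.argmax(dim=1))

# Toy usage with random stand-ins for patch embeddings and a teacher:
model = DistilledViT()
patches = torch.randn(2, 196, 192)
logits_cls, logits_dist = model(patches)
loss = hard_distillation_loss(logits_cls, logits_dist,
                              torch.randn(2, 1000), torch.randint(0, 1000, (2,)))
```

At test time the paper combines the two heads' predictions rather than using either alone.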
00:00:00 Introduction
00:06:33 Data augmentation is all you need
00:09:53 Now the image patches are the convolutions though?
00:12:16 Where are those inductive biases hiding?
00:15:46 Distillation token
00:21:01 Why different resolutions on training
00:24:14 How data efficient can