Dive into Deep Learning Lec7: Regularization in PyTorch from Scratch (Custom Loss Function Autograd)
This video discusses implementing a custom loss function in PyTorch and using autograd to compute the gradient of the loss with respect to the model parameters. This approach has many applications, including regularizing deep learning models and neural networks with L1 and L2 norms. The video covers the basics of regularization, such as adding penalty terms and the regularization parameters that control the tradeoff between model fit and model complexity. We present a simple example, generating synthetic training and test data points. We then define a neural network model, initialize its weights, and define various loss functions. We also discuss how to implement minibatch Stochastic Gradient Descent (SGD). Here is the code summary:
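Before the full code, a minimal standalone illustration of the autograd mechanism the video relies on: calling `.backward()` on a scalar penalty populates `.grad` on the weight tensor. For the L2 penalty `sum(w**2) / 2`, the gradient with respect to `w` is simply `w` itself.

```python
import torch

# A tensor of weights tracked by autograd
w = torch.tensor([1.0, -2.0, 3.0], requires_grad=True)

# L2 penalty: sum(w^2) / 2; its gradient w.r.t. w equals w
penalty = torch.sum(w ** 2) / 2
penalty.backward()

print(w.grad)  # tensor([ 1., -2.,  3.])
```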
import numpy as np
import torch
from torch import nn
from torch.utils import data
import matplotlib.pyplot as plt

# Implementation from Scratch
def init_params():
    """Parameter initialization."""
    w = torch.normal(0, 1, size=(num_inputs, 1), requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    return [w, b]

def l2_penalty(w):
    """L2 penalty term."""
    return torch.sum(w.pow(2)) / 2

def l1_penalty(w):
    """L1 penalty term."""
    return torch.sum(torch.abs(w))

def forward(X, w, b):
    """Linear regression model."""
    return torch.matmul(X, w) + b

def squared_loss(y_hat, y):
    """Squared loss."""
    return (y_hat - y.reshape(y_hat.shape)) ** 2 / 2

def sgd(params, lr, batch_size):
    """Minibatch stochastic gradient descent."""
    with torch.no_grad():
        for param in params:
            param -= lr * param.grad / batch_size
            param.grad.zero_()
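The training loop below reads from `train_iter` and `test_iter`, which the code summary never constructs. The sketch here shows one plausible way to generate the synthetic data the video describes; the names `num_inputs`, `batch_size`, `train_iter`, and `test_iter` are assumptions chosen to match the training loop, and the specific sizes and noise level are illustrative, not taken from the video.

```python
import torch
from torch.utils import data

# Assumed hyperparameters (not shown in the video summary)
num_inputs, batch_size = 200, 5
n_train, n_test = 20, 100
true_w, true_b = torch.ones((num_inputs, 1)) * 0.01, 0.05

def synthetic_data(w, b, num_examples):
    """Generate y = Xw + b + Gaussian noise."""
    X = torch.normal(0, 1, (num_examples, num_inputs))
    y = torch.matmul(X, w) + b
    y += torch.normal(0, 0.01, y.shape)
    return X, y

def load_array(arrays, batch_size, is_train=True):
    """Wrap tensors in a PyTorch DataLoader."""
    dataset = data.TensorDataset(*arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train)

train_iter = load_array(synthetic_data(true_w, true_b, n_train), batch_size)
test_iter = load_array(synthetic_data(true_w, true_b, n_test), batch_size,
                       is_train=False)
```

A small training set relative to `num_inputs` is the usual setup for demonstrating regularization, since the unpenalized model will badly overfit.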
def evaluate_loss(net, data_iter, loss):
    """Evaluate the loss of a model on the given dataset."""
    l = 0
    for X, y in data_iter:
        out = net(X)
        y = y.reshape(out.shape)
        l += loss(out, y).sum()
    return l

w, b = init_params()
num_epochs, lr, alpha, beta = 200, 0.001, 5, 1
net = lambda X: forward(X, w, b)
train_loss = np.zeros(num_epochs)
test_loss = np.zeros(num_epochs)
for epoch in range(num_epochs):
    for X, y in train_iter:
        # Penalized loss: squared error plus L2 and L1 penalty terms
        l = squared_loss(net(X), y) + alpha * l2_penalty(w) + beta * l1_penalty(w)
        l.sum().backward()
        sgd([w, b], lr, batch_size)
    with torch.no_grad():
        train_loss[epoch] = evaluate_loss(net, train_iter, squared_loss)
        test_loss[epoch] = evaluate_loss(net, test_iter, squared_loss)
plt.rcParams.update({'font.size': 14})
plt.plot(np.arange(num_epochs), train_loss, label='train')
plt.plot(np.arange(num_epochs), test_loss, label='test')
plt.legend()
plt.yscale('log')
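As an aside not covered in the video: for plain L2 regularization (without the L1 term), PyTorch's built-in optimizer can apply the penalty directly through the `weight_decay` argument of `torch.optim.SGD`, so no custom penalty term is needed in the loss. The layer sizes below are illustrative.

```python
import torch
from torch import nn

net = nn.Linear(200, 1)
# weight_decay adds an L2 penalty on the weights inside the optimizer step;
# the bias is placed in a separate parameter group so it stays unpenalized.
trainer = torch.optim.SGD([
    {"params": net.weight, "weight_decay": 5},
    {"params": net.bias},
], lr=0.001)

X = torch.normal(0, 1, (4, 200))
y = torch.zeros(4, 1)
loss = nn.MSELoss()

trainer.zero_grad()
l = loss(net(X), y)
l.backward()
trainer.step()
```

This matches the from-scratch version with `alpha = 5` and `beta = 0`, up to the constant factor in how the penalty is defined.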
#python #regularization #machinelearning