3.3. Concise Implementation of Linear Regression

Eunjin Kim · April 26, 2022

Series: Dive into Deep Learning (13/14)

In this section, we will show you how to implement the linear regression model from Section 3.2 concisely by using high-level APIs of deep learning frameworks.

# Generating the Dataset
import numpy as np
import torch
from torch.utils import data
from d2l import torch as d2l

true_w = torch.tensor([2, -3.4])
true_b = 4.2
features, labels = d2l.synthetic_data(true_w, true_b, 1000)
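
For reference, d2l.synthetic_data draws the features from a standard normal distribution and builds the labels as y = Xw + b plus Gaussian noise with standard deviation 0.01. A minimal sketch, matching the version saved in Section 3.2:

def synthetic_data(w, b, num_examples):
    """Generate y = Xw + b + noise."""
    X = torch.normal(0, 1, (num_examples, len(w)))
    y = torch.matmul(X, w) + b
    y += torch.normal(0, 0.01, y.shape)  # Gaussian noise, std 0.01
    return X, y.reshape((-1, 1))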

# Reading the Dataset
def load_array(data_arrays, batch_size, is_train=True):  #@save
    """Construct a PyTorch data iterator."""
    dataset = data.TensorDataset(*data_arrays)
    # Shuffle the examples in each epoch only when training
    return data.DataLoader(dataset, batch_size, shuffle=is_train)

batch_size = 10
data_iter = load_array((features, labels), batch_size)

next(iter(data_iter))
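
Calling next on the iterator fetches the first minibatch as a pair of tensors, which can also be unpacked directly; with the settings above, X has shape (10, 2) and y has shape (10, 1):

X, y = next(iter(data_iter))
print(X.shape, y.shape)  # torch.Size([10, 2]) torch.Size([10, 1])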

# Defining the Model
# `nn` is an abbreviation for neural networks
# In PyTorch, the fully-connected layer is defined in the Linear class
from torch import nn

net = nn.Sequential(nn.Linear(2, 1))
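
In nn.Sequential, layers are chained in order; with a single fully-connected layer the container is not strictly necessary, but it matches the standard workflow of later models. The two arguments of nn.Linear specify the input feature dimension (2) and the output dimension (1).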

# Initializing Model Parameters
net[0].weight.data.normal_(0, 0.01)
net[0].bias.data.fill_(0)
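
Methods whose names end with an underscore rewrite the tensor in place. As a sketch of an equivalent alternative, the same initialization can be written with the torch.nn.init module:

nn.init.normal_(net[0].weight, mean=0.0, std=0.01)  # weights ~ N(0, 0.01)
nn.init.zeros_(net[0].bias)                          # bias set to 0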

# Defining the Loss Function
loss = nn.MSELoss()
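
By default, nn.MSELoss averages the squared errors over the minibatch (reduction='mean'); passing reduction='sum' returns their sum instead, a distinction the first exercise below turns on.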

# Defining the Optimization Algorithm
trainer = torch.optim.SGD(net.parameters(), lr=0.03)
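
When instantiating the SGD optimizer we specify the parameters to optimize, obtainable from our net via net.parameters(), and the learning rate, here 0.03.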

# Training
num_epochs = 3
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = loss(net(X), y)  # forward pass: minibatch loss
        trainer.zero_grad()  # clear gradients from the previous step
        l.backward()         # backpropagate
        trainer.step()       # update the parameters
    l = loss(net(features), labels)  # loss on the full dataset
    print(f'epoch {epoch + 1}, loss {l:f}')
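
Since the labels were generated with Gaussian noise of standard deviation 0.01, the mean squared error cannot fall much below the noise variance of about 1e-4, so a final loss of that order indicates the model has essentially converged.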

w = net[0].weight.data
print('error in estimating w:', true_w - w.reshape(true_w.shape))
b = net[0].bias.data
print('error in estimating b:', true_b - b)

3.3.8. Summary

  • Using PyTorch’s high-level APIs, we can implement models much more concisely.
  • In PyTorch, the data module provides tools for data processing, and the nn module defines a large number of neural network layers and common loss functions.
  • We can initialize parameters by replacing their values with in-place methods whose names end with an underscore (_).

3.3.9. Exercises

  • If we replace nn.MSELoss(reduction='sum') with nn.MSELoss(), how can we change the learning rate for the code to behave identically? Why?

  • Review the PyTorch documentation to see what loss functions and initialization methods are provided. Replace the loss with Huber's loss.

  • How do you access the gradient of net[0].weight?
