- Linear regression models can be implemented much more easily with the functions PyTorch provides, such as nn.Linear for the model and F.mse_loss for the cost function
1. Implementing Simple Linear Regression
import torch
import torch.nn as nn
import torch.nn.functional as F
torch.manual_seed(1)
x_train = torch.FloatTensor([[1], [2], [3]])
y_train = torch.FloatTensor([[2], [4], [6]])
model = nn.Linear(1, 1)  # simple linear regression: input_dim=1, output_dim=1
print(list(model.parameters()))
[Parameter containing:
tensor([[0.5153]], requires_grad=True), Parameter containing:
tensor([-0.4414], requires_grad=True)]
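The first printed parameter is the weight W and the second is the bias b; both start with random values and have requires_grad=True, so they will be updated during training. As an illustrative aside (not part of the original code), nn.Linear computes nothing more than x.matmul(W.t()) + b:

# hypothetical check that model(x) equals the manual hypothesis
x = torch.FloatTensor([[10.0]])
manual = x.matmul(model.weight.t()) + model.bias  # W is stored as (out_features, in_features)
print(torch.allclose(model(x), manual))  # True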
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
nb_epochs = 2000
for epoch in range(nb_epochs+1):
    # compute the prediction H(x) for the whole training set
    prediction = model(x_train)
    # mean squared error, provided by PyTorch
    cost = F.mse_loss(prediction, y_train)
    # reset gradients, backpropagate, then update W and b
    optimizer.zero_grad()
    cost.backward()
    optimizer.step()
    if epoch % 200 == 0:
        print('Epoch {:4d}/{} Cost: {:.6f}'.format(
            epoch, nb_epochs, cost.item()
        ))
Epoch    0/2000 Cost: 13.103541
...
Epoch 2000/2000 Cost: 0.000000
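optimizer.zero_grad() is needed on every iteration because PyTorch accumulates gradients across backward() calls instead of overwriting them. A minimal standalone sketch of that behavior:

w = torch.tensor(2.0, requires_grad=True)
for epoch in range(3):
    z = 2 * w
    z.backward()
    # without zeroing, dz/dw = 2 piles up: 2.0, 4.0, 6.0
    print('gradient:', w.grad.item())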
new_var = torch.FloatTensor([[4.0]])  # a new input for prediction
pred_y = model(new_var)  # forward pass returns H(x) for the input
print("Prediction after training for input 4:", pred_y)
Prediction after training for input 4: tensor([[8.0000]], grad_fn=<AddmmBackward0>)
print(list(model.parameters()))
[Parameter containing:
tensor([[2.0000]], requires_grad=True), Parameter containing:
tensor([1.5649e-05], requires_grad=True)]
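W has converged to 2.0 and b to nearly 0, matching the relationship y = 2x behind the training data. When a trained model is used only for prediction, gradient tracking can be switched off; a small optional sketch:

with torch.no_grad():  # no computation graph is built
    pred_y = model(torch.FloatTensor([[4.0]]))
print(pred_y)  # same value as above, but without a grad_fn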
2. Implementing Multiple Linear Regression
import torch
import torch.nn as nn
import torch.nn.functional as F
torch.manual_seed(1)
# five samples, each with three features (x1, x2, x3)
x_train = torch.FloatTensor([[73, 80, 75],
                             [93, 88, 93],
                             [89, 91, 90],
                             [96, 98, 100],
                             [73, 66, 70]])
y_train = torch.FloatTensor([[152], [185], [180], [196], [142]])
model = nn.Linear(3, 1)  # multiple linear regression: input_dim=3, output_dim=1
print(list(model.parameters()))
[Parameter containing:
tensor([[ 0.2975, -0.2548, -0.1119]], requires_grad=True), Parameter containing:
tensor([0.2710], requires_grad=True)]
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)  # smaller lr: with inputs this large, a rate like 0.01 overshoots and the cost diverges
nb_epochs = 2000
for epoch in range(nb_epochs+1):
    # H(x) = Wx + b, computed for all five samples at once
    prediction = model(x_train)
    cost = F.mse_loss(prediction, y_train)
    optimizer.zero_grad()
    cost.backward()
    optimizer.step()
    if epoch % 200 == 0:
        print('Epoch {:4d}/{} Cost: {:.6f}'.format(
            epoch, nb_epochs, cost.item()
        ))
Epoch 0/2000 Cost: 31667.599609
Epoch 200/2000 Cost: 0.223911
Epoch 400/2000 Cost: 0.220059
Epoch 600/2000 Cost: 0.216575
Epoch 800/2000 Cost: 0.213413
Epoch 1000/2000 Cost: 0.210559
Epoch 1200/2000 Cost: 0.207967
Epoch 1400/2000 Cost: 0.205618
Epoch 1600/2000 Cost: 0.203481
Epoch 1800/2000 Cost: 0.201539
Epoch 2000/2000 Cost: 0.199770
new_var = torch.FloatTensor([[73, 80, 75]])  # the first training sample (label: 152)
pred_y = model(new_var)
print("Prediction after training for input 73, 80, 75:", pred_y)
Prediction after training for input 73, 80, 75: tensor([[151.2306]], grad_fn=<AddmmBackward0>)
print(list(model.parameters()))
[Parameter containing:
tensor([[0.9778, 0.4539, 0.5768]], requires_grad=True), Parameter containing:
tensor([0.2802], requires_grad=True)]
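The prediction 151.2306 is close to the true label 152. To inspect the fit on all five training samples at once (a quick check, not in the original code):

with torch.no_grad():
    print(model(x_train))  # compare with y_train: 152, 185, 180, 196, 142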
3. Improving Multiple Linear Regression with Matrix Operations
- Declare the multiple x values as a single matrix and carry out the multiplication as a dot product
- Defining the hypothesis compactly as a matrix product means the hypothesis declaration never
needs to change when independent variables are added or removed (see the sketch below)
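A brief illustration of that point, using hypothetical shapes: the same one-line hypothesis serves any number of independent variables, since only the shape of W changes.

def hypothesis(x, W, b):
    # the declaration never changes, no matter how many features x has
    return x.matmul(W) + b

x3 = torch.randn(5, 3)  # 5 samples, 3 features
x5 = torch.randn(5, 5)  # 5 samples, 5 features
print(hypothesis(x3, torch.zeros((3, 1)), torch.zeros(1)).shape)  # torch.Size([5, 1])
print(hypothesis(x5, torch.zeros((5, 1)), torch.zeros(1)).shape)  # torch.Size([5, 1])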
x_train = torch.FloatTensor([[73, 80, 75],
                             [93, 88, 93],
                             [89, 91, 90],
                             [96, 98, 100],
                             [73, 66, 70]])
y_train = torch.FloatTensor([[152], [185], [180], [196], [142]])
print(x_train.shape)
print(y_train.shape)
torch.Size([5, 3])
torch.Size([5, 1])
# one weight per feature, initialized to zero, plus a scalar bias
W = torch.zeros((3, 1), requires_grad=True)
b = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.SGD([W, b], lr=1e-5)
nb_epochs = 2000
for epoch in range(nb_epochs+1):
    # hypothesis as a single matrix product: (5, 3) x (3, 1) -> (5, 1)
    hypothesis = x_train.matmul(W) + b
    cost = F.mse_loss(hypothesis, y_train)
    optimizer.zero_grad()
    cost.backward()
    optimizer.step()
    if epoch % 200 == 0:
        print('Epoch {:4d}/{} Cost: {:.6f}'.format(
            epoch, nb_epochs, cost.item()
        ))
Epoch    0/2000 Cost: 29661.800781
...
print(W)
print(b)
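Prediction with the manually trained W and b uses the same matrix product as the hypothesis (a sketch; the exact output depends on the trained values):

new_var = torch.FloatTensor([[73, 80, 75]])
with torch.no_grad():
    pred_y = new_var.matmul(W) + b
print("Prediction for input 73, 80, 75:", pred_y)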