In-place operations are marked with a trailing underscore (`_`). In-place operations save some memory, but can be problematic when computing derivatives because of an immediate loss of history. Hence, their use is discouraged.
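The "loss of history" problem can be seen with a minimal sketch (values chosen here for illustration): `exp()` saves its output for the backward pass, so modifying that output in place makes autograd raise an error.

```python
import torch

# Sketch: an in-place op destroys the history autograd needs.
x = torch.tensor([1.0, 2.0], requires_grad=True)
y = torch.exp(x)   # exp saves its output y for the backward pass
y.add_(1)          # in-place: overwrites the saved output

try:
    y.sum().backward()
except RuntimeError as e:
    print("autograd error:", type(e).__name__)  # prints "autograd error: RuntimeError"
```

Autograd tracks a version counter on each tensor; the in-place `add_` bumps it, and the mismatch is detected at `backward()` time.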
import torch

x = torch.tensor([[1, 2], [3, 4]])
y = torch.tensor([1, 1])
# ordinary addition
z = torch.add(x, y) # y is broadcast to [[1, 1], [1, 1]]
print(z) # tensor([[2, 3], [4, 5]])
# in-place addition
x.add_(y)
print(x) # tensor([[2, 3], [4, 5]])
x = torch.tensor([[1, 2], [3, 4]])
y = torch.tensor([1, 1])
# ordinary subtraction
z = torch.sub(x, y) # y is broadcast to [[1, 1], [1, 1]]
print(z) # tensor([[0, 1], [2, 3]])
# in-place subtraction
x.sub_(y)
print(x) # tensor([[0, 1], [2, 3]])
x = torch.tensor([[1, 4], [7, 8]])
y = torch.tensor([2, 3])
# ordinary (element-wise) multiplication
z = torch.mul(x, y) # y is broadcast to [[2, 3], [2, 3]]
print(z) # tensor([[2, 12], [14, 24]])
# in-place multiplication
x.mul_(y)
print(x) # tensor([[2, 12], [14, 24]])
x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
z = torch.div(x, y)
print(z) # tensor([0.2500, 0.4000, 0.5000])
x = torch.tensor([1, 2, 3])
z = torch.pow(x, 2)
print(z) # tensor([1, 4, 9])
x = torch.tensor([1., 4., 9.])
z = torch.sqrt(x) # element-wise square root
print(z) # tensor([1., 2., 3.])
x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
dot_product = torch.dot(x, y)
print(dot_product) # 32 (1*4 + 2*5 + 3*6)
matmul() (equivalent to the @ operator): supports broadcasting (less convenient for debugging); on tensors with 3 or more dimensions it performs batched matrix multiplication.
mm(): does not support broadcasting (more convenient for debugging); accepts 2-D matrices only.

A = torch.tensor([[1, 2],
[3, 4]])
B = torch.tensor([[5, 6],
[7, 8]])
product = torch.mm(A, B)
print(product)
# tensor([[19, 22],
# [43, 50]])
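The matmul()/mm() contrast above can be sketched with a batched example (shapes chosen here for illustration): matmul() broadcasts the 2-D matrix against each matrix in the batch, while mm() rejects anything that is not strictly 2-D.

```python
import torch

A = torch.ones(3, 2, 4)  # batch of three 2x4 matrices
B = torch.ones(4, 5)     # 2-D matrix, broadcast across the batch

C = torch.matmul(A, B)   # batched matrix multiplication
print(C.shape)           # torch.Size([3, 2, 5])

try:
    torch.mm(A, B)       # mm only accepts 2-D inputs
except RuntimeError as e:
    print("mm error:", type(e).__name__)  # prints "mm error: RuntimeError"
```

This strictness is why mm() is easier to debug: a shape mistake fails immediately instead of being silently broadcast.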
x = torch.tensor([1, 2, 3])
y = torch.tensor([1, 2, 4])
z = torch.eq(x, y)
print(z) # tensor([True, True, False])
x = torch.tensor([1, 2, 3])
y = torch.tensor([1, 2, 4])
z = torch.ne(x, y)
print(z) # tensor([False, False, True])
x = torch.tensor([True, False, True])
y = torch.tensor([True, True, False])
z = torch.logical_and(x, y)
print(z) # tensor([True, False, False])
x = torch.tensor([True, False, True])
y = torch.tensor([True, True, False])
z = torch.logical_or(x, y)
print(z) # tensor([True, True, True])
x = torch.tensor([True, False, True])
y = torch.tensor([True, True, False])
z = torch.logical_xor(x, y)
print(z) # tensor([False, True, True])