[SSH] Resolving a RuntimeError

김보현 · July 30, 2024

Server


This RuntimeError is usually related to GPU memory, so clearing the CUDA cache with

torch.cuda.empty_cache()

can resolve it!

The long runtime error below had occurred, but it was fixed with that single memory-clearing line~

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[25], line 13
     11 # Update
     12 optm.zero_grad() # reset gradient
---> 13 loss_out.backward() # backpropagate
     14 optm.step() # optimizer update
     15 loss_val_sum += loss_out

File ~/miniconda3/envs/bohyun/lib/python3.8/site-packages/torch/_tensor.py:525, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
    515 if has_torch_function_unary(self):
    516     return handle_torch_function(
    517         Tensor.backward,
    518         (self,),
   (...)
    523         inputs=inputs,
    524     )
--> 525 torch.autograd.backward(
    526     self, gradient, retain_graph, create_graph, inputs=inputs
    527 )

File ~/miniconda3/envs/bohyun/lib/python3.8/site-packages/torch/autograd/__init__.py:267, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    262     retain_graph = create_graph
    264 # The reason we repeat the same comment below is that
    265 # some Python versions print out the first line of a multi-line function
...
    746     )  # Calls into the C++ engine to run the backward pass
    747 finally:
    748     if attach_logging_hooks:

RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
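For reference, here is a minimal sketch of where a call like this can sit around a training loop. The names (model, train_loader, optm, loss_fn) are hypothetical placeholders rather than the actual objects from the notebook above, and this is just one reasonable placement.

import torch

def train_one_epoch(model, train_loader, optm, loss_fn, device="cuda"):
    model.train()
    loss_sum = 0.0
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        optm.zero_grad()             # reset gradient
        loss_out = loss_fn(model(x), y)
        loss_out.backward()          # backpropagate
        optm.step()                  # optimizer update
        loss_sum += loss_out.item()  # .item() takes the scalar value, so the graph is not kept alive
    # Release unused cached GPU memory back to the driver between runs.
    # Note: this does not free tensors that are still referenced.
    torch.cuda.empty_cache()
    return loss_sum / len(train_loader)

torch.cuda.empty_cache() only returns cached-but-unused memory held by PyTorch's allocator; if the GPU is actually full of live tensors, the error will come back until those references are released.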