torch.cuda.OutOfMemoryError: CUDA out of memory — Solution
Background
While debugging some deep learning code over the past few days, I eventually hit this error: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 240.00 MiB (GPU 0; 23.69 GiB total capacity; 22.68 GiB already allocated; 174.44 MiB free; 22.78 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF. Yet when I checked the GPU with nvidia-smi, the card still showed free memory.
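As a side note, the error message itself suggests one mitigation before touching the model: setting max_split_size_mb via the PYTORCH_CUDA_ALLOC_CONF environment variable to reduce allocator fragmentation. A minimal sketch of how one might try this (the value 128 and the train.py entry point are arbitrary examples, not from this post):

```shell
# Cap the CUDA caching allocator's split size at 128 MiB to reduce
# fragmentation; 128 is an example value, tune it for your workload.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"

# Then launch training as usual, e.g. (hypothetical entry point):
# python train.py
```

This only helps when reserved memory is much larger than allocated memory, as the error text notes; in my case the real fix was the batch size below.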
Solution: reduce batch_size
Simply lower the batch_size. It is usually defined in the configs folder; I reduced it from 80 to 32, and the model then trained without error.
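Why this works: activation memory grows roughly linearly with batch size, so cutting the batch from 80 to 32 cuts the largest chunk of per-step GPU memory by more than half. A rough back-of-the-envelope sketch (the per-sample figure of 290 MiB is purely hypothetical, chosen only to illustrate the scaling, not measured from the author's model):

```python
def activation_mem_gib(batch_size: int, per_sample_mib: float = 290.0) -> float:
    """Very rough linear estimate of activation memory in GiB.

    per_sample_mib is a hypothetical per-sample cost; real usage also
    includes weights, gradients, and optimizer state, which do NOT
    shrink with batch size.
    """
    return batch_size * per_sample_mib / 1024.0


for bs in (80, 32):
    print(f"batch_size={bs}: ~{activation_mem_gib(bs):.1f} GiB of activations")
```

With these illustrative numbers, batch 80 lands near the 22-plus GiB already allocated in the error above, while batch 32 leaves plenty of headroom on a 24 GiB card.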