Forcibly occupying 20 GB of GPU memory
For certain specific reasons, we need to keep 20 GB of GPU memory occupied.
- use_gpu.py
  import torch
  import time

  device = torch.device("cuda")

  # Compute a target of roughly 30% of GPU 0's total memory
  total_memory = torch.cuda.get_device_properties(0).total_memory
  target_memory = total_memory * 0.3

  # float32 elements take 4 bytes each
  num_elements = int(target_memory / 4)

  # Allocate one large tensor on the GPU so the memory stays reserved
  tensor = torch.randn(num_elements, dtype=torch.float32, device=device)

  # Sleep (effectively indefinitely) so the allocation remains resident
  time.sleep(3600000000)

  # Release the memory back to the driver once the sleep ends
  del tensor
  torch.cuda.empty_cache()
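Note that the script above reserves about 30% of the card's total memory, which only works out to 20 GB on certain GPUs. Below is a minimal sketch, assuming the goal is to pin exactly 20 GiB regardless of card capacity; the variable names and the loop-based sleep are illustrative, and the CUDA context plus PyTorch's caching allocator will add a small amount of usage on top of the tensor itself.

  import torch
  import time

  device = torch.device("cuda")

  # Target exactly 20 GiB; float32 elements are 4 bytes each
  target_bytes = 20 * 1024 ** 3
  num_elements = target_bytes // 4

  # One large 1-D tensor of ~20 GiB on the GPU
  tensor = torch.randn(num_elements, dtype=torch.float32, device=device)

  # Keep the process alive so the allocation stays resident
  while True:
      time.sleep(3600)

To keep the allocation held after logging out, the script can be left running in the background, for example with nohup python use_gpu.py &.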