# pytorch_rocm_gtt

Python package to allow ROCm to overcome the reserved iGPU memory limits.

Based on https://github.com/pomoke/torch-apu-helper/tree/main, after the discussion in https://github.com/ROCm/ROCm/issues/2014.
## Install it from PyPI

```shell
pip install pytorch_rocm_gtt
```
## Usage

Just call this before starting PyTorch allocations (models or tensors):

```python
import pytorch_rocm_gtt

pytorch_rocm_gtt.patch()
```

The `hipcc` command must be in your `$PATH`.
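Since the package needs `hipcc` on `$PATH`, it can be worth verifying that prerequisite before patching. A minimal sketch (this check is illustrative, not part of the package's API):

```python
import shutil

# Look up hipcc on $PATH before calling pytorch_rocm_gtt.patch().
hipcc_path = shutil.which("hipcc")
if hipcc_path is None:
    print("hipcc not found in $PATH; install the ROCm toolchain first")
else:
    print(f"hipcc found at {hipcc_path}")
```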
After that, just allocate GPU memory as you would with CUDA:

```python
import torch

torch.rand(1000).to("cuda")
```
## Compatibility

In order to use this package, your APU must be compatible with ROCm in the first place. Check the AMD documentation on how to install ROCm for your distribution.
## Development

Read the CONTRIBUTING.md file.
## How to release

Update the pyproject.toml file with the desired version and run `make release` to create the new tag. After that, a GitHub Action will publish the package to PyPI.

Once it is published, run the `docker_build_and_publish.sh <version-number>` script to update the Docker images.
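The release steps above can be sketched as a shell sequence. The version number below is hypothetical, and the commands are echoed rather than executed so the sketch is safe to run anywhere:

```shell
VERSION="0.1.2"  # hypothetical next version

# 1. Bump the version in pyproject.toml (sed shown for illustration; editing by hand works too):
echo sed -i "s/^version = \".*\"/version = \"$VERSION\"/" pyproject.toml

# 2. Create the release tag; a GitHub Action then publishes the package to PyPI:
echo make release

# 3. After the PyPI release is live, rebuild and push the Docker images:
echo ./docker_build_and_publish.sh "$VERSION"
```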