
pytorch_rocm_gtt


Python package that lets PyTorch on ROCm overcome the reserved iGPU memory limit.

Based on https://github.com/pomoke/torch-apu-helper/tree/main, after discussion here: https://github.com/ROCm/ROCm/issues/2014

Install it from PyPI

pip install pytorch_rocm_gtt

Usage

Just call this before starting PyTorch allocations (models or tensors):

import pytorch_rocm_gtt

pytorch_rocm_gtt.patch()

The hipcc command should be in your $PATH.
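
For context, the torch-apu-helper project this package is based on works by compiling a tiny HIP allocator that uses hipMallocManaged (which draws from GTT, i.e. shared system memory, rather than the fixed VRAM carve-out) and registering it as PyTorch's CUDA allocator. The sketch below illustrates that idea and why hipcc is needed; the file and function names are illustrative, not the package's actual internals:

# Rough sketch of the torch-apu-helper approach that patch() automates.
# Names such as gtt_allocator.hip, gtt_alloc and gtt_free are illustrative only.
import os
import subprocess
import tempfile

import torch

_HIP_SOURCE = r"""
#include <hip/hip_runtime_api.h>

extern "C" {
// Allocate from GTT (shared system memory) instead of the reserved VRAM carve-out.
void* gtt_alloc(size_t size, int device, hipStream_t stream) {
    void* ptr = nullptr;
    hipMallocManaged(&ptr, size);
    return ptr;
}

void gtt_free(void* ptr, size_t size, int device, hipStream_t stream) {
    hipFree(ptr);
}
}
"""

def build_and_register_allocator():
    workdir = tempfile.mkdtemp()
    src = os.path.join(workdir, "gtt_allocator.hip")
    lib = os.path.join(workdir, "libgtt_allocator.so")
    with open(src, "w") as fh:
        fh.write(_HIP_SOURCE)
    # This is why hipcc has to be in $PATH: the allocator is compiled at runtime.
    subprocess.run(["hipcc", src, "-o", lib, "-shared", "-fPIC"], check=True)
    # Route all "cuda" allocations in PyTorch through the new allocator.
    allocator = torch.cuda.memory.CUDAPluggableAllocator(lib, "gtt_alloc", "gtt_free")
    torch.cuda.memory.change_current_allocator(allocator)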

After that, just allocate GPU memory as you would with CUDA:

import torch

torch.rand(1000).to("cuda")

Compatibility

In order to use this package, your APU must be compatible with ROCm in the first place.

Check the AMD documentation for how to install ROCm on your distribution.
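
If you are unsure whether your setup is ready, a quick sanity check (assuming a ROCm build of PyTorch is installed) is to confirm that PyTorch reports a HIP version and can see the APU before patching:

import torch

print(torch.version.hip)          # ROCm/HIP version string; None on CUDA-only builds
print(torch.cuda.is_available())  # True if the ROCm runtime can see a usable device
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # should report your APU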

Development

Read the CONTRIBUTING.md file.

How to release

Update the pyproject.toml file with the desired version, then run make release to create the new tag.

After that, the GitHub Action will publish the package to PyPI.

Once it is published, run the docker_build_and_publish.sh <version-number> script to update the Docker images.
