
A high-performance deep learning inference library

Project description

NVIDIA TensorRT is an SDK that facilitates high-performance machine learning inference. It is designed to work in a complementary fashion with training frameworks such as TensorFlow, PyTorch, and MXNet. It focuses specifically on running an already-trained network quickly and efficiently on NVIDIA hardware.

IMPORTANT: This is a special release of TensorRT designed to work only with TensorRT-LLM. Please refrain from upgrading to this version if you are not using TensorRT-LLM.

To install, please execute the following:

pip install tensorrt --extra-index-url https://pypi.nvidia.com

Or add the index URL to the (space-separated) PIP_EXTRA_INDEX_URL environment variable:

export PIP_EXTRA_INDEX_URL='https://pypi.nvidia.com'
pip install tensorrt
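
After installation, a quick sanity check such as the following can confirm that the package imports and that the runtime is usable (a minimal sketch, assuming the standard tensorrt Python module and a working NVIDIA driver/CUDA setup):

import tensorrt

# Print the installed TensorRT version and create a builder as a basic smoke test.
print(tensorrt.__version__)
builder = tensorrt.Builder(tensorrt.Logger(tensorrt.Logger.WARNING))
assert builder is not None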

When the extra index URL does not contain https://pypi.nvidia.com, a nested pip install runs with the proper extra index URL hard-coded.
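
For illustration only, the nested install behaves roughly like the sketch below: the meta-package's setup re-invokes pip with the NVIDIA index when that index is not already configured. The helper name nested_install and the tensorrt-cu12 package name are assumptions for this sketch, not the package's actual code:

import os
import subprocess
import sys

NVIDIA_INDEX = "https://pypi.nvidia.com"

def nested_install(package="tensorrt-cu12"):
    # Illustrative sketch only; the real setup.py may differ.
    # Skip the nested install if the NVIDIA index is already configured.
    if NVIDIA_INDEX in os.environ.get("PIP_EXTRA_INDEX_URL", ""):
        return
    # Re-invoke pip with the NVIDIA extra index URL hard-coded.
    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "--extra-index-url", NVIDIA_INDEX,
        package,
    ])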

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

tensorrt_lean-cu12-10.0.1.tar.gz (18.3 kB)

