
Modular and flexible library for Reinforcement Learning

Project description



SKRL - Reinforcement Learning library


skrl is an open-source modular library for Reinforcement Learning written in Python (using PyTorch) and designed with a focus on readability, simplicity, and transparency of algorithm implementation. In addition to supporting the OpenAI Gym / Farama Gymnasium, DeepMind, and other environment interfaces, it allows loading and configuring NVIDIA Isaac Gym, NVIDIA Isaac Orbit and NVIDIA Omniverse Isaac Gym environments, enabling simultaneous training of agents by scopes (subsets of environments among all available environments), which may or may not share resources, in the same run.
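
As a quick illustration of the common environment interface, the short sketch below (an assumption-based example, not taken from this page; see the documentation for the exact, up-to-date API) wraps a standard Gym environment with skrl's wrap_env helper so that agents and trainers can consume it through a single interface:

import gym
from skrl.envs.torch import wrap_env

# create a standard Gym environment and wrap it for skrl
# (the wrapper exposes a common interface regardless of the underlying environment type)
env = gym.make("Pendulum-v1")
env = wrap_env(env)

print(env.num_envs, env.observation_space, env.action_space)

According to the description above, the same wrapping mechanism also covers the NVIDIA Isaac Gym, Isaac Orbit and Omniverse Isaac Gym interfaces, which is what enables training by scopes across the available environments.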


Please visit the documentation for usage details and examples:

https://skrl.readthedocs.io/en/latest/


Note: This project is under active continuous development. Please make sure you always have the latest version. Visit the develop branch or its documentation to access the latest updates to be released.


Citing this library

To cite this library in publications, please use the following reference:

@article{serrano2022skrl,
  title={skrl: Modular and Flexible Library for Reinforcement Learning},
  author={Serrano-Mu{\~n}oz, Antonio and Arana-Arexolaleiba, Nestor and Chrysostomou, Dimitrios and B{\o}gh, Simon},
  journal={arXiv preprint arXiv:2202.03825},
  year={2022}
}

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
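In most cases the files below do not need to be downloaded manually; the release can be installed from PyPI with pip (version pinned here to match this page):

pip install skrl==0.10.1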

Source Distribution

skrl-0.10.1.tar.gz (87.3 kB)

Built Distribution

skrl-0.10.1-py3-none-any.whl (137.3 kB)
