
An easy-to-use reinforcement learning library


A Reinforcement Learning Library for Research and Education


Try it on Google Colab!


Section          Description
---------------  -------------------------------
Goals            The philosophy of rlberry
Installation     How to install rlberry
Getting started  A quick usage guide of rlberry
Documentation    A link to the documentation
Contributing     A guide for contributing
Citation         How to cite this work

Goals

  • Write detailed documentation and comprehensible tutorials/examples (Jupyter notebooks) for each implemented algorithm.

  • Provide a general interface for agents that

    • puts minimal constraints on the agent code (=> making it easy to include new algorithms and modify existing ones);

    • allows comparison between agents using a simple and unified evaluation interface (=> making it easy, for instance, to compare deep and "traditional" RL algorithms).

  • Unified seeding mechanism: define a single global seed, from which all other seeds are derived, ensuring independence of the random number generators (see the sketch after this list).

  • Simple interface for creating and rendering new environments.
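
To make the seeding goal concrete, here is a minimal sketch of the general pattern, one global seed spawned into independent per-component generators. It uses NumPy's SeedSequence and is an illustration only, not rlberry's actual API:

# Illustration only: NOT rlberry's API, just the general pattern of
# deriving independent per-component generators from one global seed.
import numpy as np

GLOBAL_SEED = 42

# One root sequence; every component gets its own spawned child seed.
root = np.random.SeedSequence(GLOBAL_SEED)
env_seed, agent_seed, eval_seed = root.spawn(3)

env_rng = np.random.default_rng(env_seed)      # randomness for the environment
agent_rng = np.random.default_rng(agent_seed)  # randomness for the agent
eval_rng = np.random.default_rng(eval_seed)    # randomness for evaluation

# The three generators are statistically independent, yet the whole
# experiment is reproducible from the single GLOBAL_SEED.
print(env_rng.integers(0, 10, size=3))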

Installation

Cloning & creating virtual environment

We suggest creating a virtual environment using Anaconda or Miniconda:

git clone https://github.com/rlberry-py/rlberry.git
cd rlberry
conda create -n rlberry python=3.7

Basic installation

Install without heavy dependencies (e.g., PyTorch):

conda activate rlberry
pip install -e .

Full installation

Install with all features:

conda activate rlberry
pip install -e .[full]

The full installation includes:

  • Numba for just-in-time compilation of algorithms based on dynamic programming,
  • PyTorch for Deep RL agents,
  • Optuna for hyperparameter optimization,
  • ffmpeg-python for saving videos,
  • PyOpenGL for more rendering options.

Getting started

Tests

To run tests, install the test dependencies with pip install -e .[test] and run pytest. To run tests with coverage, install the test dependencies and run bash run_testscov.sh; the coverage report is written to cov_html/index.html.

Documentation

Contributing

Want to contribute to rlberry? Please check our contribution guidelines. A list of interesting TODOs will be available soon. If you want to add new agents or environments, do not hesitate to open an issue!

Implementation notes

  • When inheriting from the Agent class, call Agent.__init__(self, env, **kwargs) and forward **kwargs, so that your agent keeps working when new features are added to the base class and so that options such as copy_env and reseed_env remain available to every agent (see the sketch after these notes).

  • Convention for verbose in the agents:

    • verbose=0: nothing is printed
    • verbose>1: progress messages are printed

Errors and warnings are printed using the logging library.
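
The sketch below illustrates these conventions. Only Agent.__init__(self, env, **kwargs), copy_env, reseed_env, the verbose convention, and the use of logging come from the notes above; the import path, the fit method, and the n_episodes parameter are hypothetical, used purely for illustration:

# Minimal sketch of a new agent, following the conventions above.
# ASSUMPTIONS: the import path below, the `fit` method, and the
# `n_episodes` parameter are hypothetical placeholders.
import logging

from rlberry.agents import Agent  # import path is an assumption

logger = logging.getLogger(__name__)


class MyAgent(Agent):
    def __init__(self, env, n_episodes=100, verbose=0, **kwargs):
        # Forward **kwargs so this agent keeps working when new options
        # (e.g. copy_env, reseed_env) are added to the base class.
        Agent.__init__(self, env, **kwargs)
        self.n_episodes = n_episodes
        self.verbose = verbose

    def fit(self, **kwargs):
        for episode in range(self.n_episodes):
            # ... run one training episode ...
            if self.verbose > 1:
                print(f"episode {episode + 1}/{self.n_episodes}")
        # Errors and warnings go through the logging library, not print.
        logger.warning("fit() is a sketch; no actual training happened.")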

Citing rlberry

