
# anyrl-py

This is a Python remake (and makeover) of [anyrl](https://github.com/unixpickle/anyrl). It is a general-purpose library for Reinforcement Learning which aims to be as modular as possible.

# Installation

You can install anyrl with pip:

```
pip install anyrl
```

# APIs

There are several different sub-modules in anyrl:

  * `models`: abstractions and concrete implementations of RL models, including actor-critic RNNs, MLPs, and CNNs. Also takes care of details like sequence padding and BPTT.

  * `envs`: APIs for dealing with environments, including wrappers and asynchronous environments.

  * `rollouts`: APIs for gathering and manipulating batches of episodes or partial episodes. Many RL algorithms include a "gather trajectories" step, and this sub-module fulfills that role.

  * `algos`: well-known learning algorithms like policy gradients and PPO. Also includes mini-algorithms like Generalized Advantage Estimation (a standalone sketch of GAE follows this list).

  * `spaces`: tools for using action and observation spaces, including parameterized probability distributions for implementing stochastic policies.
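Since Generalized Advantage Estimation is small and self-contained, here is a minimal plain-NumPy reference implementation of the published estimator (this is an illustrative sketch, not code from anyrl):

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Compute GAE advantages for a single episode.

    rewards: per-step rewards, length T.
    values:  value predictions, length T+1. The extra entry is the
             bootstrap value for the state after the last step; use
             0.0 there if the episode ended at a terminal state.
    """
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    # Walk backwards, accumulating discounted TD residuals:
    #   delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    #   A_t     = delta_t + gamma * lam * A_{t+1}
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages
```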

# Motivation

Most RL code out there is restrictive and tightly coupled. In contrast, anyrl aims to be extremely modular and flexible: the goal is to decouple agents, learning algorithms, trajectories, and components like GAE.

For example, anyrl decouples rollouts from the learning algorithm (when possible). This way, you can gather rollouts in several different ways and still feed the results into one learning algorithm. Further, and more obviously, you don’t have to rewrite rollout code for every new RL algorithm you implement. However, algorithms like A3C and Evolution Strategies may have specific ways of performing rollouts that can’t rely on the rollout API.
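As a rough illustration of that decoupling, a training loop might look like the sketch below. The names and constructor arguments (`MLP`, `BasicRoller`, `PPO`, `gym_space_distribution`, `gym_space_vectorizer`) are inferred from the sub-module descriptions above and should be checked against the package before use:

```python
import gym
import tensorflow as tf

from anyrl.algos import PPO
from anyrl.models import MLP
from anyrl.rollouts import BasicRoller
from anyrl.spaces import gym_space_distribution, gym_space_vectorizer

env = gym.make('CartPole-v0')

with tf.Session() as sess:
    # Build a stochastic policy from the env's action/observation spaces.
    model = MLP(sess,
                gym_space_distribution(env.action_space),
                gym_space_vectorizer(env.observation_space),
                layer_sizes=[32])
    # The roller only gathers trajectories; it knows nothing about PPO.
    roller = BasicRoller(env, model, min_episodes=8)
    ppo = PPO(model)
    optimize = ppo.optimize(learning_rate=1e-3)
    sess.run(tf.global_variables_initializer())
    for _ in range(50):
        rollouts = roller.rollouts()          # gather trajectories
        ppo.run_optimize(optimize, rollouts)  # learn from them
```

Because the roller and the algorithm only share the rollout data structure, swapping `BasicRoller` for an asynchronous or batched roller should not require touching the PPO code.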

# Use of TensorFlow

This project relies on TensorFlow for models and training algorithms. However, anyrl APIs are framework-agnostic when possible. For example, the rollout API can be used with any policy, whether it’s a TensorFlow neural network or a native-Python decision forest.
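For instance, a policy that never touches TensorFlow can still drive the rollout machinery. The sketch below follows the stateless-model shape implied above (a `step` method mapping a batch of observations to actions); the exact interface is an assumption here and should be verified against `anyrl.models.Model`:

```python
import random

class RandomDiscretePolicy:
    """A hypothetical plain-Python policy with no neural network."""

    def __init__(self, num_actions):
        self.num_actions = num_actions

    @property
    def stateful(self):
        # No recurrent state is carried between timesteps.
        return False

    def start_state(self, batch_size):
        # Stateless policies have no per-episode state to initialize.
        return None

    def step(self, observations, states):
        # Choose a uniformly random action for each observation in the batch.
        return {
            'actions': [random.randrange(self.num_actions)
                        for _ in observations],
            'states': None,
        }
```

Rollout-gathering code written against this shape behaves the same whether `step` runs a TensorFlow graph or plain Python.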

# TODO

Here is the current TODO list, organized by sub-module:

  * `models`
    * Unify CNN and MLP models with a single base class.
    * Unshared actor-critics for TRPO and the like.

  * `rollouts`
    * Maybe: a way to not record states in `model_outs` (memory saving).
    * Normalization based on advantage magnitudes.
    * Optimize for sub-batches in `BatchedPlayer`.

  * `algos`
    * TRPO
    * PPO: allow clipping for the value function.

  * `spaces`
    * `Dict`

  * `tests`
    * Benchmarks for rollouts.
    * Benchmarks for training.
