
Reliably extension for the Chaos Toolkit


Reliably support for the Chaos Toolkit.

Install

To be used from your experiment, this package must be installed in the Python environment where chaostoolkit already lives.

$ pip install chaostoolkit-reliably

Authentication

To use this package, you must have registered with the Reliably services through their CLI.

There are two ways to pass the credential information.

The first is to specify the path to Reliably's configuration file, which defaults to $HOME/.config/reliably/config.yaml. The simplest way to create this file is to run $ reliably login, which generates it for you.

{
    "configuration": {
        "reliably_config_path": "~/.config/reliably/config.yaml"
    }
}

Because this is the default path, you may omit this configuration entry altogether unless you need a different path.
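Resolving that path amounts to a lookup with a fallback to the default location. The sketch below is a hypothetical helper (the function name and constant are illustrative, not part of the extension's API) showing how such a resolution behaves:

```python
import os

# Hypothetical sketch: resolve the Reliably configuration path, using
# the default location when "reliably_config_path" is not provided.
DEFAULT_CONFIG_PATH = "~/.config/reliably/config.yaml"

def resolve_config_path(configuration):
    path = configuration.get("reliably_config_path", DEFAULT_CONFIG_PATH)
    # Expand "~" so the path points inside the user's home directory.
    return os.path.expanduser(path)

print(resolve_config_path({}))
print(resolve_config_path({"reliably_config_path": "/etc/reliably/config.yaml"}))
```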

The second is to set environment variables as secrets. This covers specific use cases and is usually not required.

  • RELIABLY_TOKEN: the token used to authenticate against Reliably's API
  • RELIABLY_ORG: the Reliably organisation to use
  • RELIABLY_HOST: the hostname to connect to, defaults to reliably.com
{
    "secrets": {
        "reliably": {
            "token": {
                "type": "env",
                "key": "RELIABLY_TOKEN"
            },
            "org": {
                "type": "env",
                "key": "RELIABLY_ORG"
            },
            "host": {
                "type": "env",
                "key": "RELIABLY_HOST",
                "default": "reliably.com"
            }
        }
    }
}
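An "env" typed secret is read from the process environment, with the optional "default" applying when the variable is not set. The following is a simplified sketch of that lookup (a stand-in helper, not the actual Chaos Toolkit loader):

```python
import os

# Simplified sketch of how an "env" typed secret is resolved: read the
# named environment variable, falling back to the optional default.
def resolve_env_secret(spec):
    return os.environ.get(spec["key"], spec.get("default"))

os.environ["RELIABLY_TOKEN"] = "s3cr3t"      # example value for the demo
os.environ.pop("RELIABLY_HOST", None)        # ensure the default applies

token = resolve_env_secret({"type": "env", "key": "RELIABLY_TOKEN"})
host = resolve_env_secret(
    {"type": "env", "key": "RELIABLY_HOST", "default": "reliably.com"}
)
print(token, host)
```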

Usage

As Steady State Hypothesis

You can use Reliably's SLOs as a mechanism to determine whether your system has deviated during a Chaos Toolkit experiment. Here is a simple example:

"steady-state-hypothesis": {
    "title": "We do not consume all of our error budgets during the experiment",
    "probes": [
        {
            "name": "Our 'Must be good' SLO results must be OK",
            "type": "probe",
            "provider": {
                "type": "python",
                "module": "chaosreliably.slo.probes",
                "func": "slo_is_met",
                "arguments": {
                    "labels": {"name": "must-be-good", "service": "must-be-good-service"},
                    "limit": 5
                }
            },
            "tolerance": true,
        }
    ]
}

The example above fetches the last 5 Objective Results for our "Must be good" SLO and determines whether they were all okay, or whether we have consumed more of our error budget than allowed.
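Conceptually, the check reduces to "take the last N objective results and verify they are all okay". The sketch below is a deliberately simplified, hypothetical stand-in for slo_is_met: the real probe fetches the Objective Results from Reliably's API rather than taking them as an argument:

```python
# Hypothetical simplification of the slo_is_met tolerance logic: the
# real probe retrieves Objective Results from the Reliably API; here
# we pass them in directly to show the check itself.
def all_recent_results_ok(results, limit=5):
    recent = results[-limit:]  # only consider the last `limit` results
    return all(r["ok"] for r in recent)

# Seven good results followed by one breach: the last 5 contain the
# breach, so the hypothesis is not met.
history = [{"ok": True}] * 7 + [{"ok": False}]
print(all_recent_results_ok(history, limit=5))  # False
```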

As Safeguards

Safeguards, provided by the Chaos Toolkit addons extension, give you a convenient way to interrupt an experiment as soon as error budgets have been consumed. This is orthogonal to the steady-state hypothesis: it is a mechanism to protect your system from being harmed too harshly by an experiment.

"controls": [
    {
        "name": "safeguard",
        "provider": {
            "type": "python",
            "module": "chaosaddons.controls.safeguards",
            "arguments": {
                "probes": [
                    {
                        "name": "we-do-not-have-enough-error-budget-left-to-carry-on",
                        "type": "probe",
                        "frequency": 5,
                        "provider": {
                            "type": "python",
                            "module": "chaosreliably.slo.probes",
                            "func": "slo_is_met",
                            "arguments": {
                                "labels": {"name": "must-be-good", "service": "must-be-good-service"},
                                "limit": 5
                            }
                        },
                        "tolerance": true
                    }
                ]
            }
        }
    }
]

As you can see, this is the same construct as the steady-state hypothesis, merely used for a different purpose. Here these probes are executed every 5 seconds during the experiment (this frequency is for demonstration purposes; you would usually run it at most once per minute).
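The interruption behaviour can be pictured with a small sketch (hypothetical and heavily simplified: the real safeguards control in chaosaddons schedules probes on a time-based frequency in the background, rather than checking between steps):

```python
# Hypothetical sketch of a safeguard: check a probe between experiment
# steps and stop the run as soon as its tolerance is breached.
def run_with_safeguard(steps, probe):
    completed = []
    for name, action in steps:
        if not probe():
            return completed, "interrupted"
        action()
        completed.append(name)
    return completed, "completed"

# Stand-in probe: pretend each check consumes some error budget, so
# the third check fails and the experiment is interrupted.
budget = {"remaining": 2}

def error_budget_left():
    budget["remaining"] -= 1
    return budget["remaining"] >= 0

steps = [("inject-latency", lambda: None), ("kill-pod", lambda: None),
         ("rollback", lambda: None)]
print(run_with_safeguard(steps, error_budget_left))
```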

Contribute

If you wish to contribute more functions to this package, you are more than welcome to do so. Please fork this project, make your changes following the usual PEP 8 code style, sprinkle them with tests, and submit a PR for review.

Develop

If you wish to develop on this project, make sure to install the development dependencies. First create a virtual environment, then install the dependencies:

$ pip install -r requirements-dev.txt -r requirements.txt 

Then, point your environment to this directory:

$ python setup.py develop

Now you can edit the files and they will automatically be picked up by your environment, even when running the chaos command locally.

Test

To run the tests for the project execute the following:

$ pytest

Linting & Formatting

A Makefile is provided to abstract away the linting and formatting commands.

To lint the project, run:

$ make lint

To format the project, run:

$ make format
