
# CronQ

A cron-like system to run your application tasks across any node, instead of one
special snowflake. This is done by keeping your tasks in MySQL and publishing
them over AMQP to workers that run them and eventually save the results back
into the database. CronQ was started as a hackathon project at
[SeatGeek](http://seatgeek.com).

## Requirements

- Python 2.7
- RabbitMQ 3.x
- MySQL 5.x

## Installation

```
pip install cronq
```

## Usage

There are various workers used by CronQ, as well as a web admin.

### cronq-runner

The `runner` executes tasks and should be run on hosts that will actually perform work. There is no limit to the number of runners that can run at once.

The runner requires `/var/log/cronq/` to exist and be writable by the user
executing the runner.

```
# setup rabbitmq connection info
export RABBITMQ_HOST=localhost
export RABBITMQ_USER=guest
export RABBITMQ_PASS=guest

# specify the rabbitmq queue to listen to (cronq_jobs is the default)
export CRONQ_QUEUE=cronq_jobs

# run commands
cronq-runner
```

When run, `cronq-runner` will (see the sketch after this list):

- Set up a RabbitMQ connection
- Listen to the `cronq_jobs` queue
- Retrieve commands from the queue
- Publish a message saying the command has started
- Run the command in a shell subprocess
- Publish a message on success/failure to the `cronq` exchange and `cronq_results` queue (this is not configurable)
- Listen for more messages
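
The loop above can be sketched in a few lines of Python. This is an illustrative sketch only, not CronQ's actual runner: it assumes the `pika` AMQP client (>= 1.0), uses the queue/exchange names described above, and the fields of the result payload are invented for the example.

```
# Illustrative sketch of the runner loop -- NOT CronQ's real implementation.
# Assumes pika >= 1.0; the result payload fields are invented for the example.
import json
import os
import subprocess

import pika

params = pika.ConnectionParameters(
    host=os.environ.get('RABBITMQ_HOST', 'localhost'),
    credentials=pika.PlainCredentials(
        os.environ.get('RABBITMQ_USER', 'guest'),
        os.environ.get('RABBITMQ_PASS', 'guest')))
channel = pika.BlockingConnection(params).channel()

def handle(ch, method, properties, body):
    job = json.loads(body)
    # (The real runner also publishes a "started" message before executing.)
    # Run the command in a shell subprocess, then report the outcome
    # toward the cronq exchange / cronq_results queue.
    returncode = subprocess.call(job['command'], shell=True)
    result = {'id': job['id'], 'name': job['name'], 'return_code': returncode}
    ch.basic_publish(exchange='cronq', routing_key='cronq_results',
                     body=json.dumps(result))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue=os.environ.get('CRONQ_QUEUE', 'cronq_jobs'),
                      on_message_callback=handle)
channel.start_consuming()
```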

### cronq-injector

> The `cronq-injector` command will non-destructively create any missing `cronq` tables, though it needs a database to run against. If the tables have not been created yet, run the injector first.

The `injector` is used to retrieve jobs from the database and publish them to AMQP. Jobs are published in the following format:

```
# where job is a database record
{
    'name': job.name,
    'command': unicode(job.command),
    'id': job.id,
}
```

You can run as many injectors as necessary; MySQL isolation levels are used to take locks on job records.

```
# setup rabbitmq connection info
export RABBITMQ_HOST=localhost
export RABBITMQ_USER=guest
export RABBITMQ_PASS=guest

# specify the database connection string
export CRONQ_MYSQL=mysql+mysqlconnector://cronq:cronq@localhost/cronq

# run the command injector
cronq-injector
```

`cronq-injector` sleeps for one second between injection cycles, but may inject an unlimited number of jobs in each cycle.

Note that jobs are not queued up at the *exact* time you specify in the database. Rather, jobs that match the following heuristic are queued one at a time until no jobs are left to be queued for that injection cycle:

```
Job.next_run < NOW() OR Job.run_now = 1
```
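
As a rough illustration of a single injection cycle (not CronQ's actual code: the table and column names are guesses based on the heuristic above, the real injector also uses isolation-level locking and updates `next_run`, and the AMQP routing is simplified to the default exchange):

```
# Illustrative sketch of one injection cycle -- NOT CronQ's real injector.
# Table/column names are assumptions based on the heuristic above.
import json

import mysql.connector
import pika

db = mysql.connector.connect(user='cronq', password='cronq',
                             host='localhost', database='cronq')
cursor = db.cursor(dictionary=True)
cursor.execute("SELECT id, name, command FROM job "
               "WHERE next_run < NOW() OR run_now = 1")

channel = pika.BlockingConnection(
    pika.ConnectionParameters(host='localhost')).channel()

for job in cursor:
    # Publish each due job, one at a time, in the format shown above.
    # (Publishing straight to the queue via the default exchange keeps the
    # sketch simple; CronQ's real exchange/routing setup may differ.)
    channel.basic_publish(exchange='', routing_key='cronq_jobs',
                          body=json.dumps({'name': job['name'],
                                           'command': job['command'],
                                           'id': job['id']}))
```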

### cronq-results

The `results` aggregator listens to the `cronq_results` queue for the results of `cronq-runner` executions. You can run as many of these as you like; they retrieve results one at a time from RabbitMQ.

```
# setup rabbitmq connection info
export RABBITMQ_HOST=localhost
export RABBITMQ_USER=guest
export RABBITMQ_PASS=guest

# specify the database connection string
export CRONQ_MYSQL=mysql+mysqlconnector://cronq:cronq@localhost/cronq

# run the results aggregator
cronq-results
```

These results can be viewed per command in the web admin, or by inspecting the database.
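
To get a feel for what the aggregator sees, the toy consumer below just prints each result message instead of writing it to MySQL; the `prefetch_count=1` setting mirrors the one-at-a-time behavior described above (again assuming the `pika` client, not CronQ's own code).

```
# Toy consumer for the cronq_results queue -- prints results rather than
# storing them in MySQL like the real cronq-results worker does.
import pika

channel = pika.BlockingConnection(
    pika.ConnectionParameters(host='localhost')).channel()
channel.basic_qos(prefetch_count=1)  # take one result at a time

def handle(ch, method, properties, body):
    print(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='cronq_results', on_message_callback=handle)
channel.start_consuming()
```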

### cronq-web

The web view is a WSGI app served from `cronq.web:app` and requires only database access. The following is an example of running the web admin using webscale technologies:

```
# install libevent-dev
sudo apt-get install libevent-dev

# install required python libraries
sudo pip install greenlet gevent gunicorn

# specify the database connection string
export CRONQ_MYSQL=mysql+mysqlconnector://cronq:cronq@localhost/cronq

# run the web admin
gunicorn --access-logfile - -w 2 --worker-class=gevent cronq.web:app

# access the panel on http://127.0.0.1:8000
```

The web admin lists available commands and their result history, and provides a button to immediately schedule a job.

#### Categories API

The web admin exposes a `category` endpoint which allows you to replace a set of jobs with a single API call:

```
curl -v 'localhost:5000/api/category/example' -f -XPUT -H 'content-type: application/json' -d '
{
    "category": "example",
    "jobs": [{
        "name": "Test Job",
        "schedule": "R/2013-05-29T00:00:00/PT1M",
        "command": "sleep 10",
        "routing_key": "slow"
    }]
}'
```

This adds or updates a job named `Test Job` in the `example` category. The schedule format is an ISO 8601 repeating interval (here, repeat every minute starting at 2013-05-29T00:00:00). Any jobs no longer defined for the `example` category will be removed, so you can script job additions and removals from your VCS.
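
The same call from Python, assuming the `requests` library is installed (the endpoint and payload are copied from the curl example above):

```
# Same PUT as the curl example above, using the requests library.
import requests

payload = {
    "category": "example",
    "jobs": [{
        "name": "Test Job",
        "schedule": "R/2013-05-29T00:00:00/PT1M",
        "command": "sleep 10",
        "routing_key": "slow",
    }],
}

response = requests.put("http://localhost:5000/api/category/example", json=payload)
response.raise_for_status()
```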


## License

BSD
