
A set of tools to use long-running Python functions as pipeline jobs: output-value caching and execution logging.

Project description

Joblib provides a set of tools for using long-running Python functions as pipeline jobs:

  1. transparent disk-caching of the output values and lazy re-evaluation

  2. logging of the execution

The original focus was on scientific-computing scripts, but any long-running succession of operations can benefit from the tools provided by joblib.


Joblib came out of long-running data-analysis Python scripts. The long-term vision is to provide tools for scientists to achieve better reproducibility when running jobs. However, Joblib can also be used as a light-weight make replacement.

The main problems identified are:

  1. Lazy evaluation: people need to rerun the same script over and over as it is tuned, but because some steps take a long time to run, they end up commenting steps out and back in as they are needed.

  2. Persistence: it is difficult to persist arbitrary objects containing large numpy arrays in an efficient way. In addition, hand-written persistence does not easily link the file on disk to the Python object it was persisted from in the script. This leaves people with a hard time resuming a job, e.g. after a crash, and with persistence getting in the way of their work.

The approach taken by Joblib to address these problems is not to build a heavy framework and coerce the user into it (e.g. with a pipeline object). It strives to leave your code and your flow control as unmodified as possible.
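As a minimal sketch of what this means in practice (using the Memory API shown in the example below; the long_running_step function and the cache directory here are made up for illustration):

    import time
    from joblib import Memory

    # Hypothetical long-running step in an analysis script
    def long_running_step(x):
        time.sleep(10)        # stands in for an expensive computation
        return x ** 2

    # Without joblib, the step is simply called:
    #     result = long_running_step(3)

    # With joblib, the only change is wrapping the function; the call site,
    # and hence the flow control of the script, stays the same.
    mem = Memory(cachedir='/tmp/joblib')    # cache directory chosen for the example
    long_running_step = mem.cache(long_running_step)
    result = long_running_step(3)           # recomputed only when the input changes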

The tools that have been identified and developed so far are:

  1. Transparent and fast disk-caching of output values: a make-like functionality for Python functions. The goal is to separate a script into a set of steps with well-defined inputs and outputs, which can be saved and rerun only when necessary, while still using standard Python functions:

    >>> from joblib import Memory
    >>> mem = Memory(cachedir='/tmp/joblib', debug=True)
    >>> import numpy as np
    >>> a = np.vander(np.arange(3))
    >>> square = mem.cache(np.square)
    >>> b = square(a)
    DBG:Call square(array([[0, 0, 1],
           [1, 1, 1],
           [4, 2, 1]]))
    >>> c = square(a)
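    >>> # The second call, square(a), did not re-run np.square:
    >>> # its result was loaded back from the disk cache.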
  2. Logging/tracing: the functionality described above will progressively acquire a better logging mechanism to help track what has been run and capture I/O easily. In addition, Joblib will provide a few I/O primitives to easily define logging and display streams, and maybe a way of compiling a report. In the long run, we would like to be able to quickly inspect what has been run.
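Those primitives are still to come; as a stop-gap, here is a minimal sketch of the kind of call tracing meant above, using only the standard library logging module (the traced decorator and analysis_step function are hypothetical and not part of the joblib API):

    import functools
    import logging

    logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')

    def traced(func):
        """Hypothetical helper: log each call, its arguments, and its completion."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            logging.info('Calling %s(args=%r, kwargs=%r)', func.__name__, args, kwargs)
            result = func(*args, **kwargs)
            logging.info('Finished %s', func.__name__)
            return result
        return wrapper

    @traced
    def analysis_step(x):
        return x * 2

    analysis_step(21)   # logs the call and its arguments, then the completion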

As stated on the project page, the project is currently of alpha quality. I am testing all the features heavily, as I care more about robustness than about having plenty of features. On the other hand, I expect to be playing with the API and features for a while before I can figure out the right set of functionality to expose.

The code is hosted on Launchpad, for the good reason that branching the project and publishing a branch alongside mine is dead easy.


Download files


Source Distribution

joblib-0.3.2a.dev.tar.gz (127.7 kB)

