
Neural Pipeline Search

Neural Pipeline Search helps deep learning experts find the best neural pipeline.

Features:

  • Hyperparameter optimization (HPO)
  • Neural architecture search (NAS): cell-based and hierarchical
  • Joint NAS and HPO
  • Expert priors to guide the search
  • Asynchronous parallelization and distribution
  • Fault tolerance for crashes and job time limits

Soon-to-come Features:

  • Multi-fidelity
  • Cost-aware
  • Transfer across code versions
  • Python 3.8+ support
  • Multi-objective


Installation

Using pip

pip install neural-pipeline-search

Optional: Specific torch versions

If you run into any issues with versions of the torch ecosystem (for example, needing CUDA-enabled versions), you might want to use our utility

python -m neps.utils.install_torch

This script asks for the torch version you want and installs all torch libraries needed by the neps package for that version. The pip of the active Python environment is used for the installation.

Usage

Using neps always follows the same pattern:

  1. Define a run_pipeline function that evaluates architectures/hyperparameters for your problem
  2. Define a search space pipeline_space of architectures/hyperparameters
  3. Call neps.run to optimize run_pipeline over pipeline_space

In code, the usage pattern can look like this:

import neps
import logging

# 1. Define a function that accepts hyperparameters and computes the validation error
def run_pipeline(hyperparameter_a: float, hyperparameter_b: int):
    validation_error = -hyperparameter_a * hyperparameter_b
    return validation_error


# 2. Define a search space of hyperparameters; use the same names as in run_pipeline
pipeline_space = dict(
    hyperparameter_a=neps.FloatParameter(lower=0, upper=1),
    hyperparameter_b=neps.IntegerParameter(lower=1, upper=100),
)

# 3. Call neps.run to optimize run_pipeline over pipeline_space
logging.basicConfig(level=logging.INFO)
neps.run(
    run_pipeline=run_pipeline,
    pipeline_space=pipeline_space,
    working_directory="usage_example",
    max_evaluations_total=5,
)
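
Assuming the run above completes, its results are written to the usage_example directory given as working_directory and can then be inspected with the status command described further below:

python -m neps.status usage_example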

More examples

For more examples covering the features of neps, have a look at neps_examples.

Status information

To show status information about a neural pipeline search, use

python -m neps.status WORKING_DIRECTORY

If you need more status information than is printed by default (e.g., the best config over time), please have a look at

python -m neps.status --help

To show the status repeatedly, on Unix systems you can use

watch --interval 30 python -m neps.status WORKING_DIRECTORY

Parallelization

To run a neural pipeline search with multiple processes or multiple machines, simply call neps.run multiple times. All calls to neps.run need to use the same working_directory on the same filesystem; otherwise, there is no synchronization between the neps.run instances.
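
As a minimal sketch, each worker can simply be a copy of the usage example above pointed at a shared directory; launching the script several times (in separate terminals, or as separate cluster jobs) starts several workers. The file name worker.py and the directory name parallel_example used here are placeholders:

# worker.py: run this script once per desired worker
import logging

import neps

# Same pipeline and search space as in the usage example above
def run_pipeline(hyperparameter_a: float, hyperparameter_b: int):
    validation_error = -hyperparameter_a * hyperparameter_b
    return validation_error

pipeline_space = dict(
    hyperparameter_a=neps.FloatParameter(lower=0, upper=1),
    hyperparameter_b=neps.IntegerParameter(lower=1, upper=100),
)

logging.basicConfig(level=logging.INFO)
neps.run(
    run_pipeline=run_pipeline,
    pipeline_space=pipeline_space,
    working_directory="parallel_example",  # identical for every worker
    max_evaluations_total=50,  # total evaluation budget
)

Then start the workers, e.g., with python worker.py in each terminal or job.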

Contributing

Please see our guidelines and guides for contributors at CONTRIBUTING.md.
