# Neural Pipeline Search
Neural Pipeline Search helps deep learning experts find the best neural pipeline.
## Features
- Hyperparameter optimization (HPO)
- Neural architecture search (NAS): cell-based and hierarchical
- Joint NAS and HPO
- Expert priors to guide the search
- Asynchronous parallelization and distribution
- Fault tolerance for crashes and job time limits
## Soon-to-come features
- Multi-fidelity
- Cost-aware
- Across code version transfer
- Python 3.8+ support
- Multi-objective
## Installation

### Using pip

```bash
pip install neural-pipeline-search
```
### Optional: specific torch versions

If you run into issues with versions of the torch ecosystem (e.g., needing CUDA-enabled builds), you might want to use our utility

```bash
python -m neps.utils.install_torch
```

This script asks for the torch version you want and installs all torch libraries needed by the neps package at that version. The installation uses the pip of the active Python environment.
## Usage

Using neps always follows the same pattern:

1. Define a `run_pipeline` function that evaluates architectures/hyperparameters for your problem
2. Define a search space `pipeline_space` of architectures/hyperparameters
3. Call `neps.run` to optimize `run_pipeline` over `pipeline_space`

In code, the usage pattern can look like this:
```python
import logging

import neps


# 1. Define a function that accepts hyperparameters and computes the validation error
def run_pipeline(hyperparameter_a: float, hyperparameter_b: int):
    validation_error = -hyperparameter_a * hyperparameter_b
    return validation_error


# 2. Define a search space of hyperparameters; use the same names as in run_pipeline
pipeline_space = dict(
    hyperparameter_a=neps.FloatParameter(lower=0, upper=1),
    hyperparameter_b=neps.IntegerParameter(lower=1, upper=100),
)

# 3. Call neps.run to optimize run_pipeline over pipeline_space
logging.basicConfig(level=logging.INFO)
neps.run(
    run_pipeline=run_pipeline,
    pipeline_space=pipeline_space,
    working_directory="usage_example",
    max_evaluations_total=5,
)
```
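For intuition about what the optimizer minimizes, the toy objective above can be probed with a plain random search over the same bounds. This is only an illustrative sketch: `neps.run` replaces this loop with its own search strategies, bookkeeping, and parallelization.

```python
import random

def run_pipeline(hyperparameter_a: float, hyperparameter_b: int) -> float:
    # Same toy objective as in the usage example: lower is better.
    return -hyperparameter_a * hyperparameter_b

# Naive random search over the same bounds as pipeline_space.
random.seed(0)
best = min(
    run_pipeline(random.uniform(0, 1), random.randint(1, 100))
    for _ in range(5)
)

# Within these bounds the optimum is a=1, b=100, giving -100,
# so every sampled result lies in [-100, 0].
assert -100 <= best <= 0
```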
### More examples

For more usage examples of neps features, have a look at neps_examples.
## Status information

To show status information about a neural pipeline search, use

```bash
python -m neps.status WORKING_DIRECTORY
```

If you need more status information than is printed by default (e.g., the best config over time), have a look at

```bash
python -m neps.status --help
```

To show the status repeatedly, on unix systems you can use

```bash
watch --interval 30 python -m neps.status WORKING_DIRECTORY
```
## Parallelization

To run a neural pipeline search with multiple processes or multiple machines, simply call `neps.run` multiple times. All calls to `neps.run` need to use the same `working_directory` on the same filesystem; otherwise there is no synchronization between the `neps.run` instances.
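As a sketch of this pattern, suppose the usage example above is saved as `worker.py` (a hypothetical filename; any script calling `neps.run` with the shared `working_directory` works). Several workers can then be launched on one machine like so:

```shell
# Hypothetical launcher: start three workers that coordinate through
# the files neps keeps in the shared working_directory.
for i in 1 2 3; do
    python worker.py &
done
wait  # block until all workers have finished
```

On multiple machines, the same idea applies as long as the `working_directory` lives on a filesystem all machines can access.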
## Contributing

Please see our guidelines and guides for contributors at CONTRIBUTING.md.