Machine learning prediction serving
ServeIt lets you easily serve model predictions and supplementary information from a RESTful API. Current features include:
- Model inference serving via a RESTful API endpoint
- Extensible library for inference-time data loading, preprocessing, input validation, and postprocessing
- Supplementary information endpoint creation
- Automatic JSON serialization of responses
- Configurable request and response logging (work in progress)
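The preprocessing, validation, and postprocessing hooks above can be plain Python callables. A minimal sketch of the kind of functions you might plug in (the function names and the four-feature list-of-lists input shape are illustrative assumptions here, not ServeIt's required signatures):

```python
def validate_input(data):
    """Reject requests that are not a non-empty list of 4-feature rows."""
    if not isinstance(data, list) or len(data) == 0:
        return False
    return all(isinstance(row, list) and len(row) == 4 for row in data)

def preprocess(data):
    """Coerce all feature values to float before prediction."""
    return [[float(value) for value in row] for row in data]

def postprocess(predictions):
    """Map integer class predictions to human-readable labels."""
    labels = {0: 'setosa', 1: 'versicolor', 2: 'virginica'}
    return [labels[int(p)] for p in predictions]
```

Callables like these could be wired into the server so that each incoming request is validated and transformed before it reaches your model's predict method, and each prediction is cleaned up before serialization.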
Installation
ServeIt supports Python 2.7 and Python 3.6. Installation is easy with pip:
pip install serveit
Usage
Deploy your model clf to an API endpoint with as little as one line of code:
from serveit.server import ModelServer
# initialize server with a model and a method to use for predictions
# then start serving predictions
ModelServer(clf, clf.predict).serve()
Your new API is now accepting POST requests at localhost:5000/predictions! Please see the examples directory for additional usage.
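Once the server is running, clients can POST JSON feature arrays to the endpoint. A sketch with curl (the payload shape assumes a model trained on four-feature rows, such as the iris dataset; adjust it to your model's expected input):

```shell
# POST a single observation to the running ServeIt endpoint
curl -X POST http://localhost:5000/predictions \
     -H "Content-Type: application/json" \
     -d '[[5.1, 3.5, 1.4, 0.2]]'
```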
Supported libraries
- Scikit-Learn
- Keras
- PyTorch

Coming soon:
- TensorFlow
Building
You can build and install locally with: python setup.py install
License
Hashes for ServeIt-0.0.6a1-py2.py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 58561a6785aabe3f4a959f716ebd2d4e9fb7fa2cd86b6e28d8ce8f8c604f4823
MD5 | 154db031902ca57cf1d9f70b9a5f449b
BLAKE2b-256 | 637e716cc63c03dcda075164026e6f8fdb1d5fd3a92549e1e935f1e1de012a15