
mlctl

mlctl is the control plane for MLOps: a Command Line Interface (CLI) and Software Development Kit (SDK) for managing the ML lifecycle. It allows operations such as training and deployment to be controlled via a simple-to-use command line interface. Additionally, mlctl provides an SDK for use in a notebook environment and employs an extensible mechanism for plugging in various back-end providers, such as SageMaker.

The following ML lifecycle operations are currently supported via mlctl:

  • train - operations related to model training
  • host - operations related to hosting a model for online inference
  • batch inference - operations for running model inference in batch mode

Getting Started

Installation

  1. (Optional) Create a new virtual environment for mlctl

    pip install virtualenv
    virtualenv ~/envs/mlctl
    source ~/envs/mlctl/bin/activate
    
  2. Install mlctl:

    pip install mlctl
    
  3. Or, to upgrade an existing installation:

    pip install --upgrade mlctl
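
To verify the installation, print the CLI help:

    mlctl --help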
    

Usage

Optional Setup

mlctl requires users to specify a plugin and a profile/credentials file for authenticating operations. These values can either be stored as environment variables, as shown below, or passed as command line options. Use --help for more details.

```
export PLUGIN=
export PROFILE=
```
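
For example, to use the SageMaker back end with an AWS credentials profile named default (both values here are illustrative; substitute your own plugin name and profile):

```
export PLUGIN=sagemaker
export PROFILE=default
```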

Commands

mlctl CLI commands have the following structure:

mlctl <command> <subcommand> [OPTIONS]

To view help documentation, run the following:

mlctl --help
mlctl <command> --help
mlctl <command> <subcommand> --help

Initialize ML Model


mlctl init [OPTIONS]
| Option | Description |
| ------ | ----------- |
| --template or -t | (Optional) GitHub location of the project template. |
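
For example, scaffolding a project from a template repository (the URL is a placeholder, not a real repository):

mlctl init -t https://github.com/<org>/<template-repo>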

Training Commands


mlctl train <subcommand> [OPTIONS]
| Subcommand | Description |
| ---------- | ----------- |
| start | Train a model |
| stop | Stop an ongoing training job |
| info | Get training job information |
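
A sketch of a typical training sequence (job-specific options depend on the configured plugin; run mlctl train start --help for the actual flags):

```
mlctl train start   # launch a training job
mlctl train info    # check its status
mlctl train stop    # cancel it if needed
```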

Hosting Commands


mlctl hosting <subcommand> [OPTIONS]
| Subcommand | Description |
| ---------- | ----------- |
| create | Create a model from a trained model artifact |
| deploy | Deploy a model to create an endpoint for inference |
| undeploy | Undeploy a model |
| info | Get endpoint information |
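
A sketch of the hosting flow after a training job completes (plugin-specific options omitted; see each subcommand's --help):

```
mlctl hosting create    # register the trained artifact as a model
mlctl hosting deploy    # stand up an inference endpoint
mlctl hosting info      # inspect the endpoint
mlctl hosting undeploy  # tear the endpoint down
```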

Batch Inference Commands


mlctl batch <subcommand> [OPTIONS]
| Subcommand | Description |
| ---------- | ----------- |
| start | Perform batch inference |
| stop | Stop an ongoing batch inference job |
| info | Get batch inference job information |
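
As with training, a minimal sketch (options depend on the plugin; see --help):

```
mlctl batch start   # kick off a batch inference job
mlctl batch info    # monitor its progress
```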

Examples
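
A minimal end-to-end sketch, assuming PLUGIN and PROFILE are exported as described above. Every value here is illustrative, and each step accepts plugin-specific options (see --help):

```
# Scaffold a project from a template (placeholder URL)
mlctl init -t https://github.com/<org>/<template-repo>

# Train a model and check on the job
mlctl train start
mlctl train info

# Host the trained model behind an online endpoint...
mlctl hosting create
mlctl hosting deploy

# ...or run offline scoring instead
mlctl batch start
```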

Contributing

For information on how to contribute to mlctl, please read through the contributing guidelines.
