
fasttext-serving


fastText model serving service

Installation

You can download a prebuilt binary from the GitHub releases page, or install it using Cargo:

cargo install fasttext-serving

Using Docker:

docker pull messense/fasttext-serving
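When running the container, you will typically need to mount the model file into the container and publish the listen port. A sketch (the model path is illustrative, and passing CLI flags through the image's entrypoint is an assumption; the flags themselves are the ones listed under Usage below):

```shell
# Mount a local model into the container and expose the HTTP port
docker run -p 8000:8000 -v /path/to/model.bin:/model.bin \
    messense/fasttext-serving --model /model.bin --address 0.0.0.0 --port 8000
```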

Usage

$ fasttext-serving --help

USAGE:
    fasttext-serving [OPTIONS] --model <model>

FLAGS:
        --grpc       Serving gRPC API instead of HTTP API
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -a, --address <address>    Listen address [default: 127.0.0.1]
    -m, --model <model>        Model path
    -p, --port <port>          Listen port [default: 8000]
    -w, --workers <workers>    Worker thread count, defaults to CPU count
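For example, to serve a model over HTTP on all interfaces with the default port (the model path is illustrative):

```shell
fasttext-serving --model /path/to/model.bin --address 0.0.0.0 --port 8000
```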

Serve HTTP REST API

HTTP API endpoint:

POST /predict

The POST body should be a JSON array of strings, for example ["abc", "def"].

curl example:

$ curl -X POST -H 'Content-Type: application/json' \
     --data "[\"Which baking dish is best to bake a banana bread?\", \"Why not put knives in the dishwasher?\"]" \
     'http://localhost:8000/predict'
[[["baking"],[0.7152988]],[["equipment"],[0.73479545]]]
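The response is a JSON array with one [labels, probabilities] pair per input sentence. A minimal sketch of unpacking it on the client side, using the sample response above (the parse_predictions helper name is ours, not part of the API):

```python
import json

def parse_predictions(body):
    """Turn the server's [[labels], [probs]] pairs into lists of (label, prob) tuples."""
    return [list(zip(labels, probs)) for labels, probs in json.loads(body)]

# Sample response body from the curl call above:
body = '[[["baking"],[0.7152988]],[["equipment"],[0.73479545]]]'
print(parse_predictions(body))
# → [[('baking', 0.7152988)], [('equipment', 0.73479545)]]
```

Each inner list can hold more than one (label, probability) pair when the server is asked for multiple labels per sentence.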

Serve gRPC API

Run the command with --grpc to serve the gRPC API instead of the HTTP REST API.

Please refer to the gRPC Python client documentation.

License

This work is released under the MIT license. A copy of the license is provided in the LICENSE file.

