allennlp-runmodel
Run an AllenNLP trained model, and serve it with a web API.
Usage
Run the program
Execute the program in a terminal; the --help option prints a help message:
$ allennlp-runmodel --help
usage: allennlp-runmodel [-h] [--version] [--log-conf LOG_CONF] [--host HOST]
                         [--port PORT] [--path PATH]
                         [--predictor-name PREDICTOR_NAME]
                         [--cuda-device CUDA_DEVICE]
                         [--max-workers MAX_WORKERS]
                         archive

Run a AllenNLP trained model, and serve it with WebAPI.

positional arguments:
  archive               The archive file to load the model from.

optional arguments:
  -h, --help            show this help message and exit
  --version             show program's version number and exit
  --log-conf LOG_CONF, -l LOG_CONF
                        Path to logging configuration file (INI, JSON or YAML)
                        (default=None)
  --host HOST, -s HOST  TCP/IP host or a sequence of hosts for HTTP server.
                        Default is "0.0.0.0" if port has been specified or if
                        path is not supplied.
  --port PORT, -p PORT  TCP/IP port for HTTP server. Default is 8080.
  --path PATH, -t PATH  File system path for HTTP server Unix domain socket.
                        Listening on Unix domain sockets is not supported by
                        all operating systems.
  --predictor-name PREDICTOR_NAME, -n PREDICTOR_NAME
                        Optionally specify which `Predictor` subclass;
                        otherwise, the default one for the model will be used.
  --cuda-device CUDA_DEVICE, -c CUDA_DEVICE
                        If CUDA_DEVICE is >= 0, the model will be loaded onto
                        the corresponding GPU. Otherwise it will be loaded
                        onto the CPU. (default=-1)
  --max-workers MAX_WORKERS, -w MAX_WORKERS
                        Uses a pool of at most max_workers threads to execute
                        calls asynchronously. Default to the number of
                        processors on the machine, multiplied by 5.
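For example, to serve a model archive on all interfaces at the default port with a pool of five worker threads (the archive path below is a placeholder; substitute the path to your own AllenNLP model.tar.gz):

```shell
# The archive path is hypothetical; point it at your own trained model.
allennlp-runmodel \
    --host 0.0.0.0 \
    --port 8080 \
    --max-workers 5 \
    /path/to/model.tar.gz
```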
Make a prediction from an HTTP client
curl \
--header "Content-Type: application/json" \
--request POST \
--data '{"premise":"Two women are embracing while holding to go packages.","hypothesis":"The sisters are hugging goodbye while holding to go packages after just eating lunch."}' \
http://localhost:8080/
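The same request can be made from Python. The sketch below uses only the standard library; the endpoint URL assumes the server was started with the default host and port, and the exact shape of the returned JSON depends on the model's Predictor.

```python
import json
from urllib import request

SERVICE_URL = "http://localhost:8080/"  # assumes the server's default host/port


def build_request(premise: str, hypothesis: str,
                  url: str = SERVICE_URL) -> request.Request:
    """Build a POST request carrying the JSON body the service expects."""
    payload = json.dumps({"premise": premise, "hypothesis": hypothesis})
    return request.Request(
        url,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def predict(premise: str, hypothesis: str) -> dict:
    """Send the request and return the model's prediction as a dict."""
    with request.urlopen(build_request(premise, hypothesis)) as resp:
        return json.load(resp)


# Example (requires the service to be running):
#   predict("Two women are embracing while holding to go packages.",
#           "The sisters are hugging goodbye while holding to go packages "
#           "after just eating lunch.")
```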
Source Distribution
allennlp-runmodel-0.1.tar.gz (9.3 kB)
Hashes for allennlp_runmodel-0.1-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | ccc6474dce5d6a963c368a8433796718af03f6dec37f12e33df104d1acede803
MD5 | 335e0ba910b02dc92f4fbf70ddbbf44b
BLAKE2b-256 | 93ab1147408d9c29341e732eff30ddcca511ee9f0eef47b5bc7dbae5efd51ebe