

Fast Sentence Transformers

This repository contains code to run faster sentence-transformers using tools like quantization and ONNX. Run your model much faster while using much less memory. There is not much to it!


Install

pip install fast-sentence-transformers

Or, for GPU support:

pip install fast-sentence-transformers[gpu]

Quickstart

from fast_sentence_transformers import FastSentenceTransformer as SentenceTransformer

# use any sentence-transformer
encoder = SentenceTransformer("all-MiniLM-L6-v2", device="cpu", quantize=True)

encoder.encode("Hello hello, hey, hello hello")
encoder.encode(["Life is too short to eat bad food!"] * 2)
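Embeddings returned by `encode` are typically compared with cosine similarity. A minimal sketch in pure Python, using illustrative placeholder vectors in place of real model output (the `cosine_similarity` helper is not part of this package):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# placeholder 4-dim vectors; real embeddings come from encoder.encode(...)
emb_a = [0.1, 0.3, 0.5, 0.1]
emb_b = [0.2, 0.1, 0.4, 0.3]
print(cosine_similarity(emb_a, emb_b))
```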

Benchmark

An indicative CPU benchmark with the smallest and largest models available on sentence-transformers. Note that ONNX does not yet support quantization on GPU.

| model                                 | type   | default | ONNX | ONNX+quantized | ONNX+GPU |
|---------------------------------------|--------|---------|------|----------------|----------|
| paraphrase-albert-small-v2            | memory | 1x      | 1x   | 1x             | 1x       |
|                                       | speed  | 1x      | 2x   | 5x             | 20x      |
| paraphrase-multilingual-mpnet-base-v2 | memory | 1x      | 1x   | 4x             | 4x       |
|                                       | speed  | 1x      | 2x   | 5x             | 20x      |
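To reproduce such relative numbers on your own hardware, a simple timing harness can compare a baseline encoder against the fast one. The sketch below uses only the standard library; `mean_encode_time` and `dummy_encode` are illustrative names, not part of this package, and `dummy_encode` is a stand-in you would replace with `encoder.encode` from the quickstart:

```python
import time

def mean_encode_time(encode, sentences, repeats=5):
    # average wall-clock seconds per call over `repeats` runs
    start = time.perf_counter()
    for _ in range(repeats):
        encode(sentences)
    return (time.perf_counter() - start) / repeats

# stand-in encoder for illustration; swap in encoder.encode to benchmark for real
def dummy_encode(batch):
    return [[float(len(s))] for s in batch]

baseline = mean_encode_time(dummy_encode, ["Life is too short to eat bad food!"] * 32)
print(f"{baseline:.6f} s per batch")
```

Running the same harness once per configuration (default, `quantize=True`, GPU) and dividing the times gives the speedup factors shown in the table.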

Shout-Out

This package heavily leans on sentence-transformers and txtai.

