
Project description


Python bindings for the C++ port of the GPT4All-J model.

Installation

pip install gpt4all-j

Download the model from here.

Usage

from gpt4allj import Model

model = Model('/path/to/ggml-gpt4all-j.bin')

print(model.generate('AI is going to'))

Run in Google Colab

If you are getting an illegal instruction error, try using instructions='avx' or instructions='basic':

model = Model('/path/to/ggml-gpt4all-j.bin', instructions='avx')

If it is running slowly, try building the C++ library from source. Learn more

Parameters

model.generate(prompt,
               seed=-1,
               n_threads=-1,
               n_predict=200,
               top_k=40,
               top_p=0.9,
               temp=0.9,
               n_batch=8,
               callback=None)
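
Any of these defaults can be overridden per call. As a minimal sketch, assuming the parameters follow the usual ggml-style conventions (seed=-1 picks a random seed, n_threads=-1 auto-detects the thread count, n_predict caps the number of generated tokens, and top_k/top_p/temp control sampling):

# Sketch: shorter, more deterministic output.
# Parameter semantics assumed to be the standard ggml-style sampling options.
output = model.generate('AI is going to',
                        seed=42,       # fixed seed for reproducible output (assumed)
                        n_predict=64,  # generate at most 64 tokens (assumed)
                        top_k=20,
                        top_p=0.8,
                        temp=0.5,
                        n_threads=4)   # number of CPU threads (assumed)
print(output)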

callback

If a callback function is passed to model.generate(), it will be called once for each generated token. To stop generating more tokens, return False from the callback function.

def callback(token):
    print(token)

model.generate('AI is going to', callback=callback)
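
For example, to stop generation early, the callback can return False once some condition is met, as described above. A minimal sketch (the 10-token cutoff is arbitrary and chosen only for illustration):

tokens = []

def stop_after_ten(token):
    tokens.append(token)
    print(token, end='', flush=True)
    # Returning False tells generate() to stop producing more tokens.
    return len(tokens) < 10

model.generate('AI is going to', callback=stop_after_ten)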

C++ Library

To build the C++ library from source, please see gptj.cpp. Once you have built the shared libraries, you can use them as follows:

from gpt4allj import Model, load_library

lib = load_library('/path/to/libgptj.so', '/path/to/libggml.so')

model = Model('/path/to/ggml-gpt4all-j.bin', lib=lib)

License

MIT
