GPT4All-J

Python bindings for the C++ port of the GPT4All-J model.
Installation

```shell
pip install gpt4all-j
```
Download the model from here.
Usage

```python
from gpt4allj import Model

model = Model('/path/to/ggml-gpt4all-j.bin')

print(model.generate('AI is going to'))
```
If you are getting an illegal instruction error, try passing instructions='avx' or instructions='basic':

```python
model = Model('/path/to/ggml-gpt4all-j.bin', instructions='avx')
```
If it runs slowly, try building the C++ library from source (see C++ Library below). Learn more
Parameters

```python
model.generate(prompt,
               seed=-1,
               n_threads=-1,
               n_predict=200,
               top_k=40,
               top_p=0.9,
               temp=0.9,
               n_batch=8,
               repeat_penalty=1.0,
               repeat_last_n=64,
               callback=None)
```
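As a rough illustration of what the top_k and top_p sampling parameters control (this is a conceptual sketch, not the library's actual implementation): top_k keeps only the k most probable next tokens, while top_p keeps the smallest set of tokens whose cumulative probability reaches p.

```python
# Conceptual demo of top-k / top-p (nucleus) filtering.
# NOT the library's internal code -- just an illustration of the idea.

def top_k_filter(probs, k):
    """Keep the k most probable (token, prob) pairs."""
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]

def top_p_filter(probs, p):
    """Keep the smallest set of top tokens whose cumulative probability >= p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        total += prob
        if total >= p:
            break
    return kept

probs = {'cat': 0.5, 'dog': 0.3, 'fish': 0.15, 'axolotl': 0.05}
print(top_k_filter(probs, 2))    # [('cat', 0.5), ('dog', 0.3)]
print(top_p_filter(probs, 0.9))  # [('cat', 0.5), ('dog', 0.3), ('fish', 0.15)]
```

Lower top_k / top_p values make output more focused and repetitive; higher values (together with temp) make it more diverse.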
callback

If a callback function is passed to model.generate(), it will be called once for each generated token. To stop generating more tokens, return False inside the callback function.
```python
def callback(token):
    print(token)

model.generate('AI is going to', callback=callback)
```
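Because generation stops as soon as the callback returns False, a callback can also cap the number of tokens produced. A small sketch of that pattern (the StopOnLimit class is our own illustration; only the return-False contract comes from the library):

```python
# Hypothetical helper: stop generation after max_tokens tokens.
class StopOnLimit:
    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.tokens = []

    def __call__(self, token):
        self.tokens.append(token)
        # Returning False tells model.generate() to stop.
        return len(self.tokens) < self.max_tokens

stop = StopOnLimit(max_tokens=3)
# model.generate('AI is going to', callback=stop)  # requires the model file

# Simulate what generate() does: invoke the callback once per token.
for tok in ['AI', ' is', ' going', ' to']:
    if not stop(tok):
        break
print(stop.tokens)  # ['AI', ' is', ' going']
```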
C++ Library
To build the C++ library from source, please see gptj.cpp. Once you have built the shared libraries, you can use them as:

```python
from gpt4allj import Model, load_library

lib = load_library('/path/to/libgptj.so', '/path/to/libggml.so')

model = Model('/path/to/ggml-gpt4all-j.bin', lib=lib)
```
License
Source Distribution

gpt4all-j-0.2.3.tar.gz (1.8 MB)