GPT4All-J

Python bindings for the C++ port of the GPT4All-J model.

Please migrate to the ctransformers library, which supports more models and has more features.

Installation

pip install gpt4all-j

Download the model from here.

Usage

from gpt4allj import Model

model = Model('/path/to/ggml-gpt4all-j.bin')

print(model.generate('AI is going to'))

Run in Google Colab

If you get an illegal instruction error, try using instructions='avx' or instructions='basic':

model = Model('/path/to/ggml-gpt4all-j.bin', instructions='avx')

If generation is slow, try building the C++ library from source (see the C++ Library section below).

Parameters

model.generate(prompt,
               seed=-1,
               n_threads=-1,
               n_predict=200,
               top_k=40,
               top_p=0.9,
               temp=0.9,
               repeat_penalty=1.0,
               repeat_last_n=64,
               n_batch=8,
               reset=True,
               callback=None)
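The sampling parameters above follow the usual llama.cpp-style conventions. As a rough, library-independent sketch of what top_k, top_p, and temp mean (this is the standard technique these names refer to, not the library's actual C++ implementation):

```python
import math

def sample_filter(logits, top_k=40, top_p=0.9, temp=0.9):
    """Illustrative top-k / top-p filtering over raw logits.

    Returns the (token_id, probability) pairs a sampler would draw
    from. A sketch of the standard technique only; the real sampler
    lives in the C++ library.
    """
    # temp: divide logits before softmax; lower values sharpen the
    # distribution, higher values flatten it.
    scaled = [l / temp for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = sorted(((i, e / total) for i, e in enumerate(exps)),
                   key=lambda pair: pair[1], reverse=True)
    # top_k: keep only the k most likely tokens.
    probs = probs[:top_k]
    # top_p: keep the smallest prefix whose cumulative mass
    # reaches top_p (nucleus sampling).
    kept, mass = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break
    # Renormalise over the surviving tokens.
    total = sum(p for _, p in kept)
    return [(tok, p / total) for tok, p in kept]
```

repeat_penalty and repeat_last_n additionally down-weight tokens that already appeared in the last repeat_last_n tokens of output, discouraging loops.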

reset

If True, context will be reset. To keep the previous context, use reset=False.

model.generate('Write code to sort numbers in Python.')
model.generate('Rewrite the code in JavaScript.', reset=False)

callback

If a callback function is passed, it will be called once for each generated token. To stop generating more tokens, return False from the callback function.

def callback(token):
    print(token)

model.generate('AI is going to', callback=callback)
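For example, a callback can cut generation off after a fixed number of tokens. The helper below is plain Python; only the commented model.generate call assumes the gpt4allj API shown above:

```python
def make_limited_callback(max_tokens):
    """Return a callback that prints tokens and stops after max_tokens."""
    state = {'count': 0}

    def callback(token):
        print(token, end='')
        state['count'] += 1
        # Returning False tells generate() to stop producing tokens.
        return state['count'] < max_tokens

    return callback

# Usage with the model from the examples above:
# model.generate('AI is going to', callback=make_limited_callback(20))
```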

LangChain

LangChain is a framework for developing applications powered by language models. A LangChain LLM object for the GPT4All-J model can be created using:

from gpt4allj.langchain import GPT4AllJ

llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin')

print(llm('AI is going to'))

If you get an illegal instruction error, try using instructions='avx' or instructions='basic':

llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin', instructions='avx')

It can be used with other LangChain modules:

from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer:"""

prompt = PromptTemplate(template=template, input_variables=['question'])

llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run('What is AI?'))

Parameters

llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin',
               seed=-1,
               n_threads=-1,
               n_predict=200,
               top_k=40,
               top_p=0.9,
               temp=0.9,
               repeat_penalty=1.0,
               repeat_last_n=64,
               n_batch=8,
               reset=True)

C++ Library

To build the C++ library from source, please see gptj.cpp. Once you have built the shared libraries, you can use them as:

from gpt4allj import Model, load_library

lib = load_library('/path/to/libgptj.so', '/path/to/libggml.so')

model = Model('/path/to/ggml-gpt4all-j.bin', lib=lib)

License

MIT
