

Project description

GenAI Techne System (gtsystem)

A low-code Python package for crafting GenAI applications quickly

GenAI Techne is on a mission to help enterprises and professionals excel in the craft of Generative AI. Check out the GenAI Techne Substack, where you can read more about our mission, read the gtsystem documentation, learn from step-by-step tutorials, and influence the roadmap of gtsystem for your use cases.


Getting Started

To get started with the gtsystem package, follow these steps.

Step 1. Install the gtsystem package using pip install gtsystem

Step 2. Open a Jupyter notebook and try this sample.

from gtsystem import openai, bedrock

prompt = 'How many faces does a tetrahedron have?'

# Ask the same question of OpenAI GPT and the Bedrock hosted Llama and Claude models
openai.gpt_text(prompt)
bedrock.llama_text(prompt)
bedrock.claude_text(prompt)

Features and Notebook Samples

You can read more about the vision behind gtsystem in the GenAI Techne Substack post.


You can learn the gtsystem API by following along with the notebook samples included in this repo.

01-evaluate.ipynb for single-statement prompt evaluations across multiple models, including OpenAI GPT-4 and Bedrock hosted Claude 2.1 and Llama 2.

02-render.ipynb for well-formatted rendering of model responses.

03-tasks.ipynb for loading evaluation tasks - find, list, and load prompts by task, including optional parameter values for temperature and TopP.

04-instrument.ipynb for instrumenting and comparing multiple models on response latency and size (a minimal timing sketch follows this list).

05-benchmark.ipynb for automated benchmarking of response quality from models like Llama and Claude, using GPT-4 as an LLM evaluator.
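As a rough companion to 04-instrument.ipynb, the snippet below times the same prompt against the models from Getting Started using only the Python standard library. The measure helper is hypothetical and not part of gtsystem; the notebook itself uses gtsystem's own instrumentation.

import time
from gtsystem import openai, bedrock

prompt = 'How many faces does a tetrahedron have?'

# Hypothetical helper: time one model call and report latency and response size
def measure(name, fn):
    start = time.perf_counter()
    response = fn(prompt)
    elapsed = time.perf_counter() - start
    print(f'{name}: {elapsed:.2f}s, {len(str(response))} characters')

measure('gpt', openai.gpt_text)
measure('llama', bedrock.llama_text)
measure('claude', bedrock.claude_text)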

Amazon Bedrock Setup

To use Amazon Bedrock hosted models like Llama and Claude, follow these steps.

Step 1. Log in to the AWS Console > Launch Identity and Access Management (IAM) > Create a user for Command-Line Interface (CLI) access. Read the Bedrock documentation for more details.

Step 2. Install the AWS CLI > Run aws configure in a terminal > Add the credentials from Step 1.
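Before calling the bedrock module, you can confirm that the credentials from aws configure are visible to the AWS SDK. This check uses boto3 directly (assumed to be installed alongside the AWS tooling); it is not a gtsystem API.

import boto3

# Confirm the credentials added via aws configure resolve to an AWS account
print(boto3.client('sts').get_caller_identity()['Account'])

# List Bedrock foundation models available in the configured region
# (model access must be enabled for your account in the Bedrock console)
for model in boto3.client('bedrock').list_foundation_models()['modelSummaries']:
    print(model['modelId'])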

Ollama Setup

To use Ollama-provided LLMs locally on your laptop, follow these steps.

Step 1. Download Ollama. Note the memory requirements for each model: 7B models generally require at least 8 GB of RAM, 13B models at least 16 GB, and 70B models at least 64 GB.

Step 2. Find a model in the Ollama library > Run the command in a terminal to download and run the model. Currently gtsystem supports popular models like llama2, mistral, and phi.
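gtsystem's Ollama helpers are not shown above, so as a quick sanity check that the local Ollama server is running and a model has been pulled, you can call Ollama's REST API directly (the endpoint and payload follow Ollama's documented /api/generate interface; streaming is disabled to get a single JSON reply):

import requests

# One-off, non-streaming generation request to the local Ollama server
reply = requests.post(
    'http://localhost:11434/api/generate',
    json={'model': 'llama2', 'prompt': 'How many faces does a tetrahedron have?', 'stream': False}
)
print(reply.json()['response'])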

OpenAI Setup

To use OpenAI models, follow these steps.

Step 1. Sign up for OpenAI API access and get an API key.

Step 2. Add your OpenAI API key to ~/.zshrc or ~/.bashrc using export OPENAI_API_KEY="your-key-here"
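Before calling openai.gpt_text, it can help to confirm the key is visible to the Python process. The check below assumes gtsystem reads the key from the OPENAI_API_KEY environment variable exported above.

import os
from gtsystem import openai

# The key exported in ~/.zshrc or ~/.bashrc should be visible in this environment
assert os.environ.get('OPENAI_API_KEY'), 'OPENAI_API_KEY is not set in this shell'

print(openai.gpt_text('How many faces does a tetrahedron have?'))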

