Imaginary Dev OpenAI wrapper
Wrapper library for openai to send events to the Imaginary Programming monitor
- Free software: MIT license
- Documentation: https://im-openai.readthedocs.io.
Features
- Patches the openai library to allow users to set an `ip_api_key` and `ip_api_name` for each request
- Works out of the box with langchain
Get Started
To send events to Imaginary Programming, you'll need to create a project. From the project you'll need two things:
- API key: This is generated for the project and is used to identify the project and environment (dev, staging, prod) that the event is coming from.
- API Name: This uniquely identifies a particular prompt that you are using. This allows projects to have multiple prompts. You do not need to generate this in advance: if the API name does not exist, it will be created automatically. It can be in any format, but we recommend a dash-separated format, e.g. `my-prompt-name`.
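
Both values are passed straight to the wrapper. For example, a minimal sketch with a placeholder key, using the `my-prompt-name` api name from above:

```python
from im_openai import patched_openai

# api_key identifies the project and environment; api_name identifies the prompt
with patched_openai(api_key="...", api_name="my-prompt-name"):
    ...  # make OpenAI calls here
```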
OpenAI
You can use the `patched_openai` context manager to patch your code.
To allow our tools to separate the "prompt" from the "prompt parameters", use `TemplateChat` and `TemplateText` to create templates.
Use `TemplateChat` for the ChatCompletion APIs:
```python
from im_openai import patched_openai, TemplateChat

with patched_openai(api_key="...", api_name="sport-emoji"):
    import openai

    completion = openai.ChatCompletion.create(
        # Standard OpenAI parameters
        model="gpt-3.5-turbo",
        messages=TemplateChat(
            [{"role": "user", "content": "Show me an emoji that matches the sport: {sport}"}],
            {"sport": "soccer"},
        ),
    )
```
Use `TemplateText` for the Completion API:
```python
from im_openai import patched_openai, TemplateText

with patched_openai(api_key="...", api_name="sport-emoji"):
    import openai

    completion = openai.Completion.create(
        # Standard OpenAI parameters
        model="text-davinci-003",
        prompt=TemplateText("Show me an emoji that matches the sport: {sport}", {"sport": "soccer"}),
    )
```
Advanced usage
Patching at startup
Rather than using a context manager, you can patch the library once at startup:
```python
from im_openai import patch_openai

patch_openai(api_key="...")
```
Then, you can use the patched library as normal:
```python
import openai

completion = openai.ChatCompletion.create(
    # Standard OpenAI parameters
    ...
)
```
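
For instance, a sketch combining startup patching with the templates shown earlier, assuming `patch_openai` accepts the same `api_name` parameter as the `patched_openai` context manager:

```python
import openai
from im_openai import patch_openai, TemplateChat

# Patch once at startup; api_name is assumed to be accepted here as well
patch_openai(api_key="...", api_name="sport-emoji")

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=TemplateChat(
        [{"role": "user", "content": "Show me an emoji that matches the sport: {sport}"}],
        {"sport": "soccer"},
    ),
)
```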
Manually passing parameters
While the use of `TemplateText` and `TemplateChat` is preferred, most of the parameters passed during patching can also be passed directly to `create()`, with an `ip_` prefix:
```python
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    # Note we are passing the raw chat object here
    messages=[{"role": "user", "content": "Show me an emoji that matches the sport: soccer"}],
    # call configuration
    ip_api_key="...",
    ip_api_name="sport-emoji",
    # Here the prompt and the parameters are passed separately
    ip_template_params={"sport": "soccer"},
    ip_template_chat=[
        {"role": "user", "content": "Show me an emoji that matches the sport: {sport}"}
    ],
)
```
Langchain
For langchain, you can patch directly or use a context manager before setting up a chain.

Using a context manager (recommended):
```python
from im_openai.langchain import prompt_watch_tracing

with prompt_watch_tracing("4b2a6608-86cd-4819-aba6-479f9edd8bfb", "sport-emoji"):
    chain = LLMChain(llm=...)
    chain.run(name="world")
```
The `api_key` parameter is visible on your project's settings page.

The `api_name` parameter can also be passed directly to a template when you create it, so that it can be tracked separately from other templates:
```python
from langchain import OpenAI, PromptTemplate, LLMChain

with prompt_watch_tracing("4b2a6608-86cd-4819-aba6-479f9edd8bfb", "default-questions"):
    prompt = PromptTemplate(
        template="Please answer the following question: {question}.",
        input_variables=["question"],
    )
    llm = LLMChain(prompt=prompt, llm=OpenAI())
    llm.run(question="What is the meaning of life?")

    # Track user greetings separately under the `user-greeting` api name
    greeting_prompt = PromptTemplate(
        template="Please greet our newest forum member, {user}. Be nice and enthusiastic but not overwhelming.",
        input_variables=["user"],
        additional_kwargs={"ip_api_name": "user-greeting"},
    )
    llm = LLMChain(prompt=greeting_prompt, llm=OpenAI(openai_api_key=...))
    llm.run(user="Bob")
```
Advanced usage
You can patch directly:
```python
from im_openai.langchain import enable_prompt_watch_tracing, disable_prompt_watch_tracing
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

old_tracer = enable_prompt_watch_tracing("emojification", "sport-emoji")

template_chat = ChatPromptTemplate.from_messages([{
    "role": "user",
    "content": "Show me an emoji that matches the sport: {sport}"
}])
chain = LLMChain(llm=ChatOpenAI(), prompt=template_chat)
chain.run(sport="Soccer")

# optional, if you need to disable tracing later
disable_prompt_watch_tracing(old_tracer)
```
Additional Parameters
Each of the above APIs accepts the same additional parameters. The OpenAI API requires an `ip_` prefix for each parameter.

- `template_chat` / `ip_template_chat`: The chat template to use for the request. This is a list of dictionaries with the following keys:
  - `role`: The role of the speaker: `"system"`, `"user"`, or `"ai"`.
  - `content`: The content of the message. This can be a string or a template string with `{}` placeholders.

  For example: `[{"role": "ai", "content": "Hello, I'm {system_name}!"}, {"role": "user", "content": "Hi {system_name}, I'm {user_name}!"}]`

  To represent an array of chat messages, use the artificial role `"chat_history"` with `content` set to the variable name in substitution format: `[{"role": "chat_history", "content": "{prev_messages}"}]`
- `template_text` / `ip_template_text`: The text template to use for completion-style requests. This is a string or a template string with `{}` placeholders, e.g. `"Hello, {user_name}!"`.
- `chat_id` / `ip_chat_id`: The id of a "chat session": if the chat API is being used in a conversational context, the same chat id can be provided so that the events are grouped together, in order. If not provided, this will be left blank. (See the sketch after this list.)

These parameters are only available in the patched OpenAI client:

- `ip_template_params`: The parameters to use for template strings. This is a dictionary of key-value pairs. Note: this value is inferred in the langchain wrapper.
- `ip_event_id`: A unique UUID for a specific call. If not provided, one will be generated. Note: in the langchain wrapper, this value is inferred from the `run_id`.
- `ip_parent_event_id`: The UUID of the parent event. If not provided, one will be generated. Note: in the langchain wrapper, this value is inferred from the `parent_run_id`.
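
For example, a minimal sketch of grouping two turns of a conversation under one chat session with `ip_chat_id` (the id value and prompts here are illustrative):

```python
import uuid

import openai
from im_openai import patched_openai, TemplateChat

# One id for the whole conversation (illustrative); any stable string works
chat_id = str(uuid.uuid4())

with patched_openai(api_key="...", api_name="sport-emoji"):
    # First turn of the session
    first = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=TemplateChat(
            [{"role": "user", "content": "Show me an emoji that matches the sport: {sport}"}],
            {"sport": "soccer"},
        ),
        ip_chat_id=chat_id,
    )
    # Later turns pass the same ip_chat_id, so the monitor groups the events in order
    ...
```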
Credits
This package was created with Cookiecutter (https://github.com/audreyr/cookiecutter) and the audreyr/cookiecutter-pypackage project template (https://github.com/audreyr/cookiecutter-pypackage).