Imaginary Dev OpenAI wrapper

Wrapper library for openai to send events to the Imaginary Programming monitor

Features

  • Patches the openai library to allow users to set an ip_api_key and ip_api_name for each request
  • Works out of the box with langchain

Get Started

To send events to Imaginary Programming, you'll need to create a project. From the project you'll need two things:

  1. API key: This is generated for the project and is used to identify the project and environment (dev, staging, prod) that the event is coming from.
  2. API Name: This uniquely identifies a particular prompt that you are using. This allows projects to have multiple prompts. You do not need to generate this in advance: if the API name does not exist, then it will be created automatically. This can be in any format but we recommend using a dash-separated format, e.g. my-prompt-name.
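
As a quick preview, both values are passed together when patching the client. This is a minimal sketch; the key and name below are placeholders:

from im_openai import patched_openai

# Placeholder values: use your project's API key and a dash-separated
# API name of your choosing.
with patched_openai(api_key="your-api-key", api_name="my-prompt-name"):
    import openai
    # ... make OpenAI calls as usual ...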

OpenAI

You can use the patched_openai context manager to patch your code.

To allow our tools to separate the "prompt" from the "prompt parameters", use TemplateChat and TemplateText to create templates.

Use TemplateChat for the ChatCompletion API:

from im_openai import patched_openai, TemplateChat

with patched_openai(api_key="...", api_name="sport-emoji"):
    import openai

    completion = openai.ChatCompletion.create(
        # Standard OpenAI parameters
        model="gpt-3.5-turbo",
        messages=TemplateChat(
            [{"role": "user", "content": "Show me an emoji that matches the sport: {sport}"}],
            {"sport": "soccer"},
        ),
    )

Use TemplateText for the Completion API:

from im_openai import patched_openai, TemplateText

with patched_openai(api_key="...", api_name="sport-emoji"):
    import openai

    completion = openai.Completion.create(
        # Standard OpenAI parameters
        model="text-davinci-003",
        prompt=TemplateText("Show me an emoji that matches the sport: {sport}", {"sport": "soccer"}),
    )

Advanced usage

Patching at startup

Rather than using a context manager, you can patch the library once at startup:

from im_openai import patch_openai
patch_openai(api_key="...")

Then, you can use the patched library as normal:

import openai

completion = openai.ChatCompletion.create(
    # Standard OpenAI parameters
    ...
)

Manually passing parameters

While TemplateText and TemplateChat are preferred, most of the parameters passed during patching can also be passed directly to create(), with an ip_ prefix.

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",

    # Note we are passing the raw chat object here
    messages=[{"role": "user", "content": "Show me an emoji that matches the sport: soccer"}],

    # call configuration
    ip_api_key="...",
    ip_api_name="sport-emoji",

    # Here the prompt template and parameters are passed separately
    ip_template_params={"sport": "soccer"},
    ip_template_chat=[
        {"role": "user", "content": "Show me an emoji that matches the sport: {sport}"}
    ],
)

Langchain

For langchain, you can patch directly or use a context manager before setting up a chain:

Using a context manager (recommended):

from im_openai.langchain import prompt_watch_tracing

with prompt_watch_tracing("4b2a6608-86cd-4819-aba6-479f9edd8bfb", "sport-emoji"):
    chain = LLMChain(llm=...)
    chain.run("Hello world", inputs={"name": "world"})

The api_key parameter is available on your project's settings page.

The api_name parameter can also be passed directly to a template when you create it, so that it can be tracked separately from other templates:

from langchain import OpenAI, PromptTemplate, LLMChain
from im_openai.langchain import prompt_watch_tracing

with prompt_watch_tracing("4b2a6608-86cd-4819-aba6-479f9edd8bfb", "default-questions"):
    prompt = PromptTemplate(
        template="Please answer the following question: {question}.",
        input_variables=["question"])
    chain = LLMChain(prompt=prompt, llm=OpenAI())
    chain.run(question="What is the meaning of life?")

    # Track user greetings separately under the `user-greeting` api name
    greeting_prompt = PromptTemplate(
        template="Please greet our newest forum member, {user}. Be nice and enthusiastic but not overwhelming.",
        input_variables=["user"],
        additional_kwargs={"ip_api_name": "user-greeting"})
    greeting_chain = LLMChain(prompt=greeting_prompt, llm=OpenAI(openai_api_key=...))
    greeting_chain.run(user="Bob")

Advanced usage

You can patch directly:

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from im_openai.langchain import enable_prompt_watch_tracing, disable_prompt_watch_tracing

old_tracer = enable_prompt_watch_tracing("emojification", "sport-emoji")

template_chat = ChatPromptTemplate.from_messages([
    HumanMessagePromptTemplate.from_template(
        "Show me an emoji that matches the sport: {sport}")
])
chain = LLMChain(llm=ChatOpenAI(), prompt=template_chat)
chain.run(sport="Soccer")

# optional, if you need to disable tracing later
disable_prompt_watch_tracing(old_tracer)

Additional Parameters

Each of the above APIs accepts the same additional parameters. The OpenAI API requires an ip_ prefix on each parameter.

  • template_chat / ip_template_chat: The chat template to use for the request. This is a list of dictionaries with the following keys:

    • role: The role of the speaker. Either "system", "user" or "ai".
    • content: The content of the message. This can be a string or a template string with {} placeholders.

    For example:

    [
      {"role": "ai", "content": "Hello, I'm {system_name}!"},
      {"role": "user", "content": "Hi {system_name}, I'm {user_name}!"}
    ]
    

    To represent an array of chat messages, use the artificial role "chat_history" with content set to the variable name in substitution format: [{"role": "chat_history", "content": "{prev_messages}"}]

  • template_text / ip_template_text: The text template to use for completion-style requests. This is a string or a template string with {} placeholders, e.g. "Hello, {user_name}!".

  • chat_id / ip_chat_id: The id of a "chat session" - if the chat API is being used in a conversational context, then the same chat id can be provided so that the events are grouped together, in order. If not provided, this will be left blank.
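
For example, a minimal sketch of grouping calls into one chat session with the patched client (the generated id is just a placeholder; any identifier that stays stable across the conversation works):

import uuid

from im_openai import patched_openai, TemplateChat

chat_id = str(uuid.uuid4())  # reuse this id for every call in the session

with patched_openai(api_key="...", api_name="sport-emoji"):
    import openai

    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=TemplateChat(
            [{"role": "user", "content": "Show me an emoji that matches the sport: {sport}"}],
            {"sport": "soccer"},
        ),
        ip_chat_id=chat_id,  # groups this event with others in the same session
    )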

These parameters are only available in the patched OpenAI client:

  • ip_template_params: The parameters to use for template strings. This is a dictionary of key-value pairs. Note: This value is inferred in the Langchain wrapper.
  • ip_event_id: A unique UUID for a specific call. If not provided, one will be generated. Note: In the langchain wrapper, this value is inferred from the run_id.
  • ip_parent_event_id: The UUID of the parent event. If not provided, one will be generated. Note: In the langchain wrapper, this value is inferred from the parent_run_id.
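
Putting these together, a hedged sketch of a fully explicit patched-client call; the UUIDs are placeholders you would generate yourself or take from an earlier event:

import uuid

parent_id = str(uuid.uuid4())  # e.g. the event id of an enclosing operation

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Show me an emoji that matches the sport: soccer"}],
    ip_api_key="...",
    ip_api_name="sport-emoji",
    ip_template_chat=[
        {"role": "user", "content": "Show me an emoji that matches the sport: {sport}"}
    ],
    ip_template_params={"sport": "soccer"},
    ip_event_id=str(uuid.uuid4()),  # unique id for this specific call
    ip_parent_event_id=parent_id,   # ties this call to its parent event
)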

Credits

This package was created with Cookiecutter (https://github.com/audreyr/cookiecutter) and the audreyr/cookiecutter-pypackage (https://github.com/audreyr/cookiecutter-pypackage) project template.

History

0.1.0 (2023-06-20)

  • First release on PyPI.

0.1.1 (2023-06-23)

  • add TemplateString helper and support for data / params

0.1.2 (2023-06-23)

  • add support for original template too

0.2.0 (2023-06-26)

  • add explicit support for passing the "prompt template text"

0.3.0 (2023-06-28)

  • add support for chat templates (as objects instead of arrays)

0.4.0 (2023-06-29)

  • switch event reporting to be async / non-blocking

0.4.1 (2023-06-29)

  • add utility for formatting langchain messages

0.4.2 (2023-06-29)

  • remove stray breakpoint

0.4.3 (2023-06-30)

  • pass along chat_id
  • attempt to auto-convert langchain prompt templates

0.4.4 (2023-06-30)

  • remove stray prints

0.5.0 (2023-07-06)

  • Add langchain callbacks handlers

0.6.0 (2023-07-10)

  • Handle duplicate callbacks, agents, etc

0.6.1 (2023-07-12)

  • Fix prompt retrieval in deep chains

0.6.2 (2023-07-13)

  • Handle cases where input values are not strings

0.6.3 (2023-07-18)

  • Better support for server-generated event ids (pre-llm sends event, post-llm re-uses the same id)
  • more tests for different kinds of templates

0.6.4

  • include temporary patched version of loads()

0.7.0

  • breaking change: move im_openai.langchain_util to im_openai.langchain
  • add support for injecting callbacks into all langchain calls using tracing hooks

0.7.1

  • Pass along model params to the server

0.7.3

  • add explicit support for api_key

0.8.0

  • switch to api_key, pretend project_key isn't even a thing

0.8.1

  • Used root parent_run_id in langchain calls
  • Unified langchain run id accounting

0.8.2

  • added ability to pass ip_api_name into langchain template's additional_kwargs, like:
    template = TemplateString(
        "Hello, {{name}}!",
        additional_kwargs={"ip_api_name": "my-api"},
    )
    

0.8.3

  • Added context manager for basic openai calls
  • Better docs

0.8.4

  • Switched to load() now that it is in langchain proper
  • Resolved None to {varname} in templates rather than leaving it out

0.9.0

  • Complete rewrite of prompt resolution for chats: better support for agents

0.9.1

  • Thread through chat_id

0.9.2

  • Fixed typos in docs, clarified using ip_ parameters
  • Fleshed out working TemplateText / TemplateChat templates

0.9.3

  • Simplified requirements a bit
