LLM toolkit for lightning-fast, high-quality development

Mirascope is an LLM toolkit for lightning-fast, high-quality development. Building with Mirascope feels like writing the Python code you’re already used to writing.

Installation

pip install mirascope

You can also install additional optional dependencies if you’re using those features:

pip install mirascope[anthropic]  # AnthropicCall, ...
pip install mirascope[gemini]     # GeminiCall, ...
pip install mirascope[wandb]      # WandbOpenAICall, ...
pip install mirascope[all]        # all optional dependencies
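
For example, installing the anthropic extra gives you provider-specific classes that follow the same pattern as the OpenAI examples below. A minimal sketch (AnthropicCallParams and the model name are assumptions mirroring the OpenAI equivalents, so check the docs for the exact names):

import os

from mirascope.anthropic import AnthropicCall, AnthropicCallParams

os.environ["ANTHROPIC_API_KEY"] = "YOUR_API_KEY"


class Summarizer(AnthropicCall):
    prompt_template = "Summarize the following text in one sentence: {text}"

    text: str

    # `AnthropicCallParams` and the model name are assumptions based on the
    # OpenAI examples below; consult the docs for the exact names.
    call_params = AnthropicCallParams(model="claude-3-haiku-20240307")


summary = Summarizer(text="...").call()
print(summary.content)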

Examples

Colocation

Colocation is the core of our philosophy. Everything that can impact the quality of a call to an LLM — from the prompt to the model to the temperature — must live together so that we can properly version and test the quality of our calls over time. Because everything lives together, we have all of the information and metadata we need for analysis, which is particularly important during rapid development.

import os

from mirascope import tags
from mirascope.openai import OpenAICall, OpenAICallParams

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"


@tags(["version:0003"])
class Editor(OpenAICall):
    prompt_template = """
    SYSTEM:
    You are a top class manga editor.
    
    USER:
    I'm working on a new storyline. What do you think?
    {storyline}
    """
    
    storyline: str
    
    call_params = OpenAICallParams(model="gpt-4", temperature=0.4)


storyline = "..."
editor = Editor(storyline=storyline)

print(editor.messages())
# > [{'role': 'system', 'content': 'You are a top class manga editor.'}, {'role': 'user', 'content': "I'm working on a new storyline. What do you think?\n..."}]

critique = editor.call()
print(critique.content)
# > I think the beginning starts off great, but...

print(editor.dump() | critique.dump())
# {
#     "tags": ["version:0003"],
#     "template": "SYSTEM:\nYou are a top class manga editor.\n\nUSER:\nI'm working on a new storyline. What do you think?\n{storyline}",
#     "inputs": {"storyline": "..."},
#     "start_time": 1710452778501.079,
#     "end_time": 1710452779736.8418,
#     "output": {
#         "id": "chatcmpl-92nBykcXyTpxwAbTEM5BOKp99fVmv",
#         "choices": [
#             {
#                 "finish_reason": "stop",
#                 "index": 0,
#                 "logprobs": None,
#                 "message": {
#                     "content": "I think the beginning starts off great, but...",
#                     "role": "assistant",
#                     "function_call": None,
#                     "tool_calls": None,
#                 },
#             }
#         ],
#         "created": 1710452778,
#         "model": "gpt-4-0613",
#         "object": "chat.completion",
#         "system_fingerprint": None,
#         "usage": {"completion_tokens": 25, "prompt_tokens": 33, "total_tokens": 58},
#     },
# }
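
Because dump() returns a plain dictionary, a simple pattern for versioning and analysis is to append each call's dump to a log. This is just a sketch built on the example above; the file name and JSON Lines format are arbitrary choices, not part of Mirascope:

import json

# Append the combined dump (tags, template, inputs, timings, raw output) to a
# JSON Lines log so calls can be compared across `version:*` tags later.
with open("editor_calls.jsonl", "a") as f:
    f.write(json.dumps(editor.dump() | critique.dump()) + "\n")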

Chat History

Our template parser makes it easy to insert chat history into your prompts:

from openai.types.chat import ChatCompletionMessageParam

from mirascope.openai import OpenAICall


class Librarian(OpenAICall):
    prompt_template = """
    SYSTEM: You are the world's greatest librarian.
    MESSAGES: {history}
    USER: {question}
    """

    question: str
    history: list[ChatCompletionMessageParam] = []


librarian = Librarian(question="", history=[])
while True:
    librarian.question = input("(User): ")
    response = librarian.call()
    librarian.history += [
        {"role": "user", "content": librarian.question},
        {"role": "assistant", "content": response.content},
    ]
    print(f"(Assistant): {response.content}")

#> (User): What fantasy book should I read?
#> (Assistant): Have you read the Name of the Wind?
#> (User): I have! What do you like about it?
#> (Assistant): I love the intricate world-building...
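
The MESSAGES keyword splices the history list directly into the message array, so messages() reflects the full conversation. A quick sketch (the output shown is illustrative):

librarian = Librarian(
    question="What do you like about it?",
    history=[
        {"role": "user", "content": "What fantasy book should I read?"},
        {"role": "assistant", "content": "Have you read the Name of the Wind?"},
    ],
)
print(librarian.messages())
# > [{'role': 'system', 'content': "You are the world's greatest librarian."},
# >  {'role': 'user', 'content': 'What fantasy book should I read?'},
# >  {'role': 'assistant', 'content': 'Have you read the Name of the Wind?'},
# >  {'role': 'user', 'content': 'What do you like about it?'}]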

Tools (Function Calling)

We’ve made implementing and using tools (function calling) intuitive:

from typing import Literal

from mirascope.openai import OpenAICall, OpenAICallParams


def get_current_weather(
    location: str, unit: Literal["celsius", "fahrenheit"] = "fahrenheit"
):
    """Get the current weather in a given location."""
    if "tokyo" in location.lower():
        print(f"It is 10 degrees {unit} in Tokyo, Japan")
    elif "san francisco" in location.lower():
        print(f"It is 72 degrees {unit} in San Francisco, CA")
    elif "paris" in location.lower():
        print(f"It is 22 degress {unit} in Paris, France")
    else:
        print("I'm not sure what the weather is like in {location}")


class Forecast(OpenAICall):
    prompt_template = "What's the weather in Tokyo?"

    call_params = OpenAICallParams(model="gpt-4", tools=[get_current_weather])

tool = Forecast().call().tool
if tool:
    tool.fn(**tool.args)
    #> It is 10 degrees fahrenheit in Tokyo, Japan

Chaining

Chaining multiple calls together, for example for Chain of Thought (CoT), is as simple as writing a property or function:

import os
from functools import cached_property

from mirascope.openai import OpenAICall, OpenAICallParams

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"


class ChefSelector(OpenAICall):
    prompt_template = "Name a chef who is really good at cooking {food_type} food"

    food_type: str

    call_params = OpenAICallParams(model="gpt-3.5-turbo-0125")


class RecipeRecommender(ChefSelector):
    prompt_template = """
    SYSTEM:
    Imagine that you are chef {chef}.
    Your task is to recommend recipes that you, {chef}, would be excited to serve.

    USER:
    Recommend a {food_type} recipe using {ingredient}.
    """

    ingredient: str
    
    call_params = OpenAICallParams(model="gpt-4")

    @cached_property  # !!! so multiple access doesn't make multiple calls
    def chef(self) -> str:
        """Uses `ChefSelector` to select the chef based on the food type."""
        return ChefSelector(food_type=self.food_type).call().content



response = RecipeRecommender(food_type="japanese", ingredient="apples").call()
print(response.content)
# > Certainly! Here's a recipe for a delicious and refreshing Japanese Apple Salad: ...
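
If you prefer keeping each call independent, the same chain can be expressed with a plain function. The class below is a hypothetical variant of RecipeRecommender that accepts the chef name as a regular field rather than computing it in a property:

class ChefRecipeRecommender(OpenAICall):
    """Hypothetical variant of `RecipeRecommender` that takes the chef as a field."""

    prompt_template = """
    SYSTEM:
    Imagine that you are chef {chef}.
    Your task is to recommend recipes that you, {chef}, would be excited to serve.

    USER:
    Recommend a {food_type} recipe using {ingredient}.
    """

    chef: str
    food_type: str
    ingredient: str

    call_params = OpenAICallParams(model="gpt-4")


def recommend_recipe(food_type: str, ingredient: str) -> str:
    """Chains the two calls: select a chef, then ask that chef for a recipe."""
    chef = ChefSelector(food_type=food_type).call().content
    recommender = ChefRecipeRecommender(
        chef=chef, food_type=food_type, ingredient=ingredient
    )
    return recommender.call().content


print(recommend_recipe(food_type="japanese", ingredient="apples"))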

Extracting Structured Information

Extraction is a convenience built on top of tools that makes pulling structured information out of text reliable:

from typing import Literal, Type

from mirascope.openai import OpenAIExtractor
from pydantic import BaseModel


class TaskDetails(BaseModel):
    description: str
    due_date: str
    priority: Literal["low", "normal", "high"]


class TaskExtractor(OpenAIExtractor[TaskDetails]):
    extract_schema: Type[TaskDetails] = TaskDetails
    prompt_template = """
    Extract the task details from the following task:
    {task}
    """

    task: str


task = "Submit quarterly report by next Friday. Task is high priority."
task_details = TaskExtractor(task=task).extract()
assert isinstance(task_details, TaskDetails)
print(task_details)
#> description='Submit quarterly report' due_date='next Friday' priority='high'
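
Since TaskDetails is an ordinary Pydantic model, the extracted result works directly with the rest of your code. A small sketch, assuming Pydantic v2's model_dump_json:

# The extracted result validates and serializes like any other Pydantic model.
print(task_details.priority)
#> high
print(task_details.model_dump_json())
#> {"description":"Submit quarterly report","due_date":"next Friday","priority":"high"}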

FastAPI Integration

Since we’ve built our BasePrompt on top of Pydantic, we integrate with tools like FastAPI out-of-the-box:

import os
from typing import Type

from fastapi import FastAPI
from mirascope.openai import OpenAIExtractor
from pydantic import BaseModel

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

app = FastAPI()


class Book(BaseModel):
    title: str
    author: str


class BookRecommender(OpenAIExtractor[Book]):
    extract_schema: Type[Book] = Book
    prompt_template = "Please recommend a {genre} book."

    genre: str


@app.post("/")
def root(book_recommender: BookRecommender) -> Book:
    """Generates a book based on provided `genre`."""
    return book_recommender.extract()
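
A quick way to exercise the endpoint is FastAPI's TestClient. Note that this sketch still makes a real call to OpenAI, so it needs a valid API key:

from fastapi.testclient import TestClient

client = TestClient(app)

response = client.post("/", json={"genre": "fantasy"})
print(response.json())
#> {'title': '...', 'author': '...'}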

Roadmap

  • Extracting structured information using LLMs
  • RAG
  • Agents
  • Streaming extraction for tools (function calling)
  • Additional template parsing for more complex messages
    • Chat History
    • Better Tool Messages
    • Additional Metadata
    • Vision
  • Support for more LLM providers:
    • Anthropic
    • Gemini
    • Mistral
    • Groq
    • Cohere
    • HuggingFace
  • Integrations
    • Weights & Biases Trace
    • Weave by Weights & Biases
    • LangChain / LangSmith
    • … tell us what you’d like integrated!
  • Evaluating prompts and their quality by version

Versioning

Mirascope uses Semantic Versioning.

License

This project is licensed under the terms of the MIT License.
