[
  {
    "path": ".github/workflows/black.yml",
    "content": "name: Lint\n\non: [push, pull_request]\n\njobs:\n  lint:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v3\n      - uses: psf/black@stable\n"
  },
  {
    "path": ".github/workflows/python_tests.yml",
    "content": "name: Python Tests\n\non: [push, pull_request]\n\njobs:\n  build_and_test:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        python-version: [ \"3.10\", \"3.11\", \"3.12\" ]\n    steps:\n      - uses: actions/checkout@v4\n      - name: Set up Python ${{ matrix.python-version }}\n        uses: actions/setup-python@v5\n        with:\n          python-version: ${{ matrix.python-version }}\n      - name: Install dependencies\n        run: |\n          python -m pip install --upgrade pip\n          pip install pdm\n          pdm install\n      - name: Test with pytest\n        run: pdm run pytest\n"
  },
  {
    "path": ".gitignore",
    "content": "# DuckDB Stores\n*.db\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n*.json\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\ncover/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\n.pybuilder/\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n#   For a library or package, you might want to ignore these files since the code is\n#   intended to run in multiple environments; otherwise, check them in:\n# .python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# poetry\n#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.\n#   This is especially recommended for binary packages to ensure reproducibility, and is more\n#   commonly ignored for libraries.\n#   
https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control\n#poetry.lock\n\n# pdm\n#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.\n#pdm.lock\n#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it\n#   in version control.\n#   https://pdm.fming.dev/latest/usage/project/#working-with-version-control\n.pdm.toml\n.pdm-python\n.pdm-build/\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# pytype static type analyzer\n.pytype/\n\n# Cython debug symbols\ncython_debug/\n\n# PyCharm\n#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can\n#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore\n#  and can be added to the global gitignore or merged into this file.  For a more nuclear\n#  option (not recommended) you can uncomment the following to ignore the entire idea folder.\n#.idea/\n/store\n.db.wal\n.wal\n.db"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing\n\n## Development\n\nWe use [PDM](https://pdm-project.org/en/latest/) to manage dependencies and virtual environments. Make sure you have it installed and then run:\n\n```bash\npdm install\n```\n\n## Publishing\n\nConfigure the PyPI credentials through environment variables `PDM_PUBLISH_USERNAME=\"__token__\"` and `PDM_PUBLISH_PASSWORD=<your-pypi-token>` and run:\n\n```bash\npdm publish\n```\n"
  },
  {
    "path": "README.md",
    "content": "<div align=\"center\">\n\n<h1>🤗🔭 Observers 🔭🤗</h1>\n\n<h3 align=\"center\">A Lightweight Library for AI Observability</h3>\n\n</div>\n\n![Observers Logo](./assets/observers.png)\n\n## Installation\n\nFirst things first! You can install the SDK with pip as follows:\n\n```bash\npip install observers\n```\n\nOr if you want to use other LLM providers through AISuite or Litellm, you can install the SDK with pip as follows:\n\n```bash\npip install observers[aisuite] # or observers[litellm]\n```\n\nFor open telemetry, you can install the following:\n\n```bash\npip install observers[opentelemetry]\n```\n\n## Usage\n\nWe differentiate between observers and stores. Observers wrap generative AI APIs (like OpenAI or llama-index) and track their interactions. Stores are classes that sync these observations to different storage backends (like DuckDB or Hugging Face datasets).\n\nTo get started you can run the code below. It sends requests to a HF serverless endpoint and log the interactions into a Hub dataset, using the default store `DatasetsStore`. The dataset will be pushed to your personal workspace (http://hf.co/{your_username}). 
To learn how to configure stores, go to the next section.\n\n```python\nfrom openai import OpenAI\nfrom observers import wrap_openai\n\nopenai_client = OpenAI()\n\nclient = wrap_openai(openai_client)\n\nresponse = client.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=[{\"role\": \"user\", \"content\": \"Tell me a joke.\"}],\n)\nprint(response)\n```\n\n## Observers\n\n### Supported Observers\n\nWe support both sync and async versions of the following observers:\n\n- [OpenAI](https://openai.com/) and every other LLM provider that implements the [OpenAI API message format](https://platform.openai.com/docs/api-reference)\n- [Hugging Face transformers](https://huggingface.co/docs/transformers/index), which offers a simple API for running LLM inference through its [TextGenerationPipeline](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.TextGenerationPipeline).\n- [Hugging Face Inference Client](https://huggingface.co/docs/huggingface_hub/guides/inference), the official client for Hugging Face's [Serverless Inference API](https://huggingface.co/docs/api-inference/en/index), a fast API with a free tier for running LLM inference with models from the Hugging Face Hub.\n- [AISuite](https://github.com/andrewyng/aisuite), an LLM router by Andrew Ng that maps to [many LLM API providers](https://github.com/andrewyng/aisuite/tree/main/aisuite/providers) with a uniform interface.\n- [LiteLLM](https://docs.litellm.ai/docs/), a library that lets you call [many different LLM APIs](https://docs.litellm.ai/docs/providers) with a uniform interface.\n\n### Change the OpenAI-compliant LLM provider\n\nThe `wrap_openai` function allows you to wrap any OpenAI-compliant LLM provider. 
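For example, any provider that exposes an OpenAI-compatible endpoint can be wrapped by pointing the client's `base_url` at it; a minimal sketch, assuming an Ollama server running locally on its default port:\n\n```python\nfrom openai import OpenAI\n\nfrom observers import wrap_openai\n\n# Assumption: an Ollama server is serving an OpenAI-compatible API\n# at http://localhost:11434/v1 (Ollama's default address)\nopenai_client = OpenAI(base_url=\"http://localhost:11434/v1\")\nclient = wrap_openai(openai_client)\n```\n\n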
Take a look at [the example doing this for Ollama](./examples/models/ollama_example.py) for more details.\n\n## Stores\n\n### Supported Stores\n\n| Store | Example | Annotate | Local | Free | UI filters | SQL filters |\n|-------|---------|----------|-------|------|-------------|--------------|\n| [Hugging Face Datasets](https://huggingface.co/docs/huggingface_hub/en/package_reference/io-management#datasets) | [example](./examples/stores/datasets_example.py) | ❌ | ❌ | ✅ | ✅ | ✅ |\n| [DuckDB](https://duckdb.org/) | [example](./examples/stores/duckdb_example.py) | ❌ | ✅ | ✅ | ❌ | ✅ |\n| [Argilla](https://argilla.io/) | [example](./examples/stores/argilla_example.py) | ✅ | ❌ | ✅ | ✅ | ❌ |\n| [OpenTelemetry](https://opentelemetry.io/) | [example](./examples/stores/opentelemetry_example.py) | ?* | ?* | ?* | ?* | ?* |\n| [Honeycomb](https://honeycomb.io/) | [example](./examples/stores/opentelemetry_example.py) | ✅ | ❌ | ✅ | ✅ | ✅ |\n\n\\* For the OpenTelemetry store, these features depend on the provider you use.\n\n### Viewing / Querying\n\n#### Hugging Face Datasets Store\n\nTo view and query Hugging Face Datasets, you can use the [Hugging Face Datasets Viewer](https://huggingface.co/docs/hub/en/datasets-viewer). You can [find example datasets on the Hugging Face Hub](https://huggingface.co/datasets?other=observers). From there, you can query the dataset using SQL or explore it in the UI. Take a look at [the example](./examples/stores/datasets_example.py) for more details.\n\n![Hugging Face Datasets Viewer](./assets/datasets.png)\n\n#### DuckDB Store\n\nThe [DuckDB](https://duckdb.org/) store writes to a local database that can be viewed and queried using the [DuckDB CLI](https://duckdb.org/#quickinstall). 
Take a look at [the example](./examples/stores/duckdb_example.py) for more details.\n\n```bash\n> duckdb store.db\n> from openai_records limit 10;\n┌──────────────────────┬──────────────────────┬──────────────────────┬──────────────────────┬───┬─────────┬──────────────────────┬───────────┐\n│          id          │        model         │      timestamp       │       messages       │ … │  error  │     raw_response     │ synced_at │\n│       varchar        │       varchar        │      timestamp       │ struct(\"role\" varc…  │   │ varchar │         json         │ timestamp │\n├──────────────────────┼──────────────────────┼──────────────────────┼──────────────────────┼───┼─────────┼──────────────────────┼───────────┤\n│ 89cb15f1-d902-4586…  │ Qwen/Qwen2.5-Coder…  │ 2024-11-19 17:12:3…  │ [{'role': user, 'c…  │ … │         │ {\"id\": \"\", \"choice…  │           │\n│ 415dd081-5000-4d1a…  │ Qwen/Qwen2.5-Coder…  │ 2024-11-19 17:28:5…  │ [{'role': user, 'c…  │ … │         │ {\"id\": \"\", \"choice…  │           │\n│ chatcmpl-926         │ llama3.1             │ 2024-11-19 17:31:5…  │ [{'role': user, 'c…  │ … │         │ {\"id\": \"chatcmpl-9…  │           │\n├──────────────────────┴──────────────────────┴──────────────────────┴──────────────────────┴───┴─────────┴──────────────────────┴───────────┤\n│ 3 rows                                                                                                                16 columns (7 shown) │\n└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n```\n\n#### Argilla Store\n\nThe Argilla Store allows you to sync your observations to [Argilla](https://argilla.io/). To use it, you first need to create a [free Argilla deployment on Hugging Face](https://docs.argilla.io/latest/getting_started/quickstart/). 
Take a look at [the example](./examples/stores/argilla_example.py) for more details.\n\n![Argilla Store](./assets/argilla.png)\n\n#### OpenTelemetry Store\n\nThe OpenTelemetry \"Store\" allows you to sync your observations to any provider that supports OpenTelemetry! Examples are provided for [Honeycomb](https://honeycomb.io), but any provider that supplies OpenTelemetry compatible environment variables should Just Work®, and your queries will be executed as usual in your provider, against _trace_ data coming from Observers.\n\n## Contributing\n\nSee [CONTRIBUTING.md](./CONTRIBUTING.md)\n"
  },
  {
    "path": "examples/models/aisuite_example.py",
    "content": "import os\n\nimport aisuite as ai\n\nfrom observers import wrap_aisuite\n\n# Initialize AI Suite client\nclient = ai.Client()\n\n# Wrap client to enable tracking\nclient = wrap_aisuite(client)\n\n# Set API keys\nos.environ[\"ANTHROPIC_API_KEY\"] = \"your-api-key\"\nos.environ[\"OPENAI_API_KEY\"] = \"your-api-key\"\n\n# Define models to test\nmodels = [\"openai:gpt-4o\", \"anthropic:claude-3-5-sonnet-20240620\"]\n\n# Define conversation messages\nmessages = [\n    {\"role\": \"system\", \"content\": \"Respond in Pirate English.\"},\n    {\"role\": \"user\", \"content\": \"Tell me a joke.\"},\n]\n\n# Get completions from each model\nfor model in models:\n    response = client.chat.completions.create(\n        model=model, messages=messages, temperature=0.75\n    )\n    print(response.choices[0].message.content)\n"
  },
  {
    "path": "examples/models/async_openai_example.py",
    "content": "import asyncio\nimport os\n\nfrom openai import AsyncOpenAI\n\nfrom observers import wrap_openai\n\nopenai_client = AsyncOpenAI(api_key=os.getenv(\"OPENAI_API_KEY\"))\n\nclient = wrap_openai(openai_client)\n\n\nasync def get_response() -> None:\n    response = await client.chat.completions.create(\n        model=\"gpt-4o\",\n        messages=[{\"role\": \"user\", \"content\": \"Tell me a joke.\"}],\n    )\n    print(response)\n\n\nif __name__ == \"__main__\":\n    import asyncio\n\n    asyncio.run(get_response())\n"
  },
  {
    "path": "examples/models/hf_client_example.py",
    "content": "import os\n\nfrom huggingface_hub import InferenceClient\n\nimport observers\n\napi_key = os.getenv(\"HF_TOKEN\")\n\n\n# Patch the HF client\nhf_client = InferenceClient(token=api_key)\nclient = observers.wrap_hf_client(hf_client)\n\nresponse = client.chat.completions.create(\n    model=\"Qwen/Qwen2.5-Coder-32B-Instruct\",\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"Write a function in Python that checks if a string is a palindrome.\",\n        }\n    ],\n)\n"
  },
  {
    "path": "examples/models/litellm_example.py",
    "content": "import os\n\nfrom litellm import completion\n\nfrom observers import wrap_litellm\n\n# Ensure you have both API keys set in environment variables\nos.environ[\"OPENAI_API_KEY\"] = \"your-api-key\"\nos.environ[\"ANTHROPIC_API_KEY\"] = \"your-api-key\"\n\n# Wrap the completion function to enable tracking\nclient = wrap_litellm(completion)\n\n# Define models and messages\nmodels = [\"gpt-3.5-turbo\", \"claude-3-5-sonnet-20240620\"]\n\nmessages = [{\"content\": \"Hello, how are you?\", \"role\": \"user\"}]\n\n# Get completions from each model\nfor model in models:\n    response = client.chat.completions.create(\n        model=model, messages=messages, temperature=0.75\n    )\n    print(response.choices[0].message.content)\n"
  },
  {
    "path": "examples/models/ollama_example.py",
    "content": "from openai import OpenAI\n\nfrom observers import wrap_openai\n\n# Ollama is running locally at http://localhost:11434/v1\nopenai_client = OpenAI(base_url=\"http://localhost:11434/v1\")\n\nclient = wrap_openai(openai_client)\n\nresponse = client.chat.completions.create(\n    model=\"llama3.1\",\n    messages=[\n        {\"role\": \"user\", \"content\": \"Tell me a joke.\"},\n    ],\n)\nprint(response)\n"
  },
  {
    "path": "examples/models/openai_example.py",
    "content": "from openai import OpenAI\n\nfrom observers import wrap_openai\n\n\nopenai_client = OpenAI()\n\nclient = wrap_openai(openai_client)\n\nresponse = client.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=[{\"role\": \"user\", \"content\": \"Tell me a joke in the voice of a pirate.\"}],\n    temperature=0.5,\n)\n\nprint(response.choices[0].message.content)\n"
  },
  {
    "path": "examples/models/stream_async_hf_client_example.py",
    "content": "import os\n\nfrom huggingface_hub import AsyncInferenceClient\n\nimport observers\n\napi_key = os.getenv(\"HF_TOKEN\")\n\n\n# Patch the HF client\nhf_client = AsyncInferenceClient(token=api_key)\nclient = observers.wrap_hf_client(hf_client)\n\n\nasync def get_response() -> None:\n    response = await client.chat.completions.create(\n        model=\"Qwen/Qwen2.5-Coder-32B-Instruct\",\n        messages=[\n            {\n                \"role\": \"user\",\n                \"content\": \"Write a function in Python that checks if a string is a palindrome.\",\n            }\n        ],\n        stream=True,\n    )\n\n    async for chunk in response:\n        print(chunk)\n\n\nif __name__ == \"__main__\":\n    import asyncio\n\n    asyncio.run(get_response())\n"
  },
  {
    "path": "examples/models/stream_openai_example.py",
    "content": "import asyncio\nimport os\n\nfrom openai import AsyncOpenAI\n\nfrom observers import wrap_openai\n\nopenai_client = AsyncOpenAI(api_key=os.getenv(\"OPENAI_API_KEY\"))\n\nclient = wrap_openai(openai_client)\n\n\nasync def get_response() -> None:\n    response = await client.chat.completions.create(\n        model=\"gpt-4o\",\n        messages=[{\"role\": \"user\", \"content\": \"Tell me a joke.\"}],\n        stream=True,\n    )\n    async for chunk in response:\n        print(chunk)\n\n\nif __name__ == \"__main__\":\n    import asyncio\n\n    asyncio.run(get_response())\n"
  },
  {
    "path": "examples/models/transformers_example.py",
    "content": "import os\n\nfrom transformers import pipeline\n\nimport observers\n\ntoken = os.getenv(\"HF_TOKEN\")\npipe = pipeline(\n    \"text-generation\",\n    model=\"Qwen/Qwen2.5-0.5B-Instruct\",\n    token=token,\n)\nclient = observers.wrap_transformers(pipe)\nmessages = [\n    {\n        \"role\": \"system\",\n        \"content\": \"You are a pirate chatbot who always responds in pirate speak!\",\n    },\n    {\"role\": \"user\", \"content\": \"Who are you?\"},\n]\nresponse = client.chat.completions.create(\n    messages=messages,\n    max_new_tokens=256,\n)\nprint(response)\n"
  },
  {
    "path": "examples/openai_function_calling_example.py",
    "content": "from openai import OpenAI\n\nfrom observers import wrap_openai\nfrom observers.stores import DatasetsStore\n\nstore = DatasetsStore(\n    repo_name=\"gpt-4o-function-calling-traces\",\n    every=5,  # sync every 5 minutes\n)\n\nopenai_client = OpenAI()\n\ntools = [\n    {\n        \"type\": \"function\",\n        \"function\": {\n            \"name\": \"get_delivery_date\",\n            \"description\": \"Get the delivery date for a customer's order. Call this whenever you need to know the delivery date, for example when a customer asks 'Where is my package'\",\n            \"parameters\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"order_id\": {\n                        \"type\": \"string\",\n                        \"description\": \"The customer's order ID.\",\n                    },\n                },\n                \"required\": [\"order_id\"],\n                \"additionalProperties\": False,\n            },\n        },\n    }\n]\n\nmessages = [\n    {\n        \"role\": \"system\",\n        \"content\": \"You are a helpful customer support assistant. Use the supplied tools to assist the user.\",\n    },\n    {\n        \"role\": \"user\",\n        \"content\": \"Hi, can you tell me the delivery date for my order? It's order 1234567890.\",\n    },\n]\n\n\nclient = wrap_openai(openai_client, store=store)\n\nresponse = client.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=messages,\n    tools=tools,\n)\n"
  },
  {
    "path": "examples/stores/argilla_example.py",
    "content": "from argilla import TextQuestion  # noqa\nfrom observers import wrap_openai\nfrom observers.stores import ArgillaStore\nfrom openai import OpenAI\n\napi_url = \"<argilla-api-url>\"\napi_key = \"<argilla-api-key>\"\n\nstore = ArgillaStore(\n    api_url=api_url,\n    api_key=api_key,\n    # questions=[TextQuestion(name=\"text\")], optional\n)\n\nopenai_client = OpenAI()\n\nclient = wrap_openai(openai_client, store=store)\n\nresponse = client.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=[{\"role\": \"user\", \"content\": \"Tell me a joke.\"}],\n)\n\nprint(response.choices[0].message.content)\n"
  },
  {
    "path": "examples/stores/datasets_example.py",
    "content": "from observers import wrap_openai\nfrom observers.stores import DatasetsStore\nfrom openai import OpenAI\n\nstore = DatasetsStore(\n    repo_name=\"gpt-4o-traces\",\n    every=5,  # sync every 5 minutes\n)\n\nopenai_client = OpenAI()\n\nclient = wrap_openai(openai_client, store=store)\n\nresponse = client.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=[{\"role\": \"user\", \"content\": \"Tell me a joke.\"}],\n)\n\nprint(response.choices[0].message.content)\n"
  },
  {
    "path": "examples/stores/duckdb_example.py",
    "content": "from observers import wrap_openai\nfrom observers.stores import DuckDBStore\nfrom openai import OpenAI\n\nstore = DuckDBStore()\n\nopenai_client = OpenAI()\n\nclient = wrap_openai(openai_client, store=store)\n\nresponse = client.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=[{\"role\": \"user\", \"content\": \"Tell me a joke.\"}],\n)\n"
  },
  {
    "path": "examples/stores/opentelemetry_example.py",
    "content": "import os\n\nfrom openai import OpenAI\n\nfrom observers import wrap_openai\nfrom observers.stores.opentelemetry import OpenTelemetryStore\n\n\n# Use your usual environment variables to configure OpenTelemetry\n# Here's an example for Honeycomb\nos.environ.setdefault(\"OTEL_SERVICE_NAME\", \"llm-observer-example\")\nos.environ.setdefault(\"OTEL_EXPORTER_OTLP_PROTOCOL\", \"http/protobuf\")\nos.environ.setdefault(\"OTEL_EXPORTER_OTLP_ENDPOINT\", \"https://api.honeycomb.io\")\n\n# Note: Keeping the sensitive ingest key in actual env vars, not in code\n# export OTEL_EXPORTER_OTLP_HEADERS=\"x-honeycomb-team=<api-key>\"\n\nstore = OpenTelemetryStore()\n\nopenai_client = OpenAI()\n\nclient = wrap_openai(openai_client, store=store)\n\nresponse = client.chat.completions.create(\n    model=\"gpt-4o\", messages=[{\"role\": \"user\", \"content\": \"Tell me a joke.\"}]\n)\n\n# The OpenTelemetryStore links multiple completions into a trace\nresponse = client.chat.completions.create(\n    model=\"gpt-4o\", messages=[{\"role\": \"user\", \"content\": \"Tell me another joke.\"}]\n)\n# Now query your Opentelemetry Compatible observability store as you usually do!\n"
  },
  {
    "path": "examples/vision_example.py",
    "content": "from openai import OpenAI\n\nfrom observers import wrap_openai\nfrom observers.stores import DatasetsStore\n\n\nstore = DatasetsStore(\n    repo_name=\"gpt-4o-mini-vision-traces\",\n    every=5,  # sync every 5 minutes\n)\n\nopenai_client = OpenAI()\nclient = wrap_openai(openai_client, store=store)\n\nresponse = client.chat.completions.create(\n    model=\"gpt-4o-mini\",\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": [\n                {\"type\": \"text\", \"text\": \"What’s in this image?\"},\n                {\n                    \"type\": \"image_url\",\n                    \"image_url\": {\n                        \"url\": \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\",\n                    },\n                },\n            ],\n        }\n    ],\n    max_tokens=300,\n)\n\nprint(response.choices[0].message.content)\n"
  },
  {
    "path": "pyproject.toml",
    "content": "[project]\nname = \"observers\"\nversion = \"0.2.0\"\ndescription = \"🤗 Observers: A Lightweight Library for AI Observability\"\nauthors = [\n    {name = \"davidberenstein1957\", email = \"david.m.berenstein@gmail.com\"},\n]\ntags = [\n    \"observability\",\n    \"monitoring\",\n    \"logging\",\n    \"model-monitoring\",\n    \"model-observability\",\n    \"generative-ai\",\n    \"ai\",\n    \"traceability\",\n    \"instrumentation\",\n    \"instrumentation-library\",\n    \"instrumentation-sdk\",\n]\nrequires-python = \"<3.13,>=3.10\"\nreadme = \"README.md\"\nlicense = {text = \"Apache 2\"}\n\ndependencies = [\n    \"duckdb>=1.0.0\",\n    \"datasets>=3.0.0\",\n    \"openai>=1.50.0\",\n    \"argilla>=2.3.0\",\n]\n\n[project.optional-dependencies]\naisuite = [\n    \"aisuite[all]>=0.1.6\",\n]\ndev = [\n    \"pytest>=8.3.3\",\n    \"black>=24.10.0\",\n    \"jinja2>=3.1.4\",\n    \"pytest-asyncio>=0.25.1\",\n]\nlitellm = [\n    \"litellm>=1.52\",\n]\ntransformers = [\n    \"transformers>=4.46.0\",\n    \"torch>=2\",\n]\nopentelemetry = [\n    \"opentelemetry-api>=1.28.0\",\n    \"opentelemetry-sdk>=1.28.0\",\n    \"opentelemetry-exporter-otlp-proto-grpc>=1.28.0\",\n]\n\n[build-system]\nrequires = [\"pdm-backend\"]\nbuild-backend = \"pdm.backend\"\n\n\n[tool.pdm]\ndistribution = true\n"
  },
  {
    "path": "src/observers/__init__.py",
    "content": "from typing import List\n\nfrom .models.aisuite import wrap_aisuite\nfrom .models.base import ChatCompletionObserver, ChatCompletionRecord\nfrom .models.hf_client import wrap_hf_client\nfrom .models.litellm import wrap_litellm\nfrom .models.openai import OpenAIRecord, wrap_openai\nfrom .models.transformers import TransformersRecord, wrap_transformers\nfrom .stores.base import Store\nfrom .stores.datasets import DatasetsStore\n\n__all__: List[str] = [\n    \"ChatCompletionObserver\",\n    \"ChatCompletionRecord\",\n    \"TransformersRecord\",\n    \"OpenAIRecord\",\n    \"wrap_openai\",\n    \"wrap_transformers\",\n    \"DatasetsStore\",\n    \"Store\",\n    \"wrap_aisuite\",\n    \"wrap_litellm\",\n    \"wrap_hf_client\",\n    \"ArgillaStore\",\n    \"DuckDBStore\",\n]\n"
  },
  {
    "path": "src/observers/base.py",
    "content": "import uuid\nfrom abc import ABC, abstractmethod\nfrom dataclasses import dataclass, field\nfrom typing import TYPE_CHECKING, Any, Dict, List, Literal, Optional\nfrom typing_extensions import Literal\n\nif TYPE_CHECKING:\n    from argilla import Argilla\n\n\n@dataclass\nclass Function:\n    \"\"\"Function tool call information\"\"\"\n\n    name: str\n    arguments: str\n\n\n@dataclass\nclass ToolCall:\n    \"\"\"Tool call information\"\"\"\n\n    id: str\n    type: Literal[\"function\"]\n    function: Function\n\n\n@dataclass\nclass Message:\n    role: Literal[\"system\", \"user\", \"assistant\", \"function\"]\n    content: str\n    tool_calls: Optional[List[ToolCall]] = None\n    \"\"\"The tool calls generated by the model, such as function calls.\"\"\"\n\n    function_call: Optional[Function] = None\n    \"\"\"Deprecated and replaced by `tool_calls`.\n\n    The name and arguments of a function that should be called, as generated by the\n    model.\n    \"\"\"\n\n\n@dataclass\nclass Record(ABC):\n    \"\"\"\n    Base class for storing model response information\n    \"\"\"\n\n    client_name: str = field(init=False)\n    id: str = field(default_factory=lambda: str(uuid.uuid4()))\n    tags: List[str] = None\n    properties: Dict[str, Any] = None\n    error: Optional[str] = None\n    raw_response: Optional[Dict] = None\n\n    @property\n    @abstractmethod\n    def json_fields(self):\n        \"\"\"Return the DuckDB JSON fields for the record\"\"\"\n        pass\n\n    @property\n    @abstractmethod\n    def image_fields(self):\n        \"\"\"Return the DuckDB image fields for the record\"\"\"\n        pass\n\n    @property\n    @abstractmethod\n    def table_columns(self):\n        \"\"\"Return the DuckDB table columns for the record\"\"\"\n        pass\n\n    @property\n    @abstractmethod\n    def duckdb_schema(self):\n        \"\"\"Return the DuckDB schema for the record\"\"\"\n        pass\n\n    @property\n    @abstractmethod\n    def 
table_name(self):\n        \"\"\"Return the DuckDB table name for the record\"\"\"\n        pass\n\n    @abstractmethod\n    def argilla_settings(self, client: \"Argilla\"):\n        \"\"\"Return the Argilla settings for the record\"\"\"\n        pass\n"
  },
  {
    "path": "src/observers/frameworks/__init__.py",
    "content": ""
  },
  {
    "path": "src/observers/models/__init__.py",
    "content": ""
  },
  {
    "path": "src/observers/models/aisuite.py",
    "content": "from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union\n\nfrom observers.models.base import (\n    AsyncChatCompletionObserver,\n    ChatCompletionObserver,\n)\nfrom observers.models.openai import OpenAIRecord\n\nif TYPE_CHECKING:\n    from aisuite import Client\n\n    from observers.stores.argilla import ArgillaStore\n    from observers.stores.datasets import DatasetsStore\n    from observers.stores.duckdb import DuckDBStore\n\n\nclass AisuiteRecord(OpenAIRecord):\n    client_name: str = \"aisuite\"\n\n\ndef wrap_aisuite(\n    client: \"Client\",\n    store: Optional[Union[\"DatasetsStore\", \"DuckDBStore\", \"ArgillaStore\"]] = None,\n    tags: Optional[List[str]] = None,\n    properties: Optional[Dict[str, Any]] = None,\n    logging_rate: Optional[float] = 1,\n) -> Union[AsyncChatCompletionObserver, ChatCompletionObserver]:\n    \"\"\"Wraps Aisuite client to track API calls in a Store.\n\n    Args:\n        client (`Union[InferenceClient, AsyncInferenceClient]`):\n            The HF Inference Client to wrap.\n        store (`Union[DuckDBStore, DatasetsStore]`, *optional*):\n            The store to use to save the records.\n        tags (`List[str]`, *optional*):\n            The tags to associate with records.\n        properties (`Dict[str, Any]`, *optional*):\n            The properties to associate with records.\n        logging_rate (`float`, *optional*):\n            The logging rate to use for logging, defaults to 1\n\n    Returns:\n        `ChatCompletionObserver`:\n            The observer that wraps the Aisuite client.\n    \"\"\"\n    return ChatCompletionObserver(\n        client=client,\n        create=client.chat.completions.create,\n        format_input=lambda messages, **kwargs: kwargs | {\"messages\": messages},\n        parse_response=AisuiteRecord.from_response,\n        store=store,\n        tags=tags,\n        properties=properties,\n        logging_rate=logging_rate,\n    )\n"
  },
  {
    "path": "src/observers/models/base.py",
    "content": "import datetime\nimport random\nfrom dataclasses import dataclass, field\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Union\n\nfrom typing_extensions import Self\n\nfrom observers.base import Message, Record\nfrom observers.stores.datasets import DatasetsStore\n\nif TYPE_CHECKING:\n    from argilla import Argilla\n\n    from observers.stores.duckdb import DuckDBStore\n\n\n@dataclass\nclass ChatCompletionRecord(Record):\n    \"\"\"\n    Data class for storing chat completion records.\n    \"\"\"\n\n    model: str = None\n    timestamp: str = field(default_factory=lambda: datetime.datetime.now().isoformat())\n    arguments: Optional[Dict[str, Any]] = None\n\n    messages: List[Message] = None\n    assistant_message: Optional[str] = None\n    completion_tokens: Optional[int] = None\n    prompt_tokens: Optional[int] = None\n    total_tokens: Optional[int] = None\n    finish_reason: str = None\n    tool_calls: Optional[Any] = None\n    function_call: Optional[Any] = None\n\n    @classmethod\n    def from_response(cls, response=None, error=None, model=None, **kwargs):\n        \"\"\"Create a response record from an API response or error\"\"\"\n        pass\n\n    @property\n    def table_columns(self):\n        return [\n            \"id\",\n            \"model\",\n            \"timestamp\",\n            \"messages\",\n            \"assistant_message\",\n            \"completion_tokens\",\n            \"prompt_tokens\",\n            \"total_tokens\",\n            \"finish_reason\",\n            \"tool_calls\",\n            \"function_call\",\n            \"tags\",\n            \"properties\",\n            \"error\",\n            \"raw_response\",\n            \"arguments\",\n        ]\n\n    @property\n    def duckdb_schema(self):\n        return f\"\"\"\n        CREATE TABLE IF NOT EXISTS {self.table_name} (\n            id VARCHAR PRIMARY KEY,\n            model VARCHAR,\n            timestamp TIMESTAMP,\n            messages 
JSON,\n            assistant_message TEXT,\n            completion_tokens INTEGER,\n            prompt_tokens INTEGER,\n            total_tokens INTEGER,\n            finish_reason VARCHAR,\n            tool_calls JSON,\n            function_call JSON,\n            tags VARCHAR[],\n            properties JSON,\n            error VARCHAR,\n            raw_response JSON,\n            arguments JSON,\n        )\n        \"\"\"\n\n    def argilla_settings(self, client: \"Argilla\"):\n        import argilla as rg\n        from argilla import Settings\n\n        return Settings(\n            fields=[\n                rg.ChatField(\n                    name=\"messages\",\n                    description=\"The messages sent to the assistant.\",\n                    _client=client,\n                ),\n                rg.TextField(\n                    name=\"assistant_message\",\n                    description=\"The response from the assistant.\",\n                    required=False,\n                    client=client,\n                ),\n                rg.CustomField(\n                    name=\"tool_calls\",\n                    template=\"{{ json record.fields.tool_calls }}\",\n                    description=\"The tool calls made by the assistant.\",\n                    required=False,\n                    _client=client,\n                ),\n                rg.CustomField(\n                    name=\"function_call\",\n                    template=\"{{ json record.fields.function_call }}\",\n                    description=\"The function call made by the assistant.\",\n                    required=False,\n                    _client=client,\n                ),\n                rg.CustomField(\n                    name=\"properties\",\n                    template=\"{{ json record.fields.properties }}\",\n                    description=\"The properties associated with the response.\",\n                    required=False,\n                    _client=client,\n       
         ),\n                rg.CustomField(\n                    name=\"raw_response\",\n                    template=\"{{ json record.fields.raw_response }}\",\n                    description=\"The raw response from the API.\",\n                    required=False,\n                    _client=client,\n                ),\n            ],\n            questions=[\n                rg.RatingQuestion(\n                    name=\"rating\",\n                    description=\"How would you rate the response? 1 being the worst and 5 being the best.\",\n                    values=[1, 2, 3, 4, 5],\n                ),\n                rg.TextQuestion(\n                    name=\"improved_response\",\n                    description=\"If you would like to improve the response, please provide a better response here.\",\n                    required=False,\n                ),\n                rg.TextQuestion(\n                    name=\"context\",\n                    description=\"If you would like to provide more context for the response or rating, please provide it here.\",\n                    required=False,\n                ),\n            ],\n            metadata=[\n                rg.IntegerMetadataProperty(name=\"completion_tokens\", client=client),\n                rg.IntegerMetadataProperty(name=\"prompt_tokens\", client=client),\n                rg.IntegerMetadataProperty(name=\"total_tokens\", client=client),\n                rg.TermsMetadataProperty(name=\"model\", client=client),\n                rg.TermsMetadataProperty(name=\"finish_reason\", client=client),\n                rg.TermsMetadataProperty(name=\"tags\", client=client),\n            ],\n        )\n\n    @property\n    def table_name(self):\n        return f\"{self.client_name}_records\"\n\n    @property\n    def json_fields(self):\n        return [\n            \"tool_calls\",\n            \"function_call\",\n            \"tags\",\n            \"properties\",\n            \"raw_response\",\n           
 \"arguments\",\n        ]\n\n    @property\n    def image_fields(self):\n        return []\n\n    @property\n    def text_fields(self):\n        return []\n\n\nclass ChatCompletionObserver:\n    \"\"\"\n    Observer that provides an interface for tracking chat completions.\n    Args:\n        client (`Any`):\n            The client to use for the chat completions.\n        create (`Callable[..., Any]`):\n            The function to use to create the chat completions, e.g. `chat.completions.create` for the OpenAI client.\n        format_input (`Callable[[Dict[str, Any], Any], Any]`):\n            The function to use to format the input messages.\n        parse_response (`Callable[[Any], Dict[str, Any]]`):\n            The function to use to parse the response.\n        store (`Union[\"DuckDBStore\", DatasetsStore]`, *optional*):\n            The store to use to save the records.\n        tags (`List[str]`, *optional*):\n            The tags to associate with records.\n        properties (`Dict[str, Any]`, *optional*):\n            The properties to associate with records.\n        logging_rate (`float`, *optional*):\n            The logging rate to use for logging, defaults to 1\n    \"\"\"\n\n    def __init__(\n        self,\n        client: Any,\n        create: Callable[..., Any],\n        format_input: Callable[[Dict[str, Any], Any], Any],\n        parse_response: Callable[[Any], Dict[str, Any]],\n        store: Optional[Union[\"DuckDBStore\", DatasetsStore]] = None,\n        tags: Optional[List[str]] = None,\n        properties: Optional[Dict[str, Any]] = None,\n        logging_rate: Optional[float] = 1,\n        **kwargs: Any,\n    ):\n        self.client = client\n        self.create_fn = create\n        self.format_input = format_input\n        self.parse_response = parse_response\n        self.store = store or DatasetsStore.connect()\n        self.tags = tags or []\n        self.properties = properties or {}\n        self.kwargs = kwargs\n        
self.logging_rate = logging_rate\n\n    @property\n    def chat(self) -> Self:\n        return self\n\n    @property\n    def completions(self) -> Self:\n        return self\n\n    def _log_record(\n        self, response, error=None, model=None, messages=None, arguments=None\n    ):\n        record = self.parse_response(\n            response,\n            error=error,\n            model=model,\n            messages=messages,\n            tags=self.tags,\n            properties=self.properties,\n            arguments=arguments,\n        )\n        if random.random() < self.logging_rate:\n            self.store.add(record)\n        return record\n\n    def create(\n        self,\n        messages: List[Dict[str, Any]],\n        **kwargs: Any,\n    ) -> Any:\n        \"\"\"Creates a completion.\n\n        Args:\n            messages (`List[Dict[str, Any]]`):\n                The messages to send to the assistant.\n            **kwargs:\n                Additional arguments passed to the create function. 
If stream=True is passed,\n                the function will return a generator yielding streamed responses.\n\n        Returns:\n            Any:\n                The response from the assistant, or a generator if streaming.\n        \"\"\"\n        response = None\n        kwargs = self.handle_kwargs(kwargs)\n        excluded_args = {\"model\", \"messages\", \"tags\", \"properties\"}\n        arguments = {k: v for k, v in kwargs.items() if k not in excluded_args}\n        model = kwargs.get(\"model\")\n        input_data = self.format_input(messages, **kwargs)\n\n        if kwargs.get(\"stream\", False):\n\n            def stream_responses():\n                response_buffer = []\n                try:\n                    for chunk in self.create_fn(**input_data):\n                        yield chunk\n                        response_buffer.append(chunk)\n                    self._log_record(\n                        response_buffer,\n                        model=model,\n                        messages=messages,\n                        arguments=arguments,\n                    )\n                except Exception as e:\n                    self._log_record(\n                        response_buffer,\n                        error=e,\n                        model=model,\n                        messages=messages,\n                        arguments=arguments,\n                    )\n                    raise\n\n            return stream_responses()\n\n        try:\n            response = self.create_fn(**input_data)\n            self._log_record(\n                response, model=model, messages=messages, arguments=arguments\n            )\n            return response\n        except Exception as e:\n            self._log_record(\n                response, error=e, model=model, messages=messages, arguments=arguments\n            )\n            raise\n\n    def handle_kwargs(self, kwargs: dict[str, Any]) -> dict[str, Any]:\n        \"\"\"\n        Handle and 
 process keyword arguments for the API call.\n\n        This method merges the provided kwargs with the default kwargs stored in the instance.\n        It ensures that any kwargs passed to the method call take precedence over the default ones.\n        \"\"\"\n        return {**self.kwargs, **kwargs}\n\n    def __getattr__(self, attr: str) -> Any:\n        if attr not in {\"create\", \"chat\", \"messages\"}:\n            return getattr(self.client, attr)\n        # Intercepted names resolve to the observer itself so that calls like\n        # observer.messages.create(...) are tracked; using getattr(self, attr)\n        # here would recurse indefinitely for names the observer does not define.\n        return self\n\n\nclass AsyncChatCompletionObserver(ChatCompletionObserver):\n    \"\"\"\n    Async observer that provides an interface for tracking chat completions.\n    Args:\n        client (`Any`):\n            The async client to use for the chat completions.\n        create (`Callable[..., Awaitable[Any]]`):\n            The async function to use to create the chat completions.\n        format_input (`Callable[[Dict[str, Any], Any], Any]`):\n            The function to use to format the input messages.\n        parse_response (`Callable[[Any], Dict[str, Any]]`):\n            The function to use to parse the response.\n        store (`Union[\"DuckDBStore\", DatasetsStore]`, *optional*):\n            The store to use to save the records.\n        tags (`List[str]`, *optional*):\n            The tags to include in the records.\n        properties (`Dict[str, Any]`, *optional*):\n            The properties to include in the records.\n        logging_rate (`float`, *optional*):\n            The logging rate to use for logging, defaults to 1\n    \"\"\"\n\n    async def _log_record_async(\n        self, response, error=None, model=None, messages=None, arguments=None\n    ):\n        record = self.parse_response(\n            response,\n            error=error,\n            model=model,\n            messages=messages,\n            tags=self.tags,\n            properties=self.properties,\n            arguments=arguments,\n        )\n        if random.random() < self.logging_rate:\n            
await self.store.add_async(record)\n        return record\n\n    async def create(\n        self,\n        messages: List[Dict[str, Any]],\n        **kwargs: Any,\n    ) -> Any:\n        \"\"\"Create an async completion.\n\n        Args:\n            messages (`List[Dict[str, Any]]`):\n                The messages to send to the assistant.\n            **kwargs:\n                Additional arguments passed to the create function. If stream=True is passed,\n                the function will return an async generator yielding streamed responses.\n        Returns:\n            Any:\n                The response from the assistant, or an async generator if streaming.\n        \"\"\"\n        response = None\n        kwargs = self.handle_kwargs(kwargs)\n        excluded_args = {\"model\", \"messages\", \"tags\", \"properties\"}\n        arguments = {k: v for k, v in kwargs.items() if k not in excluded_args}\n        model = kwargs.get(\"model\")\n        input_data = self.format_input(messages, **kwargs)\n\n        if kwargs.get(\"stream\", False):\n\n            async def stream_responses():\n                response_buffer = []\n                try:\n                    async for chunk in await self.create_fn(**input_data):\n                        yield chunk\n                        response_buffer.append(chunk)\n                    await self._log_record_async(\n                        response_buffer,\n                        model=model,\n                        messages=messages,\n                        arguments=arguments,\n                    )\n                except Exception as e:\n                    await self._log_record_async(\n                        response_buffer,\n                        error=e,\n                        model=model,\n                        messages=messages,\n                        arguments=arguments,\n                    )\n                    raise\n\n            return stream_responses()\n\n        try:\n            response = await self.create_fn(**input_data)\n            await self._log_record_async(\n                response, model=model, messages=messages, arguments=arguments\n            )\n            return response\n        except Exception as e:\n            await 
self._log_record_async(\n                response, error=e, model=model, messages=messages, arguments=arguments\n            )\n            raise\n\n    async def __aenter__(self) -> \"AsyncChatCompletionObserver\":\n        return self\n\n    async def __aexit__(self, exc_type: Any, exc_val: Any, exc_tb: Any) -> None:\n        await self.store.close_async()\n"
  },
  {
    "path": "src/observers/models/hf_client.py",
    "content": "import uuid\nfrom dataclasses import asdict\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional, Union\n\nfrom huggingface_hub import AsyncInferenceClient, InferenceClient\n\nfrom observers.models.base import (\n    AsyncChatCompletionObserver,\n    ChatCompletionObserver,\n    ChatCompletionRecord,\n)\n\nif TYPE_CHECKING:\n    from huggingface_hub import (\n        ChatCompletionOutput,\n        ChatCompletionStreamOutput,\n    )\n\n    from observers.stores.datasets import DatasetsStore\n    from observers.stores.duckdb import DuckDBStore\n\n\nclass HFRecord(ChatCompletionRecord):\n    client_name: str = \"hf_client\"\n\n    @classmethod\n    def from_response(\n        cls,\n        response: Union[\n            None,\n            List[\"ChatCompletionStreamOutput\"],\n            \"ChatCompletionOutput\",\n        ] = None,\n        error=None,\n        **kwargs,\n    ) -> \"HFRecord\":\n        \"\"\"Create a response record from an API response or error\n\n        Args:\n            response: The response from the API.\n            error: The error from the API.\n            **kwargs: Additional arguments passed to the observer.\n        \"\"\"\n        if not response:\n            return cls(finish_reason=\"error\", error=str(error), **kwargs)\n\n        # Handle streaming responses\n        if isinstance(response, list):\n            first_dump = asdict(response[0])\n            last_dump = asdict(response[-1])\n            id = first_dump.get(\"id\") or str(uuid.uuid4())\n\n            choices = last_dump.get(\"choices\", [{}])[0]\n            delta = choices.get(\"delta\", {})\n\n            content = \"\"\n            total_tokens = prompt_tokens = completion_tokens = 0\n            raw_response = {}\n\n            for i, r in enumerate(response):\n                r_dump = asdict(r)\n                raw_response[i] = r_dump\n                # usage may be None on intermediate stream chunks\n                usage = r_dump.get(\"usage\") or {}\n                total_tokens += 
usage.get(\"total_tokens\", 0)\n                prompt_tokens += usage.get(\"prompt_tokens\", 0)\n                completion_tokens += usage.get(\"completion_tokens\", 0)\n                content += (\n                    r_dump.get(\"choices\", [{}])[0].get(\"delta\", {}).get(\"content\") or \"\"\n                )\n\n            return cls(\n                id=id,\n                completion_tokens=completion_tokens,\n                prompt_tokens=prompt_tokens,\n                total_tokens=total_tokens,\n                assistant_message=content,\n                finish_reason=choices.get(\"finish_reason\"),\n                tool_calls=delta.get(\"tool_calls\"),\n                function_call=delta.get(\"function_call\"),\n                raw_response=raw_response,\n                **kwargs,\n            )\n\n        # Handle non-streaming responses\n        response_dump = asdict(response)\n        choices = response_dump.get(\"choices\", [{}])[0].get(\"message\", {})\n        usage = response_dump.get(\"usage\", {})\n\n        return cls(\n            id=response_dump.get(\"id\") or str(uuid.uuid4()),\n            completion_tokens=usage.get(\"completion_tokens\"),\n            prompt_tokens=usage.get(\"prompt_tokens\"),\n            total_tokens=usage.get(\"total_tokens\"),\n            assistant_message=choices.get(\"content\"),\n            finish_reason=response_dump.get(\"choices\", [{}])[0].get(\"finish_reason\"),\n            tool_calls=choices.get(\"tool_calls\"),\n            function_call=choices.get(\"function_call\"),\n            raw_response=response_dump,\n            **kwargs,\n        )\n\n\ndef wrap_hf_client(\n    client: Union[\"InferenceClient\", \"AsyncInferenceClient\"],\n    store: Optional[Union[\"DuckDBStore\", \"DatasetsStore\"]] = None,\n    tags: Optional[List[str]] = None,\n    properties: Optional[Dict[str, Any]] = None,\n    logging_rate: Optional[float] = 1,\n) -> Union[\"AsyncChatCompletionObserver\", 
\"ChatCompletionObserver\"]:\n    \"\"\"\n    Wraps Hugging Face's Inference Client in an observer.\n\n    Args:\n        client (`Union[InferenceClient, AsyncInferenceClient]`):\n            The HF Inference Client to wrap.\n        store (`Union[DuckDBStore, DatasetsStore]`, *optional*):\n            The store to use to save the records.\n        tags (`List[str]`, *optional*):\n            The tags to associate with records.\n        properties (`Dict[str, Any]`, *optional*):\n            The properties to associate with records.\n        logging_rate (`float`, *optional*):\n            The logging rate to use for logging, defaults to 1\n\n    Returns:\n        `Union[AsyncChatCompletionObserver, ChatCompletionObserver]`:\n            The observer that wraps the HF Inference Client.\n    \"\"\"\n    observer_args = {\n        \"client\": client,\n        \"create\": client.chat.completions.create,\n        \"format_input\": lambda inputs, **kwargs: {\"messages\": inputs, **kwargs},\n        \"parse_response\": HFRecord.from_response,\n        \"store\": store,\n        \"tags\": tags,\n        \"properties\": properties,\n        \"logging_rate\": logging_rate,\n    }\n    if isinstance(client, AsyncInferenceClient):\n        return AsyncChatCompletionObserver(**observer_args)\n    return ChatCompletionObserver(**observer_args)\n"
  },
  {
    "path": "src/observers/models/litellm.py",
    "content": "from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union\n\nfrom observers.models.base import (\n    AsyncChatCompletionObserver,\n    ChatCompletionObserver,\n)\nfrom observers.models.openai import OpenAIRecord\n\nif TYPE_CHECKING:\n    from litellm import acompletion, completion\n\n    from observers.stores.argilla import ArgillaStore\n    from observers.stores.datasets import DatasetsStore\n    from observers.stores.duckdb import DuckDBStore\n\n\nclass LitellmRecord(OpenAIRecord):\n    client_name: str = \"litellm\"\n\n\ndef wrap_litellm(\n    client: Union[\"completion\", \"acompletion\"],\n    store: Optional[Union[\"DatasetsStore\", \"DuckDBStore\", \"ArgillaStore\"]] = None,\n    tags: Optional[List[str]] = None,\n    properties: Optional[Dict[str, Any]] = None,\n    logging_rate: Optional[float] = 1,\n) -> Union[AsyncChatCompletionObserver, ChatCompletionObserver]:\n    \"\"\"\n    Wrap Litellm completion function to track API calls in a Store.\n\n    Args:\n        client (`Union[completion, acompletion]`):\n            The litellm `completion` or `acompletion` function to wrap.\n        store (`Union[DatasetsStore, DuckDBStore, ArgillaStore]`, *optional*):\n            The store to use to save the records.\n        tags (`List[str]`, *optional*):\n            The tags to associate with records.\n        properties (`Dict[str, Any]`, *optional*):\n            The properties to associate with records.\n        logging_rate (`float`, *optional*):\n            The logging rate to use for logging, defaults to 1\n\n    Returns:\n        `Union[AsyncChatCompletionObserver, ChatCompletionObserver]`:\n            The observer that wraps the Litellm completion function.\n    \"\"\"\n    observer_args = {\n        \"client\": client,\n        \"create\": client,\n        \"format_input\": lambda inputs, **kwargs: {\"messages\": inputs, **kwargs},\n        \"parse_response\": LitellmRecord.from_response,\n        \"store\": store,\n        \"tags\": tags,\n        
\"properties\": properties,\n        \"logging_rate\": logging_rate,\n    }\n    if client.__name__ == \"acompletion\":\n        return AsyncChatCompletionObserver(**observer_args)\n\n    return ChatCompletionObserver(**observer_args)\n"
  },
  {
    "path": "src/observers/models/openai.py",
    "content": "import uuid\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional, Union\nfrom observers.stores.duckdb import DuckDBStore\nfrom openai import AsyncOpenAI, OpenAI\nfrom typing_extensions import Self\n\nfrom observers.models.base import (\n    AsyncChatCompletionObserver,\n    ChatCompletionObserver,\n    ChatCompletionRecord,\n)\n\nif TYPE_CHECKING:\n    from openai.types.chat import ChatCompletion, ChatCompletionChunk\n\n    from observers.stores.datasets import DatasetsStore\n\n\nclass OpenAIRecord(ChatCompletionRecord):\n    client_name: str = \"openai\"\n\n    @classmethod\n    def from_response(\n        cls,\n        response: Union[List[\"ChatCompletionChunk\"], \"ChatCompletion\"] = None,\n        error=None,\n        messages=None,\n        **kwargs,\n    ) -> Self:\n        \"\"\"Create a response record from an API response or error\"\"\"\n        if not response:\n            return cls(\n                finish_reason=\"error\", error=str(error), messages=messages, **kwargs\n            )\n\n        # Handle streaming responses\n        if isinstance(response, list):\n            first_dump = response[0].model_dump()\n            last_dump = response[-1].model_dump()\n            content = \"\"\n\n            completion_tokens = prompt_tokens = total_tokens = 0\n\n            choices = last_dump.get(\"choices\", [{}])[0]\n            delta = choices.get(\"delta\", {})\n\n            raw_response = {}\n            for i, r in enumerate(response):\n                r_dump = r.model_dump()\n                raw_response[i] = r_dump\n                content += (\n                    r_dump.get(\"choices\", [{}])[0].get(\"delta\", {}).get(\"content\") or \"\"\n                )\n                usage = r_dump.get(\"usage\", {}) or {}\n                completion_tokens += usage.get(\"completion_tokens\", 0)\n                prompt_tokens += usage.get(\"prompt_tokens\", 0)\n                total_tokens += usage.get(\"total_tokens\", 0)\n\n  
          return cls(\n                id=first_dump.get(\"id\") or str(uuid.uuid4()),\n                messages=messages,\n                completion_tokens=completion_tokens,\n                prompt_tokens=prompt_tokens,\n                total_tokens=total_tokens,\n                assistant_message=content,\n                finish_reason=choices.get(\"finish_reason\"),\n                tool_calls=delta.get(\"tool_calls\"),\n                function_call=delta.get(\"function_call\"),\n                raw_response=raw_response,\n                **kwargs,\n            )\n\n        # Handle non-streaming responses\n        response_dump = response.model_dump()\n        choices = response_dump.get(\"choices\", [{}])[0].get(\"message\", {})\n        usage = response_dump.get(\"usage\", {}) or {}\n        return cls(\n            id=response.id or str(uuid.uuid4()),\n            messages=messages,\n            completion_tokens=usage.get(\"completion_tokens\"),\n            prompt_tokens=usage.get(\"prompt_tokens\"),\n            total_tokens=usage.get(\"total_tokens\"),\n            assistant_message=choices.get(\"content\"),\n            finish_reason=response_dump.get(\"choices\", [{}])[0].get(\"finish_reason\"),\n            tool_calls=choices.get(\"tool_calls\"),\n            function_call=choices.get(\"function_call\"),\n            raw_response=response_dump,\n            **kwargs,\n        )\n\n\ndef wrap_openai(\n    client: Union[\"OpenAI\", \"AsyncOpenAI\"],\n    store: Optional[Union[\"DuckDBStore\", \"DatasetsStore\"]] = None,\n    tags: Optional[List[str]] = None,\n    properties: Optional[Dict[str, Any]] = None,\n    logging_rate: Optional[float] = 1,\n) -> Union[ChatCompletionObserver, AsyncChatCompletionObserver]:\n    \"\"\"\n    Wraps an OpenAI client in an observer.\n\n    Args:\n        client (`Union[OpenAI, AsyncOpenAI]`):\n            The OpenAI client to wrap.\n        store (`Union[DuckDBStore, DatasetsStore]`, *optional*):\n            The store to use to save the records. Defaults to a local `DuckDBStore`.\n        tags (`List[str]`, *optional*):\n            The tags to associate with records.\n        properties (`Dict[str, Any]`, *optional*):\n            The properties to associate with records.\n        logging_rate (`float`, *optional*):\n            The logging rate to use for logging, defaults to 1\n\n    Returns:\n        `Union[ChatCompletionObserver, AsyncChatCompletionObserver]`:\n            The observer that wraps the OpenAI client.\n    \"\"\"\n    observer_args = {\n        \"client\": client,\n        \"create\": client.chat.completions.create,\n        \"format_input\": lambda messages, **kwargs: kwargs | {\"messages\": messages},\n        \"parse_response\": OpenAIRecord.from_response,\n        # Instantiate the default store per call rather than at import time\n        \"store\": store or DuckDBStore(),\n        \"tags\": tags,\n        \"properties\": properties,\n        \"logging_rate\": logging_rate,\n    }\n    if isinstance(client, AsyncOpenAI):\n        return AsyncChatCompletionObserver(**observer_args)\n    return ChatCompletionObserver(**observer_args)\n"
  },
  {
    "path": "src/observers/models/transformers.py",
    "content": "import uuid\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional, Union\n\nfrom observers.models.base import (\n    ChatCompletionObserver,\n    ChatCompletionRecord,\n)\n\nif TYPE_CHECKING:\n    from transformers import TextGenerationPipeline\n\n    from observers.stores.datasets import DatasetsStore\n    from observers.stores.duckdb import DuckDBStore\n\n\nclass TransformersRecord(ChatCompletionRecord):\n    \"\"\"\n    Data class for storing transformer records.\n    \"\"\"\n\n    client_name: str = \"transformers\"\n\n    @classmethod\n    def from_response(\n        cls,\n        response: Dict[str, Any] = None,\n        error: Exception = None,\n        model: Optional[str] = None,\n        **kwargs,\n    ) -> \"TransformersRecord\":\n        if not response:\n            return cls(finish_reason=\"error\", error=str(error), **kwargs)\n        generated_text = response[0][\"generated_text\"][-1]\n        return cls(\n            id=str(uuid.uuid4()),\n            assistant_message=generated_text.get(\"content\"),\n            tool_calls=generated_text.get(\"tool_calls\"),\n            raw_response=response,\n            **kwargs,\n        )\n\n\ndef wrap_transformers(\n    client: \"TextGenerationPipeline\",\n    store: Optional[Union[\"DuckDBStore\", \"DatasetsStore\"]] = None,\n    tags: Optional[List[str]] = None,\n    properties: Optional[Dict[str, Any]] = None,\n    logging_rate: Optional[float] = 1,\n) -> ChatCompletionObserver:\n    \"\"\"\n    Wraps a transformers client in an observer.\n\n    Args:\n        client (`transformers.TextGenerationPipeline`):\n            The transformers pipeline to wrap.\n        store (`Union[DuckDBStore, DatasetsStore]`, *optional*):\n            The store to use to save the records.\n        tags (`List[str]`, *optional*):\n            The tags to associate with records.\n        properties (`Dict[str, Any]`, *optional*):\n            The properties to associate with records.\n        
logging_rate (`float`, *optional*):\n            The logging rate to use for logging, defaults to 1\n\n    Returns:\n        `ChatCompletionObserver`:\n            The observer that wraps the transformers pipeline.\n    \"\"\"\n    return ChatCompletionObserver(\n        client=client,\n        create=client.__call__,\n        format_input=lambda inputs, **kwargs: {\"text_inputs\": inputs, **kwargs},\n        parse_response=TransformersRecord.from_response,\n        store=store,\n        tags=tags,\n        properties=properties,\n        logging_rate=logging_rate,\n    )\n"
  },
  {
    "path": "src/observers/stores/__init__.py",
    "content": "from observers.stores.argilla import ArgillaStore\nfrom observers.stores.datasets import DatasetsStore\nfrom observers.stores.duckdb import DuckDBStore\n\n__all__ = [\"ArgillaStore\", \"DatasetsStore\", \"DuckDBStore\"]\n"
  },
  {
    "path": "src/observers/stores/argilla.py",
    "content": "import uuid\nfrom dataclasses import asdict, dataclass, field\nfrom typing import TYPE_CHECKING, List, Optional, Union\n\nimport argilla as rg\nfrom argilla import (\n    Argilla,\n    LabelQuestion,\n    MultiLabelQuestion,\n    RankingQuestion,\n    RatingQuestion,\n    SpanQuestion,\n    TextQuestion,\n)\n\nfrom observers.stores.base import Store\n\n\nif TYPE_CHECKING:\n    from observers.base import Record\n\n\n@dataclass\nclass ArgillaStore(Store):\n    \"\"\"\n    Argilla store\n    \"\"\"\n\n    api_url: Optional[str] = field(default=None)\n    api_key: Optional[str] = field(default=None)\n    dataset_name: Optional[str] = field(default=None)\n    workspace_name: Optional[str] = field(default=None)\n    questions: Optional[\n        List[\n            Union[\n                TextQuestion,\n                LabelQuestion,\n                SpanQuestion,\n                RatingQuestion,\n                RankingQuestion,\n                MultiLabelQuestion,\n            ]\n        ]\n    ] = field(default=None)\n\n    _dataset: Optional[rg.Dataset] = None\n    _dataset_keys: Optional[List[str]] = None\n    _client: Optional[Argilla] = None\n\n    def __post_init__(self) -> None:\n        \"\"\"Initialize the store\"\"\"\n        self._client = Argilla(api_url=self.api_url, api_key=self.api_key)\n\n    def _init_table(self, record: \"Record\") -> None:\n        dataset_name = (\n            self.dataset_name or f\"{record.table_name}_{uuid.uuid4().hex[:8]}\"\n        )\n        workspace_name = self.workspace_name or self._client.me.username\n        workspace = self._client.workspaces(name=workspace_name)\n        if not workspace:\n            workspace = self._client.workspaces.add(rg.Workspace(name=workspace_name))\n        dataset = self._client.datasets(name=dataset_name, workspace=workspace_name)\n\n        if not dataset:\n            settings = record.argilla_settings(self._client)\n            if self.questions:\n                
settings.questions = self.questions\n            dataset = rg.Dataset(\n                name=dataset_name,\n                workspace=workspace_name,\n                settings=settings,\n                client=self._client,\n            ).create()\n        elif self.questions:\n            raise ValueError(\n                \"Custom questions are not supported for existing datasets.\"\n            )\n        self._dataset = dataset\n        dataset_keys = (\n            [field.name for field in dataset.settings.fields]\n            + [question.name for question in dataset.settings.questions]\n            + [term.name for term in dataset.settings.metadata]\n            + [vector.name for vector in dataset.settings.vectors]\n        )\n        self._dataset_keys = dataset_keys\n\n    @classmethod\n    def connect(\n        cls,\n        api_url: Optional[str] = None,\n        api_key: Optional[str] = None,\n        dataset_name: Optional[str] = None,\n        workspace_name: Optional[str] = None,\n    ) -> \"ArgillaStore\":\n        \"\"\"Create a new store instance with custom settings\"\"\"\n        return cls(\n            api_url=api_url,\n            api_key=api_key,\n            dataset_name=dataset_name,\n            workspace_name=workspace_name,\n        )\n\n    def add(self, record: \"Record\") -> None:\n        \"\"\"Add a new record to the database\"\"\"\n        if not self._dataset:\n            self._init_table(record)\n\n        record_dict = asdict(record)\n\n        for text_field in record.text_fields:\n            if text_field in record_dict:\n                record_dict[f\"{text_field}_length\"] = len(record_dict[text_field])\n\n        record_dict = {k: v for k, v in record_dict.items() if k in self._dataset_keys}\n        self._dataset.records.log([record_dict])\n\n    async def add_async(self, record: \"Record\"):\n        \"\"\"\n        Add a new record to the database asynchronously\n\n        Args:\n            record (`Record`):\n       
         The record to add to the database.\n        \"\"\"\n        if not self._dataset:\n            self._init_table(record)\n\n        record_dict = asdict(record)\n\n        for text_field in record.text_fields:\n            if text_field in record_dict:\n                record_dict[f\"{text_field}_length\"] = len(record_dict[text_field])\n\n        record_dict = {k: v for k, v in record_dict.items() if k in self._dataset_keys}\n        # Use argilla's native async API\n        await self._dataset.records.log(\n            [record_dict],\n            background=True,\n            verbose=False,\n        )\n"
  },
  {
    "path": "src/observers/stores/base.py",
    "content": "from abc import ABC, abstractmethod\nfrom dataclasses import dataclass\nfrom typing import TYPE_CHECKING\n\n\nif TYPE_CHECKING:\n    from observers.base import Record\n\n\n@dataclass\nclass Store(ABC):\n    \"\"\"\n    Base class for storing records\n    \"\"\"\n\n    @abstractmethod\n    def add(self, record: \"Record\"):\n        \"\"\"Add a new record to the store\"\"\"\n        pass\n\n    @abstractmethod\n    async def add_async(self, record: \"Record\"):\n        \"\"\"Add a new record to the store asynchronously\"\"\"\n        pass\n\n    @abstractmethod\n    def connect(self):\n        \"\"\"Connect to the store\"\"\"\n        pass\n\n    @abstractmethod\n    def _init_table(self, record: \"Record\"):\n        \"\"\"Initialize the table\"\"\"\n        pass\n"
  },
  {
    "path": "src/observers/stores/datasets.py",
    "content": "import asyncio\nimport atexit\nimport base64\nimport hashlib\nimport json\nimport os\nimport tempfile\nimport uuid\nfrom dataclasses import asdict, dataclass, field\nfrom io import BytesIO\nfrom typing import TYPE_CHECKING, List, Optional\n\nfrom datasets.utils.logging import disable_progress_bar\nfrom huggingface_hub import CommitScheduler, login, metadata_update, whoami\nfrom PIL import Image\n\nfrom observers.stores.base import Store\n\nif TYPE_CHECKING:\n    from observers.base import Record\n\n\ndisable_progress_bar()\n\n\n@dataclass\nclass DatasetsStore(Store):\n    \"\"\"\n    Datasets store\n    \"\"\"\n\n    org_name: Optional[str] = field(default=None)\n    repo_name: Optional[str] = field(default=None)\n    folder_path: Optional[str] = field(default=None)\n    every: Optional[int] = field(default=5)\n    path_in_repo: Optional[str] = field(default=None)\n    revision: Optional[str] = field(default=None)\n    private: Optional[bool] = field(default=None)\n    token: Optional[str] = field(default=None)\n    allow_patterns: Optional[List[str]] = field(default=None)\n    ignore_patterns: Optional[List[str]] = field(default=None)\n    squash_history: Optional[bool] = field(default=None)\n\n    _filename: Optional[str] = field(default=None)\n    _scheduler: Optional[CommitScheduler] = None\n    _temp_dir: Optional[str] = field(default=None, init=False)\n\n    def __post_init__(self):\n        \"\"\"Initialize the store and create temporary directory\"\"\"\n        if self.ignore_patterns is None:\n            self.ignore_patterns = [\"*.json\"]\n\n        try:\n            whoami(token=self.token or os.getenv(\"HF_TOKEN\"))\n        except Exception:\n            login()\n\n        if self.folder_path is None:\n            self._temp_dir = tempfile.mkdtemp(prefix=\"observers_dataset_\")\n            self.folder_path = self._temp_dir\n            atexit.register(self._cleanup)\n        else:\n            os.makedirs(self.folder_path, 
exist_ok=True)\n\n    def _cleanup(self):\n        \"\"\"Clean up temporary directory on exit\"\"\"\n        if self._temp_dir and os.path.exists(self._temp_dir):\n            import shutil\n\n            shutil.rmtree(self._temp_dir)\n\n    def _init_table(self, record: \"Record\"):\n        import logging\n\n        logging.getLogger(\"huggingface_hub\").setLevel(logging.ERROR)\n\n        repo_name = self.repo_name or f\"{record.table_name}_{uuid.uuid4().hex[:8]}\"\n        org_name = self.org_name or whoami(token=self.token).get(\"name\")\n        repo_id = f\"{org_name}/{repo_name}\"\n        self._filename = f\"{record.table_name}_{uuid.uuid4()}.json\"\n        self._scheduler = CommitScheduler(\n            repo_id=repo_id,\n            folder_path=self.folder_path,\n            every=self.every,\n            path_in_repo=self.path_in_repo,\n            repo_type=\"dataset\",\n            revision=self.revision,\n            private=self.private,\n            token=self.token,\n            allow_patterns=self.allow_patterns,\n            ignore_patterns=self.ignore_patterns,\n            squash_history=self.squash_history,\n        )\n        self._scheduler.private = self.private\n        metadata_update(\n            repo_id=repo_id,\n            metadata={\"tags\": [\"observers\", record.table_name.split(\"_\")[0]]},\n            repo_type=\"dataset\",\n            token=self.token,\n            overwrite=True,\n        )\n\n    @classmethod\n    def connect(\n        cls,\n        org_name: Optional[str] = None,\n        repo_name: Optional[str] = None,\n        folder_path: Optional[str] = None,\n        every: Optional[int] = 5,\n        path_in_repo: Optional[str] = None,\n        revision: Optional[str] = None,\n        private: Optional[bool] = None,\n        token: Optional[str] = None,\n        allow_patterns: Optional[List[str]] = None,\n        ignore_patterns: Optional[List[str]] = None,\n        squash_history: Optional[bool] = None,\n    ) -> 
\"DatasetsStore\":\n        \"\"\"Create a new store instance with optional custom path\"\"\"\n        return cls(\n            org_name=org_name,\n            repo_name=repo_name,\n            folder_path=folder_path,\n            every=every,\n            path_in_repo=path_in_repo,\n            revision=revision,\n            private=private,\n            token=token,\n            allow_patterns=allow_patterns,\n            ignore_patterns=ignore_patterns,\n            squash_history=squash_history,\n        )\n\n    def add(self, record: \"Record\"):\n        \"\"\"Add a new record to the database\"\"\"\n        if not self._scheduler:\n            self._init_table(record)\n\n        with self._scheduler.lock:\n            with (self._scheduler.folder_path / self._filename).open(\"a\") as f:\n                record_dict = asdict(record)\n\n                # Handle JSON fields\n                for json_field in record.json_fields:\n                    if record_dict[json_field]:\n                        record_dict[json_field] = json.dumps(record_dict[json_field])\n\n                # Handle image fields\n                for image_field in record.image_fields:\n                    if record_dict[image_field]:\n                        image_folder = self._scheduler.folder_path / \"images\"\n                        image_folder.mkdir(exist_ok=True)\n\n                        # Generate unique filename based on record content\n                        filtered_dict = {\n                            k: v\n                            for k, v in sorted(record_dict.items())\n                            if k not in [\"uri\", image_field, \"id\"]\n                        }\n                        content_hash = hashlib.sha256(\n                            json.dumps(obj=filtered_dict, sort_keys=True).encode()\n                        ).hexdigest()\n                        image_path = image_folder / f\"{content_hash}.png\"\n\n                        # Save image and 
update record\n                        image_bytes = base64.b64decode(\n                            record_dict[image_field][\"bytes\"]\n                        )\n                        Image.open(BytesIO(image_bytes)).save(image_path)\n                        record_dict[image_field].update(\n                            {\"path\": str(image_path), \"bytes\": None}\n                        )\n\n                # Clean up empty dictionaries\n                record_dict = {\n                    k: None if v == {} else v for k, v in record_dict.items()\n                }\n                sorted_dict = {\n                    col: record_dict.get(col) for col in record.table_columns\n                }\n                f.write(json.dumps(sorted_dict) + \"\\n\")\n                f.flush()\n\n    async def add_async(self, record: \"Record\"):\n        \"\"\"Add a new record to the database asynchronously\"\"\"\n        await asyncio.to_thread(self.add, record)\n\n    async def close_async(self):\n        \"\"\"Close the dataset store asynchronously\"\"\"\n        if self._scheduler:\n            await asyncio.to_thread(self._scheduler.__exit__, None, None, None)\n            self._scheduler = None\n\n    def close(self):\n        \"\"\"Close the dataset store synchronously\"\"\"\n        if self._scheduler:\n            self._scheduler.__exit__(None, None, None)\n            self._scheduler = None\n"
  },
  {
    "path": "src/observers/stores/duckdb.py",
    "content": "import asyncio\nimport glob\nimport json\nimport os\nimport re\nfrom dataclasses import asdict, dataclass, field\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING, List, Optional\n\nimport duckdb\n\nfrom observers.stores.sql_base import SQLStore\n\nif TYPE_CHECKING:\n    from observers.base import Record\n\nDEFAULT_DB_NAME = \"store.db\"\n\n\n@dataclass\nclass DuckDBStore(SQLStore):\n    \"\"\"\n    DuckDB store\n    \"\"\"\n\n    path: str = field(\n        default_factory=lambda: os.path.join(os.getcwd(), DEFAULT_DB_NAME)\n    )\n    _tables: List[str] = field(default_factory=list)\n    _conn: Optional[duckdb.DuckDBPyConnection] = None\n\n    def __post_init__(self):\n        \"\"\"Initialize database connection and table\"\"\"\n        if self._conn is None:\n            self._conn = duckdb.connect(self.path)\n            self._tables = self._get_tables()\n            self._get_current_schema_version()\n            self._apply_pending_migrations()\n\n    @classmethod\n    def connect(cls, path: Optional[str] = None) -> \"DuckDBStore\":\n        \"\"\"Create a new store instance with optional custom path\"\"\"\n        if not path:\n            path = os.path.join(os.getcwd(), DEFAULT_DB_NAME)\n        return cls(path=path)\n\n    def _init_table(self, record: \"Record\") -> str:\n        self._conn.execute(record.duckdb_schema)\n        self._tables.append(record.table_name)\n\n    def _get_tables(self) -> List[str]:\n        \"\"\"Get all tables in the database\"\"\"\n        return [table[0] for table in self._conn.execute(\"SHOW TABLES\").fetchall()]\n\n    def add(self, record: \"Record\"):\n        \"\"\"Add a new record to the database\"\"\"\n        if record.table_name not in self._tables:\n            self._init_table(record)\n\n        record_dict = asdict(record)\n\n        for json_field in record.json_fields:\n            if record_dict[json_field]:\n                record_dict[json_field] = 
json.dumps(record_dict[json_field])\n\n        placeholders = \", \".join(\n            [\"$\" + str(i + 1) for i in range(len(record.table_columns))]\n        )\n\n        # Sort record_dict based on table_columns order\n        if hasattr(record, \"table_columns\"):\n            sorted_dict = {k: record_dict[k] for k in record.table_columns}\n            record_dict = sorted_dict\n\n        self._conn.execute(\n            f\"INSERT INTO {record.table_name} VALUES ({placeholders})\",\n            [\n                record_dict[k] if k in record_dict else None\n                for k in record.table_columns\n            ],\n        )\n\n    async def add_async(self, record: \"Record\"):\n        \"\"\"Add a new record to the database asynchronously\"\"\"\n        await asyncio.to_thread(self.add, record)\n\n    def close(self) -> None:\n        \"\"\"Close the database connection\"\"\"\n        if self._conn:\n            self._conn.close()\n            self._conn = None\n\n    def __enter__(self):\n        return self\n\n    def _migrate_schema(self, migration_script: str):\n        \"\"\"Apply a schema migration\"\"\"\n        self._conn.execute(migration_script)\n\n    def _get_current_schema_version(self) -> int:\n        \"\"\"Get the current schema version, creating the table if it doesn't exist\"\"\"\n        table_exists = self._conn.execute(\n            \"SELECT COUNT(*) FROM information_schema.tables WHERE table_name = 'schema_version'\"\n        ).fetchone()[0]\n\n        # create the schema_version table if it doesn't exist\n        if not table_exists:\n            self._conn.execute(\n                \"\"\"\n                CREATE TABLE schema_version (\n                    version INTEGER PRIMARY KEY,\n                    migration_name VARCHAR,\n                    applied_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP\n                )\n            \"\"\"\n            )\n            self._conn.execute(\n                \"INSERT INTO schema_version 
(version, migration_name) VALUES (0, 'initial')\"\n            )\n\n        # retrieve the current schema version\n        result = self._conn.execute(\n            \"SELECT version FROM schema_version ORDER BY version DESC LIMIT 1\"\n        ).fetchone()\n        return result[0] if result else 0\n\n    def __exit__(self, exc_type, exc_val, exc_tb):\n        self.close()\n\n    def _get_migrations_path(self) -> Path:\n        \"\"\"Get the path to migrations directory\"\"\"\n        return Path(__file__).parent / \"migrations\"\n\n    def _get_available_migrations(self) -> List[tuple[int, str]]:\n        \"\"\"Get all available migration files sorted by version\"\"\"\n        migrations_path = self._get_migrations_path()\n        migration_files = glob.glob(str(migrations_path / \"*.sql\"))\n\n        # extract version and path using regex\n        migrations = []\n        for file_path in migration_files:\n            # Match migration files in format: any_prefix_NUMBER_any_suffix.sql\n            # e.g., \"001_create_users.sql\" or \"v1_init.sql\" - extracts \"1\" as version\n            if match := re.match(r\".*?(\\d+)_.+\\.sql$\", file_path):\n                version = int(match.group(1))\n                migrations.append((version, file_path))\n\n        return sorted(migrations)\n\n    def _apply_pending_migrations(self):\n        \"\"\"Apply any pending migrations\"\"\"\n        current_version = self._get_current_schema_version()\n        available_migrations = self._get_available_migrations()\n\n        for version, migration_path in available_migrations:\n            if version > current_version:\n                with open(migration_path, \"r\") as f:\n                    migration_script = f.read()\n\n                migration_name = Path(\n                    migration_path\n                ).stem  # Gets filename without extension\n\n                self._conn.execute(\"BEGIN TRANSACTION\")\n                try:\n                    
self._migrate_schema(migration_script)\n                    self._conn.execute(\n                        \"INSERT INTO schema_version (version, migration_name) VALUES (?, ?)\",\n                        [version, migration_name],\n                    )\n                    self._conn.execute(\"COMMIT\")\n                except Exception as e:\n                    self._conn.execute(\"ROLLBACK\")\n                    raise Exception(f\"Migration {version} failed: {str(e)}\") from e\n\n    def _check_table_exists(self, table_name: str) -> bool:\n        \"\"\"Check if a table exists in the database\"\"\"\n        result = self._conn.execute(\n            \"SELECT COUNT(*) FROM information_schema.tables WHERE table_name = ?\",\n            [table_name],\n        ).fetchone()[0]\n        return bool(result)\n\n    def _create_version_table(self):\n        \"\"\"Create the schema version table\"\"\"\n        self._conn.execute(\n            \"\"\"\n            CREATE TABLE IF NOT EXISTS schema_version (\n                version INTEGER PRIMARY KEY,\n                migration_name VARCHAR,\n                applied_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP\n            )\n        \"\"\"\n        )\n\n    def _execute(self, query: str, params: Optional[List] = None):\n        \"\"\"Execute a SQL query\"\"\"\n        return self._conn.execute(query, params if params else [])\n"
  },
  {
    "path": "src/observers/stores/migrations/001_create_schema_version.sql",
    "content": "CREATE TABLE IF NOT EXISTS schema_version (\n    version INTEGER PRIMARY KEY,\n    applied_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,\n    migration_name VARCHAR,\n    checksum VARCHAR\n);\n\nCREATE TABLE IF NOT EXISTS openai_records (\n    id VARCHAR PRIMARY KEY,\n    model VARCHAR,\n    timestamp TIMESTAMP,\n    messages JSON,\n    assistant_message TEXT,\n    completion_tokens INTEGER,\n    prompt_tokens INTEGER,\n    total_tokens INTEGER,\n    finish_reason VARCHAR,\n    tool_calls JSON,\n    function_call JSON,\n    tags VARCHAR[],\n    properties JSON,\n    error VARCHAR,\n    raw_response JSON,\n    arguments JSON\n);\n\n-- Initialize with version 0 if table is empty\nINSERT INTO schema_version (version, migration_name) \nSELECT 0, 'initial' \nWHERE NOT EXISTS (SELECT 1 FROM schema_version);\n"
  },
  {
    "path": "src/observers/stores/migrations/002_add_arguments_field.sql",
    "content": "ALTER TABLE IF EXISTS openai_records \nADD COLUMN IF NOT EXISTS arguments JSON;\n\nALTER TABLE IF EXISTS openai_records \nDROP COLUMN IF EXISTS synced_at;\n"
  },
  {
    "path": "src/observers/stores/migrations/__init__.py",
    "content": ""
  },
  {
    "path": "src/observers/stores/opentelemetry.py",
    "content": "# stdlib features\nimport asyncio\nfrom dataclasses import dataclass\nfrom importlib.metadata import PackageNotFoundError, version\nfrom typing import Optional\n\n# Actual dependencies\nfrom opentelemetry import trace\nfrom opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.trace import Span, Tracer, TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor, SpanExporter\n\n# Observers internal interfaces\nfrom observers.base import Record\nfrom observers.stores.base import Store\n\n\ndef flatten_dict(d, prefix=\"\"):\n    \"\"\"Flatten a python dictionary, turning nested keys into dotted keys\"\"\"\n    flat = {}\n    for k, v in d.items():\n        if v:\n            key = f\"{prefix}.{k}\" if prefix else k\n            if isinstance(v, dict):\n                flat.update(flatten_dict(v, key))\n            else:\n                flat[key] = v\n    return flat\n\n\ndef get_version():\n    try:\n        return version(\"observers\")\n    except PackageNotFoundError:\n        return \"unknown\"\n\n\n@dataclass\nclass OpenTelemetryStore(Store):\n    \"\"\"\n    OpenTelemetry Store\n    \"\"\"\n\n    # These are here largely to ease future refactors/conform to\n    # the style of the other stores. They have defaults set in the constructor,\n    # but, set here as well.\n    tracer: Optional[Tracer] = None\n    root_span: Optional[Span] = None\n    exporter: Optional[SpanExporter] = None\n    namespace: str = \"observers.dev/observers\"\n\n    def __post_init__(self):\n        if not self.tracer:\n            provider = TracerProvider(\n                resource=Resource.create(\n                    {\n                        \"instrument.name\": self.namespace,\n                        \"instrument.version\": get_version(),\n                    }\n                ),\n            )\n            if not self.exporter:\n                provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))\n            else:\n                provider.add_span_processor(BatchSpanProcessor(self.exporter))\n            trace.set_tracer_provider(provider)\n            self.tracer = trace.get_tracer(self.namespace)\n        if not self.root_span:\n            # if we initialize a span here, then all subsequent 'add's can be\n            # added to a continuous trace\n            with self.tracer.start_as_current_span(f\"{self.namespace}.init\") as span:\n                span.set_attribute(\"connected\", True)\n                self.root_span = span\n\n    def add(self, record: Record):\n        \"\"\"Add a new record to the store\"\"\"\n        with trace.use_span(self.root_span):\n            with self.tracer.start_as_current_span(f\"{self.namespace}.add\") as span:\n                # Split out to be easily edited if the record api changes\n                event_fields = [\n                    \"assistant_message\",\n                    \"completion_tokens\",\n                    \"total_tokens\",\n                    \"prompt_tokens\",\n                    \"finish_reason\",\n                    \"tool_calls\",\n                    \"function_call\",\n                    \"tags\",\n                    \"properties\",\n                    \"error\",\n                    \"model\",\n                    \"timestamp\",\n                    \"id\",\n                ]\n                for field in event_fields:\n                    data = getattr(record, field)\n                    if data:\n                        if isinstance(data, dict):\n                            for k, v in flatten_dict(data, field).items():\n                                span.set_attribute(k, v)\n                        else:\n                            span.set_attribute(field, data)\n                # Special case for `messages` as it is a list of dicts\n                messages = [str(message) for message in record.messages]\n                span.set_attribute(\"messages\", messages)\n\n    @classmethod\n    def connect(cls, tracer=None, root_span=None, namespace=None, exporter=None):\n        \"\"\"Create an OpenTelemetryStore, optionally starting from a prior tracer or trace,\n        assigning a custom namespace, or setting an alternate exporter\"\"\"\n        kwargs = {\"tracer\": tracer, \"root_span\": root_span, \"exporter\": exporter}\n        if namespace is not None:\n            kwargs[\"namespace\"] = namespace\n        return cls(**kwargs)\n\n    def _init_table(self, record: \"Record\"):\n        \"\"\"Initialize the dataset (no op)\"\"\"\n        # We don't usually do this in otel, a dataset is (typically)\n        # initialized by writing to it, but, it's part of the Store interface.\n\n    async def add_async(self, record: Record):\n        \"\"\"Add a new record to the store asynchronously\"\"\"\n        await asyncio.to_thread(self.add, record)\n"
  },
  {
    "path": "src/observers/stores/sql_base.py",
    "content": "from abc import abstractmethod\nfrom dataclasses import dataclass\nfrom typing import List, Optional\n\nfrom observers.stores.base import Store\n\n\n@dataclass\nclass SQLStore(Store):\n    \"\"\"Base class for SQL-based stores with migration capabilities\"\"\"\n\n    @abstractmethod\n    def _check_table_exists(self, table_name: str) -> bool:\n        \"\"\"Check if a table exists in the database\"\"\"\n        pass\n\n    @abstractmethod\n    def _create_version_table(self):\n        \"\"\"Create the schema version table\"\"\"\n        pass\n\n    @abstractmethod\n    def _execute(self, query: str, params: Optional[List] = None):\n        \"\"\"Execute a SQL query\"\"\"\n        pass\n\n    @abstractmethod\n    def _migrate_schema(self, migration_script: str):\n        \"\"\"Execute a migration script\"\"\"\n        pass\n\n    @abstractmethod\n    def close(self) -> None:\n        \"\"\"Close the database connection\"\"\"\n        pass\n\n    @abstractmethod\n    def _get_current_schema_version(self) -> int:\n        \"\"\"Get the current schema version\"\"\"\n        pass\n\n    @abstractmethod\n    def _apply_pending_migrations(self):\n        \"\"\"Apply any pending migrations\"\"\"\n        pass\n"
  },
  {
    "path": "tests/__init__.py",
    "content": ""
  },
  {
    "path": "tests/conftest.py",
    "content": "from unittest.mock import AsyncMock, MagicMock, create_autospec\n\nimport pytest\n\nfrom observers.stores.datasets import DatasetsStore\n\n\n@pytest.fixture(autouse=True)\ndef mock_store(monkeypatch):\n    \"\"\"Mock the datasets store for all tests\"\"\"\n\n    async def mock_add_async(*args, **kwargs):\n        return None\n\n    async def mock_close_async(*args, **kwargs):\n        return None\n\n    def mock_add(*args, **kwargs):\n        return None\n\n    def mock_close(*args, **kwargs):\n        return None\n\n    store_mock = create_autospec(DatasetsStore, spec_set=False, instance=True)\n    store_mock.add_async = AsyncMock(side_effect=mock_add_async)\n    store_mock.close_async = AsyncMock(side_effect=mock_close_async)\n    store_mock.add = MagicMock(side_effect=mock_add)\n    store_mock.close = MagicMock(side_effect=mock_close)\n\n    def mock_connect(*args, **kwargs):\n        return store_mock\n\n    # Patch both the class and the connect method\n    monkeypatch.setattr(\"observers.stores.datasets.DatasetsStore.connect\", mock_connect)\n    monkeypatch.setattr(\n        \"observers.stores.datasets.DatasetsStore\", lambda *args, **kwargs: store_mock\n    )\n\n    return store_mock\n"
  },
  {
    "path": "tests/integration/models/test_async_examples.py",
    "content": "import asyncio\nimport os\nimport uuid\nfrom unittest.mock import MagicMock, create_autospec\n\nimport pytest\nfrom openai import AsyncOpenAI\nfrom openai.types.chat import ChatCompletion, ChatCompletionMessage\nfrom openai.types.chat.chat_completion import Choice, CompletionUsage\n\n\ndef get_async_example_files() -> list[str]:\n    \"\"\"Get list of asynchronous example files to test\n\n    Returns:\n        list[str]: List of paths to asynchronous example files\n    \"\"\"\n    examples_dir = \"examples/models\"\n    if not os.path.exists(examples_dir):\n        return []\n\n    async_files = []\n    for f in os.listdir(examples_dir):\n        if not f.endswith(\".py\"):\n            continue\n\n        filepath = os.path.join(examples_dir, f)\n        with open(filepath) as file:\n            content = file.read()\n            if \"async\" in content and \"stream\" not in content:\n                async_files.append(filepath)\n\n    return async_files\n\n\n@pytest.fixture\ndef mock_clients(monkeypatch):\n    \"\"\"Fixture providing mocked API clients\"\"\"\n\n    # Add async mock client\n    async def async_openai_fake_return(*args, **kwargs):\n        return ChatCompletion(\n            id=str(uuid.uuid4()),\n            choices=[\n                Choice(\n                    message=ChatCompletionMessage(\n                        content=\"\", role=\"assistant\", tool_calls=None, audio=None\n                    ),\n                    finish_reason=\"stop\",\n                    index=0,\n                    logprobs=None,\n                )\n            ],\n            model=\"gpt-4\",\n            usage=CompletionUsage(\n                prompt_tokens=10, completion_tokens=10, total_tokens=20\n            ),\n            created=1727238800,\n            object=\"chat.completion\",\n            system_fingerprint=None,\n        )\n\n    async_base_mock = create_autospec(AsyncOpenAI, spec_set=False, instance=True)\n    async_base_mock.chat = 
MagicMock()\n    async_base_mock.chat.completions = MagicMock()\n    async_base_mock.chat.completions.create = MagicMock(\n        side_effect=async_openai_fake_return\n    )\n\n    monkeypatch.setattr(\"openai.AsyncOpenAI\", lambda *args, **kwargs: async_base_mock)\n\n\n@pytest.mark.parametrize(\"example_path\", get_async_example_files())\n@pytest.mark.asyncio\nasync def test_async_example_files(example_path, mock_clients):\n    \"\"\"Test that async example files execute without errors\"\"\"\n    print(f\"Executing async example: {os.path.basename(example_path)}\")\n\n    with open(example_path) as f:\n        content = f.read()\n\n    exec_globals = {}\n    exec(content, exec_globals)\n    async_functions = [\n        f\n        for f in exec_globals.values()\n        if callable(f) and asyncio.iscoroutinefunction(f)\n    ]\n    if async_functions:\n        await async_functions[0]()\n    else:\n        pytest.fail(f\"No async functions found in {os.path.basename(example_path)}\")\n"
  },
  {
    "path": "tests/integration/models/test_examples.py",
    "content": "import os\nimport uuid\nfrom unittest.mock import MagicMock, patch\n\nimport litellm\nimport pytest\nfrom huggingface_hub import ChatCompletionOutput\nfrom openai.types.chat import ChatCompletion, ChatCompletionMessage\nfrom openai.types.chat.chat_completion import Choice, CompletionUsage\n\n\ndef get_sync_example_files() -> list[str]:\n    \"\"\"\n    Get list of synchronous example files to test\n    \"\"\"\n    examples_dir = \"examples/models\"\n    if not os.path.exists(examples_dir):\n        return []\n\n    sync_files = []\n    for f in os.listdir(examples_dir):\n        if not f.endswith(\".py\"):\n            continue\n\n        filepath = os.path.join(examples_dir, f)\n        with open(filepath) as file:\n            content = file.read()\n            if (\n                \"async def\" not in content\n                and \"await\" not in content\n                and \"stream=True\" not in content\n            ):\n                sync_files.append(filepath)\n\n    return sync_files\n\n\n@pytest.fixture(scope=\"function\")\ndef mock_clients():\n    \"\"\"Fixture providing mocked API clients\"\"\"\n\n    def openai_fake_return(*args, **kwargs):\n        return ChatCompletion(\n            id=str(uuid.uuid4()),\n            choices=[\n                Choice(\n                    message=ChatCompletionMessage(\n                        content=\"\", role=\"assistant\", tool_calls=None, audio=None\n                    ),\n                    finish_reason=\"stop\",\n                    index=0,\n                    logprobs=None,\n                )\n            ],\n            model=\"gpt-4\",\n            usage=CompletionUsage(\n                prompt_tokens=10, completion_tokens=10, total_tokens=20\n            ),\n            created=1727238800,\n            object=\"chat.completion\",\n            system_fingerprint=None,\n        )\n\n    def hf_fake_return(*args, **kwargs):\n        return ChatCompletionOutput(\n            
id=str(uuid.uuid4()),\n            model=\"Qwen/Qwen2.5-Coder-32B-Instruct\",\n            choices=[{\"message\": {\"content\": \"Hello, world!\"}}],\n            created=1727238800,\n            usage={\"prompt_tokens\": 10, \"completion_tokens\": 10, \"total_tokens\": 20},\n            system_fingerprint=None,\n        )\n\n    # Create base mock for other clients\n    base_mock = MagicMock()\n    base_mock.chat.completions.create = MagicMock(side_effect=openai_fake_return)\n\n    hf_mock = MagicMock()\n    hf_mock.chat.completions.create = MagicMock(side_effect=hf_fake_return)\n\n    mocks = {\n        # Sync clients\n        \"openai.OpenAI\": patch(\"openai.OpenAI\", return_value=base_mock),\n        \"litellm.completion\": patch(\"litellm.completion\", litellm.mock_completion),\n        \"aisuite.Client\": patch(\"aisuite.Client\", return_value=base_mock),\n        \"huggingface_hub.InferenceClient\": patch(\n            \"huggingface_hub.InferenceClient\", return_value=hf_mock\n        ),\n    }\n\n    # Start all patches\n    for mock in mocks.values():\n        mock.start()\n\n    yield\n\n    # Stop all patches\n    for mock in mocks.values():\n        mock.stop()\n\n\n@pytest.mark.parametrize(\"example_path\", get_sync_example_files())\ndef test_sync_example_files(example_path, mock_clients):\n    \"\"\"Test that synchronous example files execute without errors\"\"\"\n    with open(example_path) as f:\n        content = f.read()\n    if \"async def\" in content or \"await\" in content:\n        pytest.skip(\"Skipping async example in sync test\")\n\n    print(f\"Executing sync example: {os.path.basename(example_path)}\")\n\n    try:\n        exec(content)\n    except Exception as e:\n        pytest.fail(f\"Failed to execute {os.path.basename(example_path)}: {str(e)}\")\n"
  },
  {
    "path": "tests/integration/models/test_stream_examples.py",
    "content": "import asyncio\nimport os\nimport uuid\nfrom unittest.mock import MagicMock, create_autospec\n\nimport pytest\nfrom huggingface_hub import (\n    AsyncInferenceClient,\n    ChatCompletionStreamOutput,\n    ChatCompletionStreamOutputChoice,\n    ChatCompletionStreamOutputDelta,\n)\nfrom openai import AsyncOpenAI\nfrom openai.types.chat.chat_completion_chunk import (\n    ChatCompletionChunk,\n    Choice,\n    ChoiceDelta,\n)\n\n\ndef get_async_example_files() -> list[str]:\n    \"\"\"Get list of asynchronous example files to test\n\n    Returns:\n        list[str]: List of paths to asynchronous example files\n    \"\"\"\n    examples_dir = \"examples/models\"\n    if not os.path.exists(examples_dir):\n        return []\n\n    async_files = []\n    for f in os.listdir(examples_dir):\n        if not f.endswith(\".py\"):\n            continue\n\n        filepath = os.path.join(examples_dir, f)\n        with open(filepath) as file:\n            content = file.read()\n            if \"stream=True\" in content:\n                async_files.append(filepath)\n\n    return async_files\n\n\n@pytest.fixture\ndef mock_clients(monkeypatch):\n    \"\"\"Fixture providing mocked API clients\"\"\"\n\n    # Add async mock client\n    async def async_openai_fake_return(*args, **kwargs):\n        async def async_iter():\n            yield ChatCompletionChunk(\n                id=str(uuid.uuid4()),\n                choices=[\n                    Choice(\n                        index=0,\n                        delta=ChoiceDelta(\n                            content=\"chunk0\",\n                        ),\n                    )\n                ],\n                model=\"gpt-4o\",\n                usage={\n                    \"prompt_tokens\": 10,\n                    \"completion_tokens\": 10,\n                    \"total_tokens\": 20,\n                },\n                created=1727238800,\n                system_fingerprint=None,\n                
object=\"chat.completion.chunk\",\n            )\n\n        return async_iter()\n\n    async_base_mock = create_autospec(AsyncOpenAI, spec_set=False, instance=True)\n    async_base_mock.chat = MagicMock()\n    async_base_mock.chat.completions = MagicMock()\n    async_base_mock.chat.completions.create = MagicMock(\n        side_effect=async_openai_fake_return\n    )\n\n    monkeypatch.setattr(\"openai.AsyncOpenAI\", lambda *args, **kwargs: async_base_mock)\n\n    # Add HF mock client\n    async def hf_fake_return(*args, **kwargs):\n        async def async_iter():\n            yield ChatCompletionStreamOutput(\n                id=str(uuid.uuid4()),\n                choices=[\n                    ChatCompletionStreamOutputChoice(\n                        index=0,\n                        delta=ChatCompletionStreamOutputDelta(\n                            content=\"chunk0\",\n                            role=\"assistant\",\n                        ),\n                    )\n                ],\n                model=\"gpt2\",\n                usage={\n                    \"prompt_tokens\": 10,\n                    \"completion_tokens\": 10,\n                    \"total_tokens\": 20,\n                },\n                created=1727238800,\n                system_fingerprint=None,\n            )\n\n        return async_iter()\n\n    hf_base_mock = create_autospec(AsyncInferenceClient, spec_set=False, instance=True)\n    hf_base_mock.chat = MagicMock()\n    hf_base_mock.chat.completions = MagicMock()\n    hf_base_mock.chat.completions.create = MagicMock(side_effect=hf_fake_return)\n\n    monkeypatch.setattr(\n        \"huggingface_hub.AsyncInferenceClient\", lambda *args, **kwargs: hf_base_mock\n    )\n\n\n@pytest.mark.parametrize(\"example_path\", get_async_example_files())\n@pytest.mark.asyncio\nasync def test_async_example_files(example_path, mock_clients):\n    \"\"\"Test that async example files execute without errors\"\"\"\n    print(f\"Executing async example: 
{os.path.basename(example_path)}\")\n\n    with open(example_path) as f:\n        content = f.read()\n\n    exec_globals = {}\n    exec(content, exec_globals)\n    async_functions = [\n        f\n        for f in exec_globals.values()\n        if callable(f) and asyncio.iscoroutinefunction(f)\n    ]\n    if async_functions:\n        await async_functions[0]()\n    else:\n        pytest.fail(f\"No async functions found in {os.path.basename(example_path)}\")\n"
  },
  {
    "path": "tests/unit/stores/test_datasets.py",
    "content": "import os\nimport pytest\nfrom unittest.mock import patch\nfrom observers.stores.datasets import DatasetsStore\n\n\n@pytest.fixture\ndef mock_whoami():\n    with patch(\"observers.stores.datasets.whoami\") as mock:\n        mock.return_value = {}\n        yield mock\n\n\n@pytest.fixture\ndef mock_login():\n    with patch(\"observers.stores.datasets.login\") as mock:\n        yield mock\n\n\n@pytest.fixture\ndef datasets_store(mock_whoami, mock_login):\n    store = DatasetsStore()\n    yield store\n    store._cleanup()\n\n\ndef test_temp_dir_creation(datasets_store):\n    \"\"\"Test that temporary directory is created during initialization\"\"\"\n    assert datasets_store._temp_dir is not None\n    assert os.path.exists(datasets_store._temp_dir)\n\n\ndef test_temp_dir_cleanup(datasets_store):\n    \"\"\"Test that temporary directory is cleaned up properly\"\"\"\n    temp_dir = datasets_store._temp_dir\n    assert os.path.exists(temp_dir)\n\n    datasets_store._cleanup()\n    assert not os.path.exists(temp_dir)\n\n\ndef test_folder_path_defaults_to_temp_dir(datasets_store):\n    \"\"\"Test that folder_path defaults to temp_dir when not provided\"\"\"\n    assert datasets_store.folder_path == datasets_store._temp_dir\n\n\ndef test_custom_folder_path(mock_whoami, mock_login, tmp_path):\n    \"\"\"Test that custom folder_path is respected and not deleted during cleanup\"\"\"\n    custom_path = str(tmp_path / \"custom_datasets\")\n    os.makedirs(custom_path, exist_ok=True)\n\n    store = DatasetsStore(folder_path=custom_path)\n    assert store.folder_path == custom_path\n    assert store._temp_dir is None\n\n    store._cleanup()\n    assert os.path.exists(\n        custom_path\n    ), \"Custom folder should not be deleted during cleanup\"\n"
  }
]