main 766a6732a3ef cached
19 files
83.9 KB
19.8k tokens
39 symbols
1 requests
Download .txt
Repository: langchain-ai/langgraph-supervisor
Branch: main
Commit: 766a6732a3ef
Files: 19
Total size: 83.9 KB

Directory structure:
gitextract_1p582z2o/

├── .github/
│   ├── actions/
│   │   └── uv_setup/
│   │       └── action.yml
│   └── workflows/
│       ├── _lint.yml
│       ├── _test.yml
│       ├── ci.yml
│       └── release.yml
├── .gitignore
├── LICENSE
├── Makefile
├── README.md
├── langgraph_supervisor/
│   ├── __init__.py
│   ├── agent_name.py
│   ├── handoff.py
│   ├── py.typed
│   └── supervisor.py
├── pyproject.toml
└── tests/
    ├── __init__.py
    ├── test_agent_name.py
    ├── test_supervisor.py
    └── test_supervisor_functional_api.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/actions/uv_setup/action.yml
================================================
# TODO: https://docs.astral.sh/uv/guides/integration/github/#caching

name: uv-install
description: Set up Python and uv

inputs:
  python-version:
    description: Python version, supporting MAJOR.MINOR only
    required: true

env:
  UV_VERSION: "0.5.25"

runs:
  using: composite
  steps:
    - name: Install uv and set the python version
      uses: astral-sh/setup-uv@v5
      with:
        version: ${{ env.UV_VERSION }}
        python-version: ${{ inputs.python-version }}


================================================
FILE: .github/workflows/_lint.yml
================================================
name: lint

on:
  workflow_call:
    inputs:
      working-directory:
        required: true
        type: string
        description: "From which folder this pipeline executes"
      python-version:
        required: true
        type: string
        description: "Python version to use"

env:
  WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}

  # This env var allows us to get inline annotations when ruff has complaints.
  RUFF_OUTPUT_FORMAT: github

  UV_FROZEN: "true"

permissions:
  contents: read

jobs:
  build:
    name: "make lint #${{ inputs.python-version }}"
    runs-on: ubuntu-latest
    timeout-minutes: 20
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ inputs.python-version }} + uv
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ inputs.python-version }}

      - name: Install dependencies
        working-directory: ${{ inputs.working-directory }}
        run: |
          uv sync --group test

      - name: Analysing the code with our lint
        working-directory: ${{ inputs.working-directory }}
        run: |
          make lint


================================================
FILE: .github/workflows/_test.yml
================================================
name: test

on:
  workflow_call:
    inputs:
      working-directory:
        required: true
        type: string
        description: "From which folder this pipeline executes"
      python-version:
        required: true
        type: string
        description: "Python version to use"

env:
  UV_FROZEN: "true"
  UV_NO_SYNC: "true"

permissions:
  contents: read

jobs:
  build:
    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}
    runs-on: ubuntu-latest
    timeout-minutes: 20
    name: "make test #${{ inputs.python-version }}"
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ inputs.python-version }} + uv
        uses: "./.github/actions/uv_setup"
        id: setup-python
        with:
          python-version: ${{ inputs.python-version }}
      - name: Install dependencies
        shell: bash
        run: uv sync --group test

      - name: Run core tests
        shell: bash
        run: |
          make test


================================================
FILE: .github/workflows/ci.yml
================================================
---
name: Run CI Tests

on:
  push:
    branches: [ main ]
  pull_request:
  workflow_dispatch:  # Allows to trigger the workflow manually in GitHub UI

# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  lint:
    strategy:
      matrix:
        # Only lint on the min and max supported Python versions.
        # It's extremely unlikely that there's a lint issue on any version in between
        # that doesn't show up on the min or max versions.
        #
        # GitHub rate-limits how many jobs can be running at any one time.
        # Starting new jobs is also relatively slow,
        # so linting on fewer versions makes CI faster.
        python-version:
          - "3.12"
    uses:
      ./.github/workflows/_lint.yml
    with:
      working-directory: .
      python-version: ${{ matrix.python-version }}
    secrets: inherit
  test:
    strategy:
      matrix:
        # Only lint on the min and max supported Python versions.
        # It's extremely unlikely that there's a lint issue on any version in between
        # that doesn't show up on the min or max versions.
        #
        # GitHub rate-limits how many jobs can be running at any one time.
        # Starting new jobs is also relatively slow,
        # so linting on fewer versions makes CI faster.
        python-version:
          - "3.10"
          - "3.12"
    uses:
      ./.github/workflows/_test.yml
    with:
      working-directory: .
      python-version: ${{ matrix.python-version }}
    secrets: inherit
  ci_success:
    name: "CI Success"
    needs: [lint, test]
    if: |
      always()
    runs-on: ubuntu-latest
    env:
      JOBS_JSON: ${{ toJSON(needs) }}
      RESULTS_JSON: ${{ toJSON(needs.*.result) }}
      EXIT_CODE: ${{!contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && '0' || '1'}}
    steps:
      - name: "CI Success"
        run: |
          echo $JOBS_JSON
          echo $RESULTS_JSON
          echo "Exiting with $EXIT_CODE"
          exit $EXIT_CODE



================================================
FILE: .github/workflows/release.yml
================================================
name: release
run-name: Release ${{ inputs.working-directory }} by @${{ github.actor }}
on:
  workflow_call:
    inputs:
      working-directory:
        required: true
        type: string
        description: "From which folder this pipeline executes"
  workflow_dispatch:
    inputs:
      working-directory:
        description: "From which folder this pipeline executes"
        default: "."
      dangerous-nonmain-release:
        required: false
        type: boolean
        default: false
        description: "Release from a non-main branch (danger!)"

env:
  PYTHON_VERSION: "3.11"
  UV_FROZEN: "true"
  UV_NO_SYNC: "true"

jobs:
  build:
    if: github.ref == 'refs/heads/main' || inputs.dangerous-nonmain-release
    environment: Scheduled testing
    runs-on: ubuntu-latest
    permissions:
      contents: read

    outputs:
      pkg-name: ${{ steps.check-version.outputs.pkg-name }}
      version: ${{ steps.check-version.outputs.version }}

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python + uv
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      # We want to keep this build stage *separate* from the release stage,
      # so that there's no sharing of permissions between them.
      # The release stage has trusted publishing and GitHub repo contents write access,
      # and we want to keep the scope of that access limited just to the release job.
      # Otherwise, a malicious `build` step (e.g. via a compromised dependency)
      # could get access to our GitHub or PyPI credentials.
      #
      # Per the trusted publishing GitHub Action:
      # > It is strongly advised to separate jobs for building [...]
      # > from the publish job.
      # https://github.com/pypa/gh-action-pypi-publish#non-goals
      - name: Build project for distribution
        run: uv build
      - name: Upload build
        uses: actions/upload-artifact@v4
        with:
          name: dist
          path: ${{ inputs.working-directory }}/dist/

      - name: Check Version
        id: check-version
        shell: python
        working-directory: ${{ inputs.working-directory }}
        run: |
          import os
          import tomllib
          with open("pyproject.toml", "rb") as f:
              data = tomllib.load(f)
          pkg_name = data["project"]["name"]
          version = data["project"]["version"]
          with open(os.environ["GITHUB_OUTPUT"], "a") as f:
              f.write(f"pkg-name={pkg_name}\n")
              f.write(f"version={version}\n")
  publish:
    needs:
      - build
    runs-on: ubuntu-latest
    permissions:
      # This permission is used for trusted publishing:
      # https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
      #
      # Trusted publishing has to also be configured on PyPI for each package:
      # https://docs.pypi.org/trusted-publishers/adding-a-publisher/
      id-token: write

    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python + uv
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: ${{ inputs.working-directory }}/dist/

      - name: Publish package distributions to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          packages-dir: ${{ inputs.working-directory }}/dist/
          verbose: true
          print-hash: true
          # Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
          attestations: false

  mark-release:
    needs:
      - build
      - publish
    runs-on: ubuntu-latest
    permissions:
      # This permission is needed by `ncipollo/release-action` to
      # create the GitHub release.
      contents: write

    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python + uv
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: ${{ inputs.working-directory }}/dist/

      - name: Create Tag
        uses: ncipollo/release-action@v1
        with:
          artifacts: "dist/*"
          token: ${{ secrets.GITHUB_TOKEN }}
          generateReleaseNotes: true
          tag: ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}
          body: ${{ needs.release-notes.outputs.release-body }}
          commit: main
          makeLatest: true

================================================
FILE: .gitignore
================================================
# Pyenv
.python-version
.ipynb_checkpoints/

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# Environments
.venv
.env

# mypy
.mypy_cache/
.dmypy.json
dmypy.json
.DS_Store


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2025 LangChain, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: Makefile
================================================
.PHONY: all lint format test help

# Default target executed when no arguments are given to make.
all: help

######################
# TESTING AND COVERAGE
######################

# Define a variable for the test file path.
TEST_FILE ?= tests/

test:
	uv run pytest -vv --disable-socket --allow-unix-socket $(TEST_FILE)

test_watch:
	uv run ptw . -- $(TEST_FILE)


######################
# LINTING AND FORMATTING
######################

# Define a variable for Python and notebook files.
lint format: PYTHON_FILES=.
lint_diff format_diff: PYTHON_FILES=$(shell git diff --relative=. --name-only --diff-filter=d master | grep -E '\.py$$|\.ipynb$$')

lint lint_diff:
	[ "$(PYTHON_FILES)" = "" ] ||	uv run ruff format $(PYTHON_FILES) --diff
	[ "$(PYTHON_FILES)" = "" ] ||	uv run ruff check $(PYTHON_FILES) --diff
	[ "$(PYTHON_FILES)" = "" ] || uvx ty check $(PYTHON_FILES)

format format_diff:
	[ "$(PYTHON_FILES)" = "" ] || uv run ruff check --fix $(PYTHON_FILES)
	[ "$(PYTHON_FILES)" = "" ] || uv run ruff format $(PYTHON_FILES)

	

######################
# HELP
######################

help:
	@echo '===================='
	@echo '-- LINTING --'
	@echo 'format                       - run code formatters'
	@echo 'lint                         - run linters'
	@echo '-- TESTS --'
	@echo 'test                         - run unit tests'
	@echo 'test TEST_FILE=<test_file>   - run all tests in file'
	@echo '-- DOCUMENTATION tasks are from the top-level Makefile --'




================================================
FILE: README.md
================================================
# 🤖 LangGraph Multi-Agent Supervisor

> **Note**: We now recommend using the **supervisor pattern directly via tools** rather than this library for most use cases. The tool-calling approach gives you more control over context engineering and is the recommended pattern in the [LangChain multi-agent guide](https://docs.langchain.com/oss/python/langchain/multi-agent). See our [supervisor tutorial](https://docs.langchain.com/oss/python/langchain/supervisor) for a step-by-step guide. We're making this library compatible with LangChain 1.0 to help users upgrade their existing code. If you find this library solves a problem that can't be easily addressed with the manual supervisor pattern, we'd love to hear about your use case!

A Python library for creating hierarchical multi-agent systems using [LangGraph](https://github.com/langchain-ai/langgraph). Hierarchical systems are a type of [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent) architecture where specialized agents are coordinated by a central **supervisor** agent. The supervisor controls all communication flow and task delegation, making decisions about which agent to invoke based on the current context and task requirements.

## Features

- 🤖 **Create a supervisor agent** to orchestrate multiple specialized agents
- 🛠️ **Tool-based agent handoff mechanism** for communication between agents
- 📝 **Flexible message history management** for conversation control

This library is built on top of [LangGraph](https://github.com/langchain-ai/langgraph), a powerful framework for building agent applications, and comes with out-of-box support for [streaming](https://langchain-ai.github.io/langgraph/how-tos/#streaming), [short-term and long-term memory](https://langchain-ai.github.io/langgraph/concepts/memory/) and [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/)

## Installation

```bash
pip install langgraph-supervisor
```

> [!Note]
> LangGraph Supervisor requires Python >= 3.10

## Quickstart

Here's a simple example of a supervisor managing two specialized agents:

![Supervisor Architecture](static/img/supervisor.png)

```bash
pip install langgraph-supervisor langchain-openai

export OPENAI_API_KEY=<your_api_key>
```

```python
from langchain_openai import ChatOpenAI

from langgraph_supervisor import create_supervisor
from langgraph.prebuilt import create_react_agent

model = ChatOpenAI(model="gpt-4o")

# Create specialized agents

def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

def web_search(query: str) -> str:
    """Search the web for information."""
    return (
        "Here are the headcounts for each of the FAANG companies in 2024:\n"
        "1. **Facebook (Meta)**: 67,317 employees.\n"
        "2. **Apple**: 164,000 employees.\n"
        "3. **Amazon**: 1,551,000 employees.\n"
        "4. **Netflix**: 14,000 employees.\n"
        "5. **Google (Alphabet)**: 181,269 employees."
    )

math_agent = create_react_agent(
    model=model,
    tools=[add, multiply],
    name="math_expert",
    prompt="You are a math expert. Always use one tool at a time."
)

research_agent = create_react_agent(
    model=model,
    tools=[web_search],
    name="research_expert",
    prompt="You are a world class researcher with access to web search. Do not do any math."
)

# Create supervisor workflow
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    prompt=(
        "You are a team supervisor managing a research expert and a math expert. "
        "For current events, use research_agent. "
        "For math problems, use math_agent."
    )
)

# Compile and run
app = workflow.compile()
result = app.invoke({
    "messages": [
        {
            "role": "user",
            "content": "what's the combined headcount of the FAANG companies in 2024?"
        }
    ]
})
```

> [!TIP]
> For developing, debugging, and deploying AI agents and LLM applications, see [LangSmith](https://docs.langchain.com/langsmith/home).

## Message History Management

You can control how messages from worker agents are added to the overall conversation history of the multi-agent system:

Include full message history from an agent:

![Full History](static/img/full_history.png)

```python
workflow = create_supervisor(
    agents=[agent1, agent2],
    output_mode="full_history"
)
```

Include only the final agent response:

![Last Message](static/img/last_message.png)

```python
workflow = create_supervisor(
    agents=[agent1, agent2],
    output_mode="last_message"
)
```

## Multi-level Hierarchies

You can create multi-level hierarchical systems by creating a supervisor that manages multiple supervisors.

```python
research_team = create_supervisor(
    [research_agent, math_agent],
    model=model,
    supervisor_name="research_supervisor"
).compile(name="research_team")

writing_team = create_supervisor(
    [writing_agent, publishing_agent],
    model=model,
    supervisor_name="writing_supervisor"
).compile(name="writing_team")

top_level_supervisor = create_supervisor(
    [research_team, writing_team],
    model=model,
    supervisor_name="top_level_supervisor"
).compile(name="top_level_supervisor")
```

## Adding Memory

You can add [short-term](https://langchain-ai.github.io/langgraph/how-tos/persistence/) and [long-term](https://langchain-ai.github.io/langgraph/how-tos/cross-thread-persistence/) [memory](https://langchain-ai.github.io/langgraph/concepts/memory/) to your supervisor multi-agent system. Since `create_supervisor()` returns an instance of `StateGraph` that needs to be compiled before use, you can directly pass a [checkpointer](https://langchain-ai.github.io/langgraph/reference/checkpoints/#langgraph.checkpoint.base.BaseCheckpointSaver) or a [store](https://langchain-ai.github.io/langgraph/reference/store/#langgraph.store.base.BaseStore) instance to the `.compile()` method:

```python
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.store.memory import InMemoryStore

checkpointer = InMemorySaver()
store = InMemoryStore()

model = ...
research_agent = ...
math_agent = ...

workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    prompt="You are a team supervisor managing a research expert and a math expert.",
)

# Compile with checkpointer/store
app = workflow.compile(
    checkpointer=checkpointer,
    store=store
)
```

## How to customize

### Customizing handoff tools

By default, the supervisor uses handoff tools created with the prebuilt `create_handoff_tool`. You can also create your own, custom handoff tools. Here are some ideas on how you can modify the default implementation:

* change tool name and/or description
* add tool call arguments for the LLM to populate, for example a task description for the next agent
* change what data is passed to the subagent as part of the handoff: by default `create_handoff_tool` passes **full** message history (all of the messages generated in the supervisor up to this point), as well as a tool message indicating successful handoff.

Here is an example of how to pass customized handoff tools to `create_supervisor`:

```python
from langgraph_supervisor import create_handoff_tool
workflow = create_supervisor(
    [research_agent, math_agent],
    tools=[
        create_handoff_tool(agent_name="math_expert", name="assign_to_math_expert", description="Assign task to math expert"),
        create_handoff_tool(agent_name="research_expert", name="assign_to_research_expert", description="Assign task to research expert")
    ],
    model=model,
)
```

You can also control whether the handoff tool invocation messages are added to the state. By default, they are added (`add_handoff_messages=True`), but you can disable this if you want a more concise history:

```python
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    add_handoff_messages=False
)
```

Additionally, you can customize the prefix used for the automatically generated handoff tools:

```python
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    handoff_tool_prefix="delegate_to"
)
# This will create tools named: delegate_to_research_expert, delegate_to_math_expert
```

Here is an example of what a custom handoff tool might look like:

```python
from typing import Annotated

from langchain_core.tools import tool, BaseTool, InjectedToolCallId
from langchain_core.messages import ToolMessage
from langgraph.types import Command
from langgraph.prebuilt import InjectedState
from langgraph_supervisor.handoff import METADATA_KEY_HANDOFF_DESTINATION

def create_custom_handoff_tool(*, agent_name: str, name: str | None, description: str | None) -> BaseTool:

    @tool(name, description=description)
    def handoff_to_agent(
        # you can add additional tool call arguments for the LLM to populate
        # for example, you can ask the LLM to populate a task description for the next agent
        task_description: Annotated[str, "Detailed description of what the next agent should do, including all of the relevant context."],
        # you can inject the state of the agent that is calling the tool
        state: Annotated[dict, InjectedState],
        tool_call_id: Annotated[str, InjectedToolCallId],
    ):
        tool_message = ToolMessage(
            content=f"Successfully transferred to {agent_name}",
            name=name,
            tool_call_id=tool_call_id,
        )
        messages = state["messages"]
        return Command(
            goto=agent_name,
            graph=Command.PARENT,
            # NOTE: this is a state update that will be applied to the swarm multi-agent graph (i.e., the PARENT graph)
            update={
                "messages": messages + [tool_message],
                "active_agent": agent_name,
                # optionally pass the task description to the next agent
                # NOTE: individual agents would need to have `task_description` in their state schema
                # and would need to implement logic for how to consume it
                "task_description": task_description,
            },
        )

    handoff_to_agent.metadata = {METADATA_KEY_HANDOFF_DESTINATION: agent_name}
    return handoff_to_agent
```

### Message Forwarding

You can equip the supervisor with a tool to directly forward the last message received from a worker agent straight to the final output of the graph using `create_forward_message_tool`. This is useful when the supervisor determines that the worker's response is sufficient and doesn't require further processing or summarization by the supervisor itself. It saves tokens for the supervisor and avoids potential misrepresentation of the worker's response through paraphrasing.

```python
from langgraph_supervisor.handoff import create_forward_message_tool

# Assume research_agent and math_agent are defined as before

forwarding_tool = create_forward_message_tool("supervisor") # The argument is the name to assign to the resulting forwarded message
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    # Pass the forwarding tool along with any other custom or default handoff tools
    tools=[forwarding_tool]
)
```

This creates a tool named `forward_message` that the supervisor can invoke. The tool expects an argument `from_agent` specifying which agent's last message should be forwarded directly to the output.

## Using Functional API 

Here's a simple example of a supervisor managing two specialized agentic workflows created using Functional API:

```bash
pip install langgraph-supervisor langchain-openai

export OPENAI_API_KEY=<your_api_key>
```

```python
from langgraph.prebuilt import create_react_agent
from langgraph_supervisor import create_supervisor

from langchain_openai import ChatOpenAI

from langgraph.func import entrypoint, task
from langgraph.graph import add_messages

model = ChatOpenAI(model="gpt-4o")

# Create specialized agents

# Functional API - Agent 1 (Joke Generator)
@task
def generate_joke(messages):
    """First LLM call to generate initial joke"""
    system_message = {
        "role": "system", 
        "content": "Write a short joke"
    }
    msg = model.invoke(
        [system_message] + messages
    )
    return msg

@entrypoint()
def joke_agent(state):
    joke = generate_joke(state['messages']).result()
    messages = add_messages(state["messages"], [joke])
    return {"messages": messages}

joke_agent.name = "joke_agent"

# Graph API - Agent 2 (Research Expert)
def web_search(query: str) -> str:
    """Search the web for information."""
    return (
        "Here are the headcounts for each of the FAANG companies in 2024:\n"
        "1. **Facebook (Meta)**: 67,317 employees.\n"
        "2. **Apple**: 164,000 employees.\n"
        "3. **Amazon**: 1,551,000 employees.\n"
        "4. **Netflix**: 14,000 employees.\n"
        "5. **Google (Alphabet)**: 181,269 employees."
    )

research_agent = create_react_agent(
    model=model,
    tools=[web_search],
    name="research_expert",
    prompt="You are a world class researcher with access to web search. Do not do any math."
)

# Create supervisor workflow
workflow = create_supervisor(
    [research_agent, joke_agent],
    model=model,
    prompt=(
        "You are a team supervisor managing a research expert and a joke expert. "
        "For current events, use research_agent. "
        "For any jokes, use joke_agent."
    )
)

# Compile and run
app = workflow.compile()
result = app.invoke({
    "messages": [
        {
            "role": "user",
            "content": "Share a joke to relax and start vibe coding for my next project idea."
        }
    ]
})

for m in result["messages"]:
    m.pretty_print()
```


================================================
FILE: langgraph_supervisor/__init__.py
================================================
from langgraph_supervisor.handoff import (
    create_forward_message_tool,
    create_handoff_tool,
)
from langgraph_supervisor.supervisor import create_supervisor

__all__ = ["create_supervisor", "create_handoff_tool", "create_forward_message_tool"]


================================================
FILE: langgraph_supervisor/agent_name.py
================================================
import re
from typing import Any, Literal, Sequence, TypeGuard, cast

from langchain_core.language_models import LanguageModelLike
from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    MessageLikeRepresentation,
    convert_to_messages,
)
from langchain_core.prompt_values import PromptValue
from langchain_core.runnables import RunnableLambda

NAME_PATTERN = re.compile(r"<name>(.*?)</name>", re.DOTALL)
CONTENT_PATTERN = re.compile(r"<content>(.*?)</content>", re.DOTALL)

AgentNameMode = Literal["inline"]


def _is_content_blocks_content(content: list[dict | str] | str) -> TypeGuard[list[dict]]:
    return (
        isinstance(content, list)
        and len(content) > 0
        and isinstance(content[0], dict)
        and "type" in content[0]
    )


def add_inline_agent_name(message: BaseMessage) -> BaseMessage:
    """Add name and content XML tags to the message content.

    Examples:

        >>> add_inline_agent_name(AIMessage(content="Hello", name="assistant"))
        AIMessage(content="<name>assistant</name><content>Hello</content>", name="assistant")

        >>> add_inline_agent_name(AIMessage(content=[{"type": "text", "text": "Hello"}], name="assistant"))
        AIMessage(content=[{"type": "text", "text": "<name>assistant</name><content>Hello</content>"}], name="assistant")
    """
    if not isinstance(message, AIMessage) or not message.name:
        return message

    formatted_message = message.model_copy()
    if _is_content_blocks_content(message.content):
        text_blocks = [block for block in message.content if block["type"] == "text"]
        non_text_blocks = [block for block in message.content if block["type"] != "text"]
        content = text_blocks[0]["text"] if text_blocks else ""
        formatted_content = f"<name>{message.name}</name><content>{content}</content>"
        formatted_message_content = [{"type": "text", "text": formatted_content}] + non_text_blocks
        formatted_message.content = formatted_message_content
    else:
        formatted_message.content = (
            f"<name>{message.name}</name><content>{formatted_message.content}</content>"
        )
    return formatted_message


def remove_inline_agent_name(message: BaseMessage) -> BaseMessage:
    """Remove explicit name and content XML tags from the AI message content.

    Examples:

        >>> remove_inline_agent_name(AIMessage(content="<name>assistant</name><content>Hello</content>", name="assistant"))
        AIMessage(content="Hello", name="assistant")

        >>> remove_inline_agent_name(AIMessage(content=[{"type": "text", "text": "<name>assistant</name><content>Hello</content>"}], name="assistant"))
        AIMessage(content=[{"type": "text", "text": "Hello"}], name="assistant")
    """
    if not isinstance(message, AIMessage) or not message.content:
        return message

    if is_content_blocks_content := _is_content_blocks_content(message.content):
        text_blocks = [
            block
            for block in message.content
            if isinstance(block, dict) and block["type"] == "text"
        ]
        if not text_blocks:
            return message

        non_text_blocks = [
            block
            for block in message.content
            if isinstance(block, dict) and block["type"] != "text"
        ]
        content = cast(dict[str, Any], text_blocks[0])["text"]
    else:
        content = message.content

    name_match: re.Match | None = NAME_PATTERN.search(content)
    content_match: re.Match | None = CONTENT_PATTERN.search(content)
    if not name_match or not content_match:
        return message

    parsed_content = content_match.group(1)
    parsed_message = message.model_copy()
    if is_content_blocks_content:
        content_blocks = non_text_blocks
        if parsed_content:
            content_blocks = [{"type": "text", "text": parsed_content}] + content_blocks

        parsed_message.content = cast(list[str | dict], content_blocks)
    else:
        parsed_message.content = parsed_content
    return parsed_message


def with_agent_name(
    model: LanguageModelLike,
    agent_name_mode: AgentNameMode,
) -> LanguageModelLike:
    """Attach formatted agent names to the messages passed to and from a language model.

    This is useful for making a message history with multiple agents more coherent.

    NOTE: agent name is consumed from the message.name field.
        If you're using an agent built with create_react_agent, name is automatically set.
        If you're building a custom agent, make sure to set the name on the AI message returned by the LLM.

    Args:
        model: Language model to add agent name formatting to.
        agent_name_mode: Use to specify how to expose the agent name to the LLM.
            - "inline": Add the agent name directly into the content field of the AI message using XML-style tags.
                Example: "How can I help you" -> "<name>agent_name</name><content>How can I help you?</content>".
    """
    if agent_name_mode == "inline":
        process_input_message = add_inline_agent_name
        process_output_message = remove_inline_agent_name
    else:
        raise ValueError(
            f"Invalid agent name mode: {agent_name_mode}. Needs to be one of: {AgentNameMode.__args__}"
        )

    def process_input_messages(
        input: Sequence[MessageLikeRepresentation] | PromptValue,
    ) -> list[BaseMessage]:
        messages = convert_to_messages(input)
        return [process_input_message(message) for message in messages]

    chain = (
        process_input_messages
        | model
        | RunnableLambda(process_output_message, name="process_output_message")
    )

    return cast(LanguageModelLike, chain)


================================================
FILE: langgraph_supervisor/handoff.py
================================================
import re
import uuid
from typing import TypeGuard, cast

from langchain_core.messages import AIMessage, ToolCall, ToolMessage
from langchain_core.tools import BaseTool, InjectedToolCallId, tool
from langgraph.prebuilt import InjectedState
from langgraph.types import Command, Send
from typing_extensions import Annotated

WHITESPACE_RE = re.compile(r"\s+")
METADATA_KEY_HANDOFF_DESTINATION = "__handoff_destination"
METADATA_KEY_IS_HANDOFF_BACK = "__is_handoff_back"


def _normalize_agent_name(agent_name: str) -> str:
    """Normalize an agent name to be used inside the tool name."""
    return WHITESPACE_RE.sub("_", agent_name.strip()).lower()


def _has_multiple_content_blocks(content: str | list[str | dict]) -> TypeGuard[list[dict]]:
    """Check if content contains multiple content blocks."""
    return isinstance(content, list) and len(content) > 1 and isinstance(content[0], dict)


def _remove_non_handoff_tool_calls(
    last_ai_message: AIMessage, handoff_tool_call_id: str
) -> AIMessage:
    """Remove tool calls that are not meant for the agent."""
    # if the supervisor is calling multiple agents/tools in parallel,
    # we need to remove tool calls that are not meant for this agent
    # to ensure that the resulting message history is valid
    content = last_ai_message.content
    if _has_multiple_content_blocks(content):
        content = [
            content_block
            for content_block in content
            if (content_block["type"] == "tool_use" and content_block["id"] == handoff_tool_call_id)
            or content_block["type"] != "tool_use"
        ]

    last_ai_message = AIMessage(
        content=content,
        tool_calls=[
            tool_call
            for tool_call in last_ai_message.tool_calls
            if tool_call["id"] == handoff_tool_call_id
        ],
        name=last_ai_message.name,
        id=str(uuid.uuid4()),
    )
    return last_ai_message


def create_handoff_tool(
    *,
    agent_name: str,
    name: str | None = None,
    description: str | None = None,
    add_handoff_messages: bool = True,
) -> BaseTool:
    """Create a tool that can handoff control to the requested agent.

    Args:
        agent_name: The name of the agent to handoff control to, i.e. the name of the
            agent node in the multi-agent graph.

            Agent names should be simple, clear and unique, preferably in snake_case,
            although you are only limited to the names accepted by LangGraph
            nodes as well as the tool names accepted by LLM providers
            (the tool name will look like this: `transfer_to_<agent_name>`).
        name: Optional name of the tool to use for the handoff.

            If not provided, the tool name will be `transfer_to_<agent_name>`.
        description: Optional description for the handoff tool.

            If not provided, the description will be `Ask agent <agent_name> for help`.
        add_handoff_messages: Whether to add handoff messages to the message history.

            If `False`, the handoff messages will be omitted from the message history.
    """
    if name is None:
        name = f"transfer_to_{_normalize_agent_name(agent_name)}"

    if description is None:
        description = f"Ask agent '{agent_name}' for help"

    @tool(name, description=description)
    def handoff_to_agent(
        state: Annotated[dict, InjectedState],
        tool_call_id: Annotated[str, InjectedToolCallId],
    ) -> Command:
        tool_message = ToolMessage(
            content=f"Successfully transferred to {agent_name}",
            name=name,
            tool_call_id=tool_call_id,
            response_metadata={METADATA_KEY_HANDOFF_DESTINATION: agent_name},
        )
        last_ai_message = cast(AIMessage, state["messages"][-1])
        # Handle parallel handoffs
        if len(last_ai_message.tool_calls) > 1:
            handoff_messages = state["messages"][:-1]
            if add_handoff_messages:
                handoff_messages.extend(
                    (
                        _remove_non_handoff_tool_calls(last_ai_message, tool_call_id),
                        tool_message,
                    )
                )
            return Command(
                graph=Command.PARENT,
                # NOTE: we are using Send here to allow the ToolNode in langgraph.prebuilt
                # to handle parallel handoffs by combining all Send commands into a single command
                goto=[Send(agent_name, {**state, "messages": handoff_messages})],
            )
        # Handle single handoff
        else:
            if add_handoff_messages:
                handoff_messages = state["messages"] + [tool_message]
            else:
                handoff_messages = state["messages"][:-1]
            return Command(
                goto=agent_name,
                graph=Command.PARENT,
                update={**state, "messages": handoff_messages},
            )

    handoff_to_agent.metadata = {METADATA_KEY_HANDOFF_DESTINATION: agent_name}
    return handoff_to_agent


def create_handoff_back_messages(
    agent_name: str, supervisor_name: str
) -> tuple[AIMessage, ToolMessage]:
    """Create a pair of (AIMessage, ToolMessage) to add to the message history when returning control to the supervisor."""
    tool_call_id = str(uuid.uuid4())
    tool_name = f"transfer_back_to_{_normalize_agent_name(supervisor_name)}"
    tool_calls = [ToolCall(name=tool_name, args={}, id=tool_call_id)]
    return (
        AIMessage(
            content=f"Transferring back to {supervisor_name}",
            tool_calls=tool_calls,
            name=agent_name,
            response_metadata={METADATA_KEY_IS_HANDOFF_BACK: True},
        ),
        ToolMessage(
            content=f"Successfully transferred back to {supervisor_name}",
            name=tool_name,
            tool_call_id=tool_call_id,
            response_metadata={METADATA_KEY_IS_HANDOFF_BACK: True},
        ),
    )


def create_forward_message_tool(supervisor_name: str = "supervisor") -> BaseTool:
    """Create a tool the supervisor can use to forward a worker message by name.

    This helps avoid information loss any time the supervisor rewrites a worker query
    to the user and also can save some tokens.

    Args:
        supervisor_name: The name of the supervisor node (used for namespacing the tool).

    Returns:
        BaseTool: The `'forward_message'` tool.
    """
    tool_name = "forward_message"
    desc = (
        "Forwards the latest message from the specified agent to the user"
        " without any changes. Use this to preserve information fidelity, avoid"
        " misinterpretation of questions or responses, and save time."
    )

    @tool(tool_name, description=desc)
    def forward_message(
        from_agent: str,
        state: Annotated[dict, InjectedState],
    ) -> str | Command:
        target_message = next(
            (
                m
                for m in reversed(state["messages"])
                if isinstance(m, AIMessage)
                and (m.name or "").lower() == from_agent.lower()
                and not m.response_metadata.get(METADATA_KEY_IS_HANDOFF_BACK)
            ),
            None,
        )
        if not target_message:
            found_names = set(
                m.name for m in state["messages"] if isinstance(m, AIMessage) and m.name
            )
            return (
                f"Could not find message from source agent {from_agent}. Found names: {found_names}"
            )
        updates = [
            AIMessage(
                content=target_message.content,
                name=supervisor_name,
                id=str(uuid.uuid4()),
            ),
        ]

        return Command(
            graph=Command.PARENT,
            # NOTE: this does nothing.
            goto="__end__",
            # we also propagate the update to make sure the handoff messages are applied
            # to the parent graph's state
            update={**state, "messages": updates},
        )

    return forward_message


================================================
FILE: langgraph_supervisor/py.typed
================================================


================================================
FILE: langgraph_supervisor/supervisor.py
================================================
import inspect
from typing import Any, Callable, Literal, Optional, Sequence, Type, Union, cast, get_args
from uuid import UUID, uuid5
from warnings import warn

from langchain_core.language_models import BaseChatModel, LanguageModelLike
from langchain_core.messages import AnyMessage, ToolMessage
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import BaseTool
from langgraph._internal._config import patch_configurable
from langgraph._internal._runnable import RunnableCallable, RunnableLike
from langgraph._internal._typing import DeprecatedKwargs
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langgraph.prebuilt.chat_agent_executor import (
    AgentState,  # type: ignore[deprecated]
    AgentStateWithStructuredResponse,  # type: ignore[deprecated]
    Prompt,
    StateSchemaType,
    StructuredResponseSchema,
    _should_bind_tools,
    create_react_agent,  # type: ignore[deprecated]
)
from langgraph.pregel import Pregel
from langgraph.pregel.remote import RemoteGraph
from typing_extensions import Annotated, TypedDict, Unpack

from langgraph_supervisor.agent_name import AgentNameMode, with_agent_name
from langgraph_supervisor.handoff import (
    METADATA_KEY_HANDOFF_DESTINATION,
    _normalize_agent_name,
    create_handoff_back_messages,
    create_handoff_tool,
)

OutputMode = Literal["full_history", "last_message"]
"""Mode for adding agent outputs to the message history in the multi-agent workflow

- `full_history`: add the entire agent message history
- `last_message`: add only the last message
"""


MODELS_NO_PARALLEL_TOOL_CALLS = {"o3-mini", "o3", "o4-mini"}


def _supports_disable_parallel_tool_calls(model: LanguageModelLike) -> bool:
    if not isinstance(model, BaseChatModel):
        return False

    if (
        model_name := getattr(model, "model_name", None)
    ) and model_name in MODELS_NO_PARALLEL_TOOL_CALLS:
        return False

    if not hasattr(model, "bind_tools"):
        return False

    if "parallel_tool_calls" not in inspect.signature(model.bind_tools).parameters:
        return False

    return True


def _make_call_agent(
    agent: Pregel[Any],
    output_mode: OutputMode,
    add_handoff_back_messages: bool,
    supervisor_name: str,
) -> RunnableCallable:
    if output_mode not in get_args(OutputMode):
        raise ValueError(
            f"Invalid agent output mode: {output_mode}. Needs to be one of {get_args(OutputMode)}"
        )

    def _process_output(output: dict) -> dict:
        messages = output["messages"]
        if output_mode == "full_history":
            pass
        elif output_mode == "last_message":
            if isinstance(messages[-1], ToolMessage):
                messages = messages[-2:]
            else:
                messages = messages[-1:]

        else:
            raise ValueError(
                f"Invalid agent output mode: {output_mode}. "
                f"Needs to be one of {OutputMode.__args__}"
            )

        if add_handoff_back_messages:
            messages.extend(create_handoff_back_messages(agent.name, supervisor_name))

        return {
            **output,
            "messages": messages,
        }

    def call_agent(state: dict, config: RunnableConfig) -> dict:
        thread_id = config.get("configurable", {}).get("thread_id")
        output = agent.invoke(
            state,
            patch_configurable(
                config,
                {"thread_id": str(uuid5(UUID(str(thread_id)), agent.name)) if thread_id else None},
            )
            if isinstance(agent, RemoteGraph)
            else config,
        )
        return _process_output(output)

    async def acall_agent(state: dict, config: RunnableConfig) -> dict:
        thread_id = config.get("configurable", {}).get("thread_id")
        output = await agent.ainvoke(
            state,
            patch_configurable(
                config,
                {"thread_id": str(uuid5(UUID(str(thread_id)), agent.name)) if thread_id else None},
            )
            if isinstance(agent, RemoteGraph)
            else config,
        )
        return _process_output(output)

    return RunnableCallable(call_agent, acall_agent)


def _get_handoff_destinations(tools: Sequence[BaseTool | Callable]) -> list[str]:
    """Extract handoff destinations from provided tools.
    Args:
        tools: List of tools to inspect.
    Returns:
        List of agent names that are handoff destinations.
    """
    return [
        tool.metadata[METADATA_KEY_HANDOFF_DESTINATION]
        for tool in tools
        if isinstance(tool, BaseTool)
        and tool.metadata is not None
        and METADATA_KEY_HANDOFF_DESTINATION in tool.metadata
    ]


def _prepare_tool_node(
    tools: list[BaseTool | Callable] | ToolNode | None,
    handoff_tool_prefix: Optional[str],
    add_handoff_messages: bool,
    agent_names: set[str],
) -> ToolNode:
    """Prepare the ToolNode to use in supervisor agent."""
    if isinstance(tools, ToolNode):
        input_tool_node = tools
        tool_classes = list(tools.tools_by_name.values())
    elif tools:
        input_tool_node = ToolNode(tools)
        # get the tool functions wrapped in a tool class from the ToolNode
        tool_classes = list(input_tool_node.tools_by_name.values())
    else:
        input_tool_node = None
        tool_classes = []

    handoff_destinations = _get_handoff_destinations(tool_classes)
    if handoff_destinations:
        if missing_handoff_destinations := set(agent_names) - set(handoff_destinations):
            raise ValueError(
                "When providing custom handoff tools, you must provide them for all subagents. "
                f"Missing handoff tools for agents '{missing_handoff_destinations}'."
            )

        # Handoff tools should be already provided here
        tool_node = cast(ToolNode, input_tool_node)
    else:
        handoff_tools = [
            create_handoff_tool(
                agent_name=agent_name,
                name=(
                    None
                    if handoff_tool_prefix is None
                    else f"{handoff_tool_prefix}{_normalize_agent_name(agent_name)}"
                ),
                add_handoff_messages=add_handoff_messages,
            )
            for agent_name in agent_names
        ]
        all_tools = tool_classes + list(handoff_tools)

        # re-wrap the combined tools in a ToolNode
        # if the original input was a ToolNode, apply the same params
        if input_tool_node is not None:
            tool_node = ToolNode(
                all_tools,
                name=str(input_tool_node.name),
                tags=list(input_tool_node.tags) if input_tool_node.tags else None,
                handle_tool_errors=input_tool_node._handle_tool_errors,
                messages_key=input_tool_node._messages_key,
            )
        else:
            tool_node = ToolNode(all_tools)

    return tool_node


class _OuterState(TypedDict):
    """The state of the supervisor workflow."""

    messages: Annotated[Sequence[AnyMessage], add_messages]


def create_supervisor(
    agents: list[Pregel],
    *,
    model: LanguageModelLike,
    tools: list[BaseTool | Callable] | ToolNode | None = None,
    prompt: Prompt | None = None,
    response_format: Optional[
        Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]
    ] = None,
    pre_model_hook: Optional[RunnableLike] = None,
    post_model_hook: Optional[RunnableLike] = None,
    parallel_tool_calls: bool = False,
    state_schema: StateSchemaType | None = None,
    context_schema: Type[Any] | None = None,
    output_mode: OutputMode = "last_message",
    add_handoff_messages: bool = True,
    handoff_tool_prefix: Optional[str] = None,
    add_handoff_back_messages: Optional[bool] = None,
    supervisor_name: str = "supervisor",
    include_agent_name: AgentNameMode | None = None,
    **deprecated_kwargs: Unpack[DeprecatedKwargs],
) -> StateGraph:
    """Create a multi-agent supervisor.

    Args:
        agents: List of agents to manage.

            An agent can be a LangGraph [`CompiledStateGraph`][langgraph.graph.state.CompiledStateGraph],
            a functional API workflow, or any other [Pregel][langgraph.pregel.Pregel]
            object.
        model: Language model to use for the supervisor
        tools: Tools to use for the supervisor
        prompt: Optional prompt to use for the supervisor.

            Can be one of:

            - `str`: This is converted to a `SystemMessage` and added to the beginning of the list of messages in `state["messages"]`.
            - `SystemMessage`: this is added to the beginning of the list of messages in `state["messages"]`.
            - `Callable`: This function should take in full graph state and the output is then passed to the language model.
            - `Runnable`: This runnable should take in full graph state and the output is then passed to the language model.
        response_format: An optional schema for the final supervisor output.

            If provided, output will be formatted to match the given schema and returned in the `'structured_response'` state key.

            If not provided, `structured_response` will not be present in the output state.

            Can be passed in as:

            - An OpenAI function/tool schema,
            - A JSON Schema,
            - A TypedDict class,
            - A Pydantic class.
            - A tuple `(prompt, schema)`, where schema is one of the above.
                The prompt will be used together with the model that is being used to generate the structured response.

            !!! Important
                `response_format` requires the model to support `.with_structured_output`

            !!! Note
                `response_format` requires `structured_response` key in your state schema.

                You can use the prebuilt `langgraph.prebuilt.chat_agent_executor.AgentStateWithStructuredResponse`.
        pre_model_hook: An optional node to add before the LLM node in the supervisor agent (i.e., the node that calls the LLM).

            Useful for managing long message histories (e.g., message trimming, summarization, etc.).

            Pre-model hook must be a callable or a runnable that takes in current graph state and returns a state update in the form of

            ```python
            # At least one of `messages` or `llm_input_messages` MUST be provided
            {
                # If provided, will UPDATE the `messages` in the state
                "messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES), ...],
                # If provided, will be used as the input to the LLM,
                # and will NOT UPDATE `messages` in the state
                "llm_input_messages": [...],
                # Any other state keys that need to be propagated
                ...
            }
            ```

            !!! Important
                At least one of `messages` or `llm_input_messages` MUST be provided and will be used as an input to the `agent` node.
                The rest of the keys will be added to the graph state.

            !!! Warning
                If you are returning `messages` in the pre-model hook, you should OVERWRITE the `messages` key by doing the following:

                ```python
                {
                    "messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES), *new_messages]
                    ...
                }
                ```
        post_model_hook: An optional node to add after the LLM node in the supervisor agent (i.e., the node that calls the LLM).

            Useful for implementing human-in-the-loop, guardrails, validation, or other post-processing.

            Post-model hook must be a callable or a runnable that takes in current graph state and returns a state update.
        parallel_tool_calls: Whether to allow the supervisor LLM to call tools in parallel (only OpenAI and Anthropic).

            Use this to control whether the supervisor can hand off to multiple agents at once.

            If `True`, will enable parallel tool calls.

            If `False`, will disable parallel tool calls.

            !!! Important
                This is currently supported only by OpenAI and Anthropic models.
                To control parallel tool calling for other providers, add explicit instructions for tool use to the system prompt.
        state_schema: State schema to use for the supervisor graph.
        context_schema: Specifies the schema for the context object that will be passed to the workflow.
        output_mode: Mode for adding managed agents' outputs to the message history in the multi-agent workflow.

            Can be one of:

            - `full_history`: Add the entire agent message history
            - `last_message`: Add only the last message
        add_handoff_messages: Whether to add a pair of `(AIMessage, ToolMessage)` to the message history
            when a handoff occurs.
        handoff_tool_prefix: Optional prefix for the handoff tools (e.g., `'delegate_to_'` or `'transfer_to_'`)

            If provided, the handoff tools will be named `handoff_tool_prefix_agent_name`.

            If not provided, the handoff tools will be named `transfer_to_agent_name`.
        add_handoff_back_messages: Whether to add a pair of `(AIMessage, ToolMessage)` to the message history
            when returning control to the supervisor to indicate that a handoff has occurred.
        supervisor_name: Name of the supervisor node.
        include_agent_name: Use to specify how to expose the agent name to the underlying supervisor LLM.

            - `None`: Relies on the LLM provider using the name attribute on the AI message. Currently, only OpenAI supports this.
            - `'inline'`: Add the agent name directly into the content field of the AI message using XML-style tags.

                Example: `"How can I help you"` -> `"<name>agent_name</name><content>How can I help you?</content>"`

    Example:
        ```python
        from langchain_openai import ChatOpenAI

        from langgraph_supervisor import create_supervisor
        from langgraph.prebuilt import create_react_agent

        # Create specialized agents

        def add(a: float, b: float) -> float:
            '''Add two numbers.'''
            return a + b

        def web_search(query: str) -> str:
            '''Search the web for information.'''
            return 'Here are the headcounts for each of the FAANG companies in 2024...'

        math_agent = create_react_agent(
            model="openai:gpt-4o",
            tools=[add],
            name="math_expert",
        )

        research_agent = create_react_agent(
            model="openai:gpt-4o",
            tools=[web_search],
            name="research_expert",
        )

        # Create supervisor workflow
        workflow = create_supervisor(
            [research_agent, math_agent],
            model=ChatOpenAI(model="gpt-4o"),
        )

        # Compile and run
        app = workflow.compile()
        result = app.invoke({
            "messages": [
                {
                    "role": "user",
                    "content": "what's the combined headcount of the FAANG companies in 2024?"
                }
            ]
        })
        ```
    """
    if (config_schema := deprecated_kwargs.get("config_schema", None)) is not None:
        warn(
            "`config_schema` is deprecated. Please use `context_schema` instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        context_schema = config_schema

    if add_handoff_back_messages is None:
        add_handoff_back_messages = add_handoff_messages

    supervisor_schema = state_schema or (
        AgentStateWithStructuredResponse if response_format is not None else AgentState  # type: ignore[deprecated]
    )
    workflow_schema = state_schema or _OuterState

    agent_names = set()
    for agent in agents:
        if agent.name is None or agent.name == "LangGraph":
            raise ValueError(
                "Please specify a name when you create your agent, either via `create_react_agent(..., name=agent_name)` "
                "or via `graph.compile(name=name)`."
            )

        if agent.name in agent_names:
            raise ValueError(
                f"Agent with name '{agent.name}' already exists. Agent names must be unique."
            )

        agent_names.add(agent.name)

    tool_node = _prepare_tool_node(
        tools,
        handoff_tool_prefix,
        add_handoff_messages,
        agent_names,
    )
    all_tools = list(tool_node.tools_by_name.values())

    if _should_bind_tools(model, all_tools):
        if _supports_disable_parallel_tool_calls(model):
            model = cast(BaseChatModel, model).bind_tools(
                all_tools, parallel_tool_calls=parallel_tool_calls
            )
        else:
            model = cast(BaseChatModel, model).bind_tools(all_tools)

    if include_agent_name:
        model = with_agent_name(model, include_agent_name)

    supervisor_agent = create_react_agent(  # type: ignore[deprecated]
        name=supervisor_name,
        model=model,
        tools=tool_node,
        prompt=prompt,
        state_schema=supervisor_schema,
        response_format=response_format,
        pre_model_hook=pre_model_hook,
        post_model_hook=post_model_hook,
    )

    builder = StateGraph(cast(Type[Any], workflow_schema), context_schema=context_schema)
    builder.add_node(supervisor_agent, destinations=tuple(agent_names) + (END,))
    builder.add_edge(START, supervisor_agent.name)
    for agent in agents:
        builder.add_node(
            agent.name,
            _make_call_agent(
                agent,
                output_mode,
                add_handoff_back_messages=add_handoff_back_messages,
                supervisor_name=supervisor_name,
            ),
        )
        builder.add_edge(agent.name, supervisor_agent.name)

    return builder


================================================
FILE: pyproject.toml
================================================
[build-system]
requires = ["pdm-backend"]
build-backend = "pdm.backend"

[project]
name = "langgraph-supervisor"
version = "0.0.31"
description = "An implementation of a supervisor multi-agent architecture using LangGraph"
authors = [
    {name = "Vadym Barda", email = "19161700+vbarda@users.noreply.github.com "}
]
license = "MIT"
license-files = ["LICENSE"]
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "langgraph>=1.0.2,<2.0.0",
    "langchain-core>=1.0.0,<2.0.0"
]

[project.urls]
Source = "https://github.com/langchain-ai/langgraph-supervisor-py"
Changelog = "https://github.com/langchain-ai/langgraph-supervisor-py/releases"
Twitter = "https://x.com/LangChainAI"
Slack = "https://www.langchain.com/join-community"
Reddit = "https://www.reddit.com/r/LangChain/"

[dependency-groups]
test = [
    "pytest>=8.0.0",
    "ruff>=0.9.4",
    "mypy>=1.8.0",
    "pytest-socket>=0.7.0",
    "types-setuptools>=69.0.0",
]

[tool.pytest.ini_options]
minversion = "8.0"
addopts = "-ra -q -v"
testpaths = [
    "tests",
]
python_files = ["test_*.py"]
python_functions = ["test_*"]

[tool.ruff]
line-length = 100
target-version = "py310"

[tool.ruff.lint]
select = [
    "E",  # pycodestyle errors
    "W",  # pycodestyle warnings
    "F",  # pyflakes
    "I",  # isort
    "B",  # flake8-bugbear
]
ignore = [
  "E501" # line-length
]


[tool.mypy]
python_version = "3.11"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = true
check_untyped_defs = true

[tool.ty.rules]
no-matching-overload = "ignore" 
call-non-callable = "ignore"
unresolved-import = "ignore"
[tool.ty.src]
exclude = ["tests"]


================================================
FILE: tests/__init__.py
================================================


================================================
FILE: tests/test_agent_name.py
================================================
from langchain_core.messages import AIMessage, HumanMessage

from langgraph_supervisor.agent_name import (
    add_inline_agent_name,
    remove_inline_agent_name,
)


def test_add_inline_agent_name() -> None:
    # Test that non-AI messages are returned unchanged.
    human_message = HumanMessage(content="Hello")
    result = add_inline_agent_name(human_message)
    assert result == human_message

    # Test that AI messages with no name are returned unchanged.
    ai_message = AIMessage(content="Hello world")
    result = add_inline_agent_name(ai_message)
    assert result == ai_message

    # Test that AI messages get formatted with name and content tags.
    ai_message = AIMessage(content="Hello world", name="assistant")
    result = add_inline_agent_name(ai_message)
    assert result.content == "<name>assistant</name><content>Hello world</content>"
    assert result.name == "assistant"


def test_add_inline_agent_name_content_blocks() -> None:
    content_blocks: list[str | dict] = [
        {"type": "text", "text": "Hello world"},
        {"type": "image", "image_url": "http://example.com/image.jpg"},
    ]
    ai_message = AIMessage(content=content_blocks, name="assistant")
    result = add_inline_agent_name(ai_message)
    assert result.content == [
        {"type": "text", "text": "<name>assistant</name><content>Hello world</content>"},
        {"type": "image", "image_url": "http://example.com/image.jpg"},
    ]

    # Test that content blocks without text blocks are returned unchanged
    content_blocks = [
        {"type": "image", "image_url": "http://example.com/image.jpg"},
        {"type": "file", "file_url": "http://example.com/document.pdf"},
    ]
    expected_content_blocks = [
        {"type": "text", "text": "<name>assistant</name><content></content>"}
    ] + content_blocks
    ai_message = AIMessage(content=content_blocks, name="assistant")
    result = add_inline_agent_name(ai_message)

    # The message should be returned unchanged
    assert result.content == expected_content_blocks


def test_remove_inline_agent_name() -> None:
    # Test that non-AI messages are returned unchanged.
    human_message = HumanMessage(content="Hello")
    result = remove_inline_agent_name(human_message)
    assert result == human_message

    # Test that messages with empty content are returned unchanged.
    ai_message = AIMessage(content="", name="assistant")
    result = remove_inline_agent_name(ai_message)
    assert result == ai_message

    # Test that messages without name/content tags are returned unchanged.
    ai_message = AIMessage(content="Hello world", name="assistant")
    result = remove_inline_agent_name(ai_message)
    assert result == ai_message

    # Test that content is correctly extracted from tags.
    ai_message = AIMessage(
        content="<name>assistant</name><content>Hello world</content>", name="assistant"
    )
    result = remove_inline_agent_name(ai_message)
    assert result.content == "Hello world"
    assert result.name == "assistant"


def test_remove_inline_agent_name_content_blocks() -> None:
    content_blocks: list[str | dict] = [
        {"type": "text", "text": "<name>assistant</name><content>Hello world</content>"},
        {"type": "image", "image_url": "http://example.com/image.jpg"},
    ]
    ai_message = AIMessage(content=content_blocks, name="assistant")
    result = remove_inline_agent_name(ai_message)

    expected_content = [
        {"type": "text", "text": "Hello world"},
        {"type": "image", "image_url": "http://example.com/image.jpg"},
    ]
    assert result.content == expected_content
    assert result.name == "assistant"

    # Test that content blocks without text blocks are returned unchanged
    content_blocks = [
        {"type": "text", "text": "<name>assistant</name><content></content>"},
        {"type": "image", "image_url": "http://example.com/image.jpg"},
        {"type": "file", "file_url": "http://example.com/document.pdf"},
    ]
    expected_content_blocks = content_blocks[1:]
    ai_message = AIMessage(content=content_blocks, name="assistant")
    result = remove_inline_agent_name(ai_message)
    assert result.content == expected_content_blocks


def test_remove_inline_agent_name_multiline_content() -> None:
    multiline_content = """<name>assistant</name><content>This is
a multiline
message</content>"""
    ai_message = AIMessage(content=multiline_content, name="assistant")
    result = remove_inline_agent_name(ai_message)
    assert result.content == "This is\na multiline\nmessage"


================================================
FILE: tests/test_supervisor.py
================================================
"""Tests for the supervisor module."""
# mypy: ignore-errors

from collections.abc import Callable, Sequence
from typing import Any, Optional, cast

import pytest
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import BaseChatModel, LanguageModelInput
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from langchain_core.outputs import ChatGeneration, ChatResult
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_core.tools import BaseTool, tool
from langgraph.graph import MessagesState, StateGraph
from langgraph.prebuilt import create_react_agent

from langgraph_supervisor import create_supervisor
from langgraph_supervisor.agent_name import AgentNameMode, with_agent_name
from langgraph_supervisor.handoff import create_forward_message_tool


class FakeChatModel(BaseChatModel):
    idx: int = 0
    responses: Sequence[BaseMessage]

    @property
    def _llm_type(self) -> str:
        return "fake-tool-call-model"

    def _generate(
        self,
        messages: list[BaseMessage],
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: dict[str, Any],
    ) -> ChatResult:
        generation = ChatGeneration(message=self.responses[self.idx])
        self.idx += 1
        return ChatResult(generations=[generation])

    def bind_tools(
        self, tools: Sequence[dict[str, Any] | type | Callable | BaseTool], **kwargs: Any
    ) -> Runnable[LanguageModelInput, BaseMessage]:
        tool_dicts = [
            {
                "name": tool.name if isinstance(tool, BaseTool) else str(tool),
            }
            for tool in tools
        ]
        return self.bind(tools=tool_dicts)


supervisor_messages = [
    AIMessage(
        content="",
        tool_calls=[
            {
                "name": "transfer_to_research_expert",
                "args": {},
                "id": "call_gyQSgJQm5jJtPcF5ITe8GGGF",
                "type": "tool_call",
            }
        ],
    ),
    AIMessage(
        content="",
        tool_calls=[
            {
                "name": "transfer_to_math_expert",
                "args": {},
                "id": "call_zCExWE54g4B4oFZcwBh3Wumg",
                "type": "tool_call",
            }
        ],
    ),
    AIMessage(
        content="The combined headcount of the FAANG companies in 2024 is 1,977,586 employees.",
    ),
]

research_agent_messages = [
    AIMessage(
        content="",
        tool_calls=[
            {
                "name": "web_search",
                "args": {"query": "FAANG headcount 2024"},
                "id": "call_4sLYp7usFcIZBFcNsOGQiFzV",
                "type": "tool_call",
            },
        ],
    ),
    AIMessage(
        content="The headcount for the FAANG companies in 2024 is as follows:\n\n1. **Facebook (Meta)**: 67,317 employees\n2. **Amazon**: 1,551,000 employees\n3. **Apple**: 164,000 employees\n4. **Netflix**: 14,000 employees\n5. **Google (Alphabet)**: 181,269 employees\n\nTo find the combined headcount, simply add these numbers together.",
    ),
]

math_agent_messages = [
    AIMessage(
        content="",
        tool_calls=[
            {
                "name": "add",
                "args": {"a": 67317, "b": 1551000},
                "id": "call_BRvA6oAlgMA1whIkAn9gE3AS",
                "type": "tool_call",
            },
            {
                "name": "add",
                "args": {"a": 164000, "b": 14000},
                "id": "call_OLVb4v0pNDlsBsKBwDK4wb1W",
                "type": "tool_call",
            },
            {
                "name": "add",
                "args": {"a": 181269, "b": 0},
                "id": "call_5VEHaInDusJ9MU3i3tVJN6Hr",
                "type": "tool_call",
            },
        ],
    ),
    AIMessage(
        content="",
        tool_calls=[
            {
                "name": "add",
                "args": {"a": 1618317, "b": 178000},
                "id": "call_FdfUz8Gm3S5OQaVq2oQpMxeN",
                "type": "tool_call",
            },
            {
                "name": "add",
                "args": {"a": 181269, "b": 0},
                "id": "call_j5nna1KwGiI60wnVHM2319r6",
                "type": "tool_call",
            },
        ],
    ),
    AIMessage(
        content="",
        tool_calls=[
            {
                "name": "add",
                "args": {"a": 1796317, "b": 181269},
                "id": "call_4fNHtFvfOvsaSPb8YK1qNAiR",
                "type": "tool_call",
            }
        ],
    ),
    AIMessage(
        content="The combined headcount of the FAANG companies in 2024 is 1,977,586 employees.",
    ),
]


@pytest.mark.parametrize(
    "include_agent_name,include_individual_agent_name",
    [
        (None, None),
        (None, "inline"),
        ("inline", None),
        ("inline", "inline"),
    ],
)
def test_supervisor_basic_workflow(
    include_agent_name: AgentNameMode | None,
    include_individual_agent_name: AgentNameMode | None,
) -> None:
    """Test basic supervisor workflow with two agents."""

    # output_mode = "last_message"
    @tool
    def add(a: float, b: float) -> float:
        """Add two numbers."""
        return a + b

    @tool
    def web_search(query: str) -> str:
        """Search the web for information."""
        return (
            "Here are the headcounts for each of the FAANG companies in 2024:\n"
            "1. **Facebook (Meta)**: 67,317 employees.\n"
            "2. **Apple**: 164,000 employees.\n"
            "3. **Amazon**: 1,551,000 employees.\n"
            "4. **Netflix**: 14,000 employees.\n"
            "5. **Google (Alphabet)**: 181,269 employees."
        )

    math_model: FakeChatModel = FakeChatModel(responses=math_agent_messages)
    if include_individual_agent_name:
        math_model = cast(
            FakeChatModel,
            with_agent_name(math_model.bind_tools([add]), include_individual_agent_name),
        )

    math_agent = create_react_agent(
        model=math_model,
        tools=[add],
        name="math_expert",
    )

    research_model = FakeChatModel(responses=research_agent_messages)
    if include_individual_agent_name:
        research_model = cast(
            FakeChatModel,
            with_agent_name(research_model.bind_tools([web_search]), include_individual_agent_name),
        )

    research_agent = create_react_agent(
        model=research_model,
        tools=[web_search],
        name="research_expert",
    )

    workflow = create_supervisor(
        [math_agent, research_agent],
        model=FakeChatModel(responses=supervisor_messages),
        include_agent_name=include_agent_name,
    )

    app = workflow.compile()
    assert app is not None

    result = app.invoke(
        {
            "messages": [
                HumanMessage(
                    content="what's the combined headcount of the FAANG companies in 2024?"
                )
            ]
        }
    )

    assert len(result["messages"]) == 12
    # first supervisor handoff
    assert result["messages"][1] == supervisor_messages[0]
    # last research agent message
    assert result["messages"][3] == research_agent_messages[-1]
    # next supervisor handoff
    assert result["messages"][6] == supervisor_messages[1]
    # last math agent message
    assert result["messages"][8] == math_agent_messages[-1]
    # final supervisor message
    assert result["messages"][11] == supervisor_messages[-1]

    # output_mode = "full_history"
    math_agent = create_react_agent(
        model=FakeChatModel(responses=math_agent_messages),
        tools=[add],
        name="math_expert",
    )

    research_agent = create_react_agent(
        model=FakeChatModel(responses=research_agent_messages),
        tools=[web_search],
        name="research_expert",
    )

    workflow_full_history = create_supervisor(
        [math_agent, research_agent],
        model=FakeChatModel(responses=supervisor_messages),
        output_mode="full_history",
    )
    app_full_history = workflow_full_history.compile()
    result_full_history = app_full_history.invoke(
        {
            "messages": [
                HumanMessage(
                    content="what's the combined headcount of the FAANG companies in 2024?"
                )
            ]
        }
    )

    assert len(result_full_history["messages"]) == 23
    # first supervisor handoff
    assert result_full_history["messages"][1] == supervisor_messages[0]
    # all research agent AI messages
    assert result_full_history["messages"][3] == research_agent_messages[0]
    assert result_full_history["messages"][5] == research_agent_messages[1]
    # next supervisor handoff
    assert result_full_history["messages"][8] == supervisor_messages[1]
    # all math agent AI messages
    assert result_full_history["messages"][10] == math_agent_messages[0]
    assert result_full_history["messages"][14] == math_agent_messages[1]
    assert result_full_history["messages"][17] == math_agent_messages[2]
    # final supervisor message
    assert result_full_history["messages"][-1] == supervisor_messages[-1]


class FakeChatModelWithAssertion(FakeChatModel):
    assertion: Callable[[list[BaseMessage]], None]

    def _generate(
        self,
        messages: list[BaseMessage],
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: dict[str, Any],
    ) -> ChatResult:
        self.assertion(messages)
        return super()._generate(messages, stop, run_manager, **kwargs)


def get_tool_calls(msg: BaseMessage) -> list[dict[str, Any]] | None:
    tool_calls = getattr(msg, "tool_calls", None)
    if tool_calls is None:
        return None
    return [
        {"name": tc["name"], "args": tc["args"]} for tc in tool_calls if tc["type"] == "tool_call"
    ]


def as_dict(msg: BaseMessage) -> dict[str, Any]:
    return {
        "name": msg.name,
        "content": msg.content,
        "tool_calls": get_tool_calls(msg),
        "type": msg.type,
    }


class Expectations:
    def __init__(self, expected: list[list[dict[str, Any]]]) -> None:
        self.expected = expected.copy()

    def __call__(self, messages: list[BaseMessage]) -> None:
        expected = self.expected.pop(0)
        received = [as_dict(m) for m in messages]
        assert expected == received


def test_worker_hide_handoffs() -> None:
    """Test that the supervisor forwards a message to a specific agent and receives the correct response."""

    @tool
    def echo_tool(text: str) -> str:
        """Echo the input text."""
        return text

    expectations: list[list[dict[str, Any]]] = [
        [
            {
                "name": None,
                "content": "Scooby-dooby-doo",
                "tool_calls": None,
                "type": "human",
            }
        ],
        [
            {
                "name": None,
                "content": "Scooby-dooby-doo",
                "tool_calls": None,
                "type": "human",
            },
            {
                "name": "echo_agent",
                "content": "Echo 1!",
                "tool_calls": [],
                "type": "ai",
            },
            {"name": "supervisor", "content": "boo", "tool_calls": [], "type": "ai"},
            {
                "name": None,
                "content": "Huh take two?",
                "tool_calls": None,
                "type": "human",
            },
        ],
    ]

    echo_model = FakeChatModelWithAssertion(
        responses=[
            AIMessage(content="Echo 1!"),
            AIMessage(content="Echo 2!"),
        ],
        assertion=Expectations(expectations),
    )
    echo_agent = create_react_agent(
        model=echo_model.bind_tools([echo_tool]),
        tools=[echo_tool],
        name="echo_agent",
    )

    supervisor_messages = [
        AIMessage(
            content="",
            tool_calls=[
                {
                    "name": "transfer_to_echo_agent",
                    "args": {},
                    "id": "call_gyQSgJQm5jJtPcF5ITe8GGGF",
                    "type": "tool_call",
                }
            ],
        ),
        AIMessage(
            content="boo",
        ),
        AIMessage(
            content="",
            tool_calls=[
                {
                    "name": "transfer_to_echo_agent",
                    "args": {},
                    "id": "call_gyQSgJQm5jJtPcF5ITe8GGGG",
                    "type": "tool_call",
                }
            ],
        ),
        AIMessage(
            content="END",
        ),
    ]

    workflow = create_supervisor(
        [echo_agent],
        model=FakeChatModel(responses=supervisor_messages),
        add_handoff_messages=False,
    )
    app = workflow.compile()

    result = app.invoke({"messages": [HumanMessage(content="Scooby-dooby-doo")]})
    app.invoke({"messages": result["messages"] + [HumanMessage(content="Huh take two?")]})


def test_supervisor_message_forwarding() -> None:
    """Test that the supervisor forwards a message to a specific agent and receives the correct response."""

    @tool
    def echo_tool(text: str) -> str:
        """Echo the input text."""
        return text

    # Agent that simply echoes the message
    echo_model = FakeChatModel(
        responses=[
            AIMessage(content="Echo: test forwarding!"),
        ]
    )
    echo_agent = create_react_agent(
        model=echo_model.bind_tools([echo_tool]),
        tools=[echo_tool],
        name="echo_agent",
    )

    supervisor_messages = [
        AIMessage(
            content="",
            tool_calls=[
                {
                    "name": "transfer_to_echo_agent",
                    "args": {},
                    "id": "call_gyQSgJQm5jJtPcF5ITe8GGGF",
                    "type": "tool_call",
                }
            ],
        ),
        AIMessage(
            content="",
            tool_calls=[
                {
                    "name": "forward_message",
                    "args": {"from_agent": "echo_agent"},
                    "id": "abcd123",
                    "type": "tool_call",
                }
            ],
        ),
    ]

    forwarding = create_forward_message_tool("supervisor")
    workflow = create_supervisor(
        [echo_agent],
        model=FakeChatModel(responses=supervisor_messages),
        tools=[forwarding],
    )
    app = workflow.compile()

    result = app.invoke({"messages": [HumanMessage(content="Scooby-dooby-doo")]})

    def get_tool_calls(msg: BaseMessage) -> list[dict[str, Any]] | None:
        tool_calls = getattr(msg, "tool_calls", None)
        if tool_calls is None:
            return None
        return [
            {"name": tc["name"], "args": tc["args"]}
            for tc in tool_calls
            if tc["type"] == "tool_call"
        ]

    received = [
        {
            "name": msg.name,
            "content": msg.content,
            "tool_calls": get_tool_calls(msg),
            "type": msg.type,
        }
        for msg in result["messages"]
    ]

    expected = [
        {
            "name": None,
            "content": "Scooby-dooby-doo",
            "tool_calls": None,
            "type": "human",
        },
        {
            "name": "supervisor",
            "content": "",
            "tool_calls": [
                {
                    "name": "transfer_to_echo_agent",
                    "args": {},
                }
            ],
            "type": "ai",
        },
        {
            "name": "transfer_to_echo_agent",
            "content": "Successfully transferred to echo_agent",
            "tool_calls": None,
            "type": "tool",
        },
        {
            "name": "echo_agent",
            "content": "Echo: test forwarding!",
            "tool_calls": [],
            "type": "ai",
        },
        {
            "name": "echo_agent",
            "content": "Transferring back to supervisor",
            "tool_calls": [
                {
                    "name": "transfer_back_to_supervisor",
                    "args": {},
                }
            ],
            "type": "ai",
        },
        {
            "name": "transfer_back_to_supervisor",
            "content": "Successfully transferred back to supervisor",
            "tool_calls": None,
            "type": "tool",
        },
        {
            "name": "supervisor",
            "content": "Echo: test forwarding!",
            "tool_calls": [],
            "type": "ai",
        },
    ]
    assert received == expected


def test_metadata_passed_to_subagent() -> None:
    """Test that metadata from config is passed to sub-agents.

    This test verifies that when a config object with metadata is passed to the supervisor,
    the metadata is correctly passed to the sub-agent when it is invoked.
    """

    # Create a tracking agent to verify metadata is passed
    def test_node(_state: MessagesState, config: RunnableConfig) -> dict[str, list[BaseMessage]]:
        # Assert that the metadata is passed to the sub-agent
        assert config["metadata"]["test_key"] == "test_value"
        assert config["metadata"]["another_key"] == 123
        # Return a new message if the assertion passes.
        return {"messages": [AIMessage(content="Test response")]}

    tracking_agent_workflow = StateGraph(MessagesState)
    tracking_agent_workflow.add_node("test_node", test_node)
    tracking_agent_workflow.set_entry_point("test_node")
    tracking_agent_workflow.set_finish_point("test_node")
    tracking_agent = tracking_agent_workflow.compile()
    tracking_agent.name = "test_agent"

    # Create a supervisor with the tracking agent
    supervisor_model = FakeChatModel(
        responses=[
            AIMessage(
                content="",
                tool_calls=[
                    {
                        "name": "transfer_to_test_agent",
                        "args": {},
                        "id": "call_123",
                        "type": "tool_call",
                    }
                ],
            ),
            AIMessage(content="Final response"),
        ]
    )

    supervisor = create_supervisor(
        agents=[tracking_agent],
        model=supervisor_model,
    ).compile()

    # Create config with metadata
    test_metadata = {"test_key": "test_value", "another_key": 123}
    config: RunnableConfig = {"metadata": test_metadata}

    # Invoke the supervisor with the config
    result = supervisor.invoke({"messages": [HumanMessage(content="Test message")]}, config=config)
    # Get the last message in the messages list & verify it matches the value
    # returned from the node.
    assert result["messages"][-1].content == "Final response"


================================================
FILE: tests/test_supervisor_functional_api.py
================================================
"""Tests for the supervisor module using functional API."""
# mypy: ignore-errors

from typing import Any, Dict, List

from langchain_core.language_models.fake_chat_models import GenericFakeChatModel
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, SystemMessage
from langgraph.func import entrypoint, task
from langgraph.graph import add_messages

from langgraph_supervisor import create_supervisor


class FakeModel(GenericFakeChatModel):
    def bind_tools(self, *args: tuple, **kwargs: Any) -> "FakeModel":
        """Do nothing for now."""
        return self


def test_supervisor_functional_workflow() -> None:
    """Test supervisor workflow with a functional API agent."""
    model = FakeModel(
        messages=iter([AIMessage(content="Mocked response")]),
    )

    # Create a joke agent using functional API
    @task
    def generate_joke(messages: List[BaseMessage]) -> BaseMessage:
        """Generate a joke using the model."""
        return model.invoke([SystemMessage(content="Write a short joke")] + list(messages))

    @entrypoint()
    def joke_agent(state: Dict[str, Any]) -> Dict[str, Any]:
        """Joke agent entrypoint."""
        joke = generate_joke(state["messages"]).result()
        messages = add_messages(state["messages"], joke)
        return {"messages": messages}

    # Set agent name
    joke_agent.name = "joke_agent"

    # Create supervisor workflow
    workflow = create_supervisor(
        [joke_agent], model=model, prompt="You are a supervisor managing a joke expert."
    )

    # Compile and test
    app = workflow.compile()
    assert app is not None

    result = app.invoke({"messages": [HumanMessage(content="Tell me a joke!")]})

    # Verify results
    assert "messages" in result
    assert len(result["messages"]) > 0
    assert any("joke" in msg.content.lower() for msg in result["messages"])
Download .txt
gitextract_1p582z2o/

├── .github/
│   ├── actions/
│   │   └── uv_setup/
│   │       └── action.yml
│   └── workflows/
│       ├── _lint.yml
│       ├── _test.yml
│       ├── ci.yml
│       └── release.yml
├── .gitignore
├── LICENSE
├── Makefile
├── README.md
├── langgraph_supervisor/
│   ├── __init__.py
│   ├── agent_name.py
│   ├── handoff.py
│   ├── py.typed
│   └── supervisor.py
├── pyproject.toml
└── tests/
    ├── __init__.py
    ├── test_agent_name.py
    ├── test_supervisor.py
    └── test_supervisor_functional_api.py
Download .txt
SYMBOL INDEX (39 symbols across 6 files)

FILE: langgraph_supervisor/agent_name.py
  function _is_content_blocks_content (line 20) | def _is_content_blocks_content(content: list[dict | str] | str) -> TypeG...
  function add_inline_agent_name (line 29) | def add_inline_agent_name(message: BaseMessage) -> BaseMessage:
  function remove_inline_agent_name (line 58) | def remove_inline_agent_name(message: BaseMessage) -> BaseMessage:
  function with_agent_name (line 108) | def with_agent_name(

FILE: langgraph_supervisor/handoff.py
  function _normalize_agent_name (line 16) | def _normalize_agent_name(agent_name: str) -> str:
  function _has_multiple_content_blocks (line 21) | def _has_multiple_content_blocks(content: str | list[str | dict]) -> Typ...
  function _remove_non_handoff_tool_calls (line 26) | def _remove_non_handoff_tool_calls(
  function create_handoff_tool (line 55) | def create_handoff_tool(
  function create_handoff_back_messages (line 132) | def create_handoff_back_messages(
  function create_forward_message_tool (line 155) | def create_forward_message_tool(supervisor_name: str = "supervisor") -> ...

FILE: langgraph_supervisor/supervisor.py
  function _supports_disable_parallel_tool_calls (line 48) | def _supports_disable_parallel_tool_calls(model: LanguageModelLike) -> b...
  function _make_call_agent (line 66) | def _make_call_agent(
  function _get_handoff_destinations (line 130) | def _get_handoff_destinations(tools: Sequence[BaseTool | Callable]) -> l...
  function _prepare_tool_node (line 146) | def _prepare_tool_node(
  class _OuterState (line 205) | class _OuterState(TypedDict):
  function create_supervisor (line 211) | def create_supervisor(

FILE: tests/test_agent_name.py
  function test_add_inline_agent_name (line 9) | def test_add_inline_agent_name() -> None:
  function test_add_inline_agent_name_content_blocks (line 27) | def test_add_inline_agent_name_content_blocks() -> None:
  function test_remove_inline_agent_name (line 54) | def test_remove_inline_agent_name() -> None:
  function test_remove_inline_agent_name_content_blocks (line 79) | def test_remove_inline_agent_name_content_blocks() -> None:
  function test_remove_inline_agent_name_multiline_content (line 106) | def test_remove_inline_agent_name_multiline_content() -> None:

FILE: tests/test_supervisor.py
  class FakeChatModel (line 22) | class FakeChatModel(BaseChatModel):
    method _llm_type (line 27) | def _llm_type(self) -> str:
    method _generate (line 30) | def _generate(
    method bind_tools (line 41) | def bind_tools(
  function test_supervisor_basic_workflow (line 165) | def test_supervisor_basic_workflow(
  class FakeChatModelWithAssertion (line 291) | class FakeChatModelWithAssertion(FakeChatModel):
    method _generate (line 294) | def _generate(
  function get_tool_calls (line 305) | def get_tool_calls(msg: BaseMessage) -> list[dict[str, Any]] | None:
  function as_dict (line 314) | def as_dict(msg: BaseMessage) -> dict[str, Any]:
  class Expectations (line 323) | class Expectations:
    method __init__ (line 324) | def __init__(self, expected: list[list[dict[str, Any]]]) -> None:
    method __call__ (line 327) | def __call__(self, messages: list[BaseMessage]) -> None:
  function test_worker_hide_handoffs (line 333) | def test_worker_hide_handoffs() -> None:
  function test_supervisor_message_forwarding (line 428) | def test_supervisor_message_forwarding() -> None:
  function test_metadata_passed_to_subagent (line 560) | def test_metadata_passed_to_subagent() -> None:

FILE: tests/test_supervisor_functional_api.py
  class FakeModel (line 14) | class FakeModel(GenericFakeChatModel):
    method bind_tools (line 15) | def bind_tools(self, *args: tuple, **kwargs: Any) -> "FakeModel":
  function test_supervisor_functional_workflow (line 20) | def test_supervisor_functional_workflow() -> None:
Condensed preview — 19 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (91K chars).
[
  {
    "path": ".github/actions/uv_setup/action.yml",
    "chars": 480,
    "preview": "# TODO: https://docs.astral.sh/uv/guides/integration/github/#caching\n\nname: uv-install\ndescription: Set up Python and uv"
  },
  {
    "path": ".github/workflows/_lint.yml",
    "chars": 1162,
    "preview": "name: lint\n\non:\n  workflow_call:\n    inputs:\n      working-directory:\n        required: true\n        type: string\n      "
  },
  {
    "path": ".github/workflows/_test.yml",
    "chars": 989,
    "preview": "name: test\n\non:\n  workflow_call:\n    inputs:\n      working-directory:\n        required: true\n        type: string\n      "
  },
  {
    "path": ".github/workflows/ci.yml",
    "chars": 2457,
    "preview": "---\nname: Run CI Tests\n\non:\n  push:\n    branches: [ main ]\n  pull_request:\n  workflow_dispatch:  # Allows to trigger the"
  },
  {
    "path": ".github/workflows/release.yml",
    "chars": 4768,
    "preview": "name: release\nrun-name: Release ${{ inputs.working-directory }} by @${{ github.actor }}\non:\n  workflow_call:\n    inputs:"
  },
  {
    "path": ".gitignore",
    "chars": 410,
    "preview": "# Pyenv\n.python-version\n.ipynb_checkpoints/\n\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n"
  },
  {
    "path": "LICENSE",
    "chars": 1072,
    "preview": "MIT License\n\nCopyright (c) 2025 LangChain, Inc.\n\nPermission is hereby granted, free of charge, to any person obtaining a"
  },
  {
    "path": "Makefile",
    "chars": 1462,
    "preview": ".PHONY: all lint format test help\n\n# Default target executed when no arguments are given to make.\nall: help\n\n###########"
  },
  {
    "path": "README.md",
    "chars": 13914,
    "preview": "# 🤖 LangGraph Multi-Agent Supervisor\n\n> **Note**: We now recommend using the **supervisor pattern directly via tools** r"
  },
  {
    "path": "langgraph_supervisor/__init__.py",
    "chars": 252,
    "preview": "from langgraph_supervisor.handoff import (\n    create_forward_message_tool,\n    create_handoff_tool,\n)\nfrom langgraph_su"
  },
  {
    "path": "langgraph_supervisor/agent_name.py",
    "chars": 5732,
    "preview": "import re\nfrom typing import Any, Literal, Sequence, TypeGuard, cast\n\nfrom langchain_core.language_models import Languag"
  },
  {
    "path": "langgraph_supervisor/handoff.py",
    "chars": 8058,
    "preview": "import re\nimport uuid\nfrom typing import TypeGuard, cast\n\nfrom langchain_core.messages import AIMessage, ToolCall, ToolM"
  },
  {
    "path": "langgraph_supervisor/py.typed",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "langgraph_supervisor/supervisor.py",
    "chars": 18152,
    "preview": "import inspect\nfrom typing import Any, Callable, Literal, Optional, Sequence, Type, Union, cast, get_args\nfrom uuid impo"
  },
  {
    "path": "pyproject.toml",
    "chars": 1637,
    "preview": "[build-system]\nrequires = [\"pdm-backend\"]\nbuild-backend = \"pdm.backend\"\n\n[project]\nname = \"langgraph-supervisor\"\nversion"
  },
  {
    "path": "tests/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/test_agent_name.py",
    "chars": 4556,
    "preview": "from langchain_core.messages import AIMessage, HumanMessage\n\nfrom langgraph_supervisor.agent_name import (\n    add_inlin"
  },
  {
    "path": "tests/test_supervisor.py",
    "chars": 18918,
    "preview": "\"\"\"Tests for the supervisor module.\"\"\"\n# mypy: ignore-errors\n\nfrom collections.abc import Callable, Sequence\nfrom typing"
  },
  {
    "path": "tests/test_supervisor_functional_api.py",
    "chars": 1883,
    "preview": "\"\"\"Tests for the supervisor module using functional API.\"\"\"\n# mypy: ignore-errors\n\nfrom typing import Any, Dict, List\n\nf"
  }
]

About this extraction

This page contains the full source code of the langchain-ai/langgraph-supervisor GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 19 files (83.9 KB), approximately 19.8k tokens, and a symbol index with 39 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.

Copied to clipboard!