Full Code of unclecode/groqcall

Repository: unclecode/groqcall
Branch: main
Commit: 009c301a8a71
Files: 50
Total size: 414.1 KB

Directory structure:
groqcall/

├── .github/
│   └── workflows/
│       └── main.yml
├── .gitignore
├── LICENSE
├── README.md
├── README_old.md
├── app/
│   ├── __init__.py
│   ├── config.py
│   ├── functions/
│   │   ├── __init__.py
│   │   ├── base.py
│   │   └── duckduck.py
│   ├── libs/
│   │   ├── __init__.py
│   │   ├── base_handler.py
│   │   ├── chains copy.py
│   │   ├── chains.py
│   │   ├── context.py
│   │   ├── provider_handler.py
│   │   ├── tools_handler.py
│   │   └── vision_handler.py
│   ├── main.py
│   ├── models.py
│   ├── prompts.py
│   ├── providers.py
│   ├── reasoning/
│   │   ├── __init__.py
│   │   ├── base.py
│   │   └── rerank.py
│   ├── routes/
│   │   ├── __init__.py
│   │   ├── examples.py
│   │   └── proxy.py
│   └── utils.py
├── cookbook/
│   ├── ai_assistant_custome_tools.py
│   ├── cinemax.json
│   ├── function_call_force_schema.py
│   ├── function_call_force_tool_choice.py
│   ├── function_call_ollama.py
│   ├── function_call_phidata.py
│   ├── function_call_vision.py
│   ├── function_call_with_schema.py
│   ├── function_call_without_schema.py
│   ├── functiona_call_groq_langchain.py
│   └── resources.py
├── examples/
│   ├── example_1.py
│   ├── example_2.py
│   ├── example_3.py
│   └── example_4.py
├── frontend/
│   ├── assets/
│   │   ├── README.md
│   │   ├── markdown.css
│   │   └── style.css
│   └── pages/
│       ├── index.html
│       └── index_old.html
└── requirements.txt

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/workflows/main.yml
================================================
name: Auto PR and Merge on Push by Specific User

on:
  push:
    branches:
      - main

jobs:
  auto-pr-and-merge:
    runs-on: ubuntu-latest
    steps:
      - name: Check User
        id: check_user
        run: |
          echo "user_matched=${{ github.actor == 'unclecode' }}"
          echo "user_matched=${{ github.actor == 'unclecode' }}" >> $GITHUB_ENV

      - name: Create Pull Request
        if: env.user_matched == 'true'
        id: create_pull_request
        uses: actions/github-script@v5
        with:
          script: |
            const payload = {
              owner: context.repo.owner,
              repo: context.repo.repo,
              head: 'main',
              base: 'live',
              title: 'Auto PR from main to live',
              body: 'Automatically generated PR to keep live branch up-to-date',
              draft: false,
            };
            
            // Create the pull request
            await github.rest.pulls.create(payload).then(pr => {
              core.setOutput('pr_number', pr.data.number);
            }).catch(err => core.setFailed(`Failed to create PR: ${err.message}`));

      - name: Merge Pull Request
        if: env.user_matched == 'true'
        uses: actions/github-script@v5
        with:
          script: |
            const pr_number = '${{ steps.create_pull_request.outputs.pr_number }}';
            if (!pr_number) {
              core.setFailed('PR number is undefined, skipping merge.');
              return;
            }

            const payload = {
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: parseInt(pr_number, 10),
              merge_method: 'merge', // Options: 'merge', 'squash', or 'rebase'
            };
            
            // Attempt to merge the pull request
            await github.rest.pulls.merge(payload).then(response => {
              if (response.status !== 200) {
                core.setFailed('Failed to merge the pull request');
              }
            }).catch(err => core.setFailed(`Failed to merge PR: ${err.message}`));


================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# poetry
#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
#   This is especially recommended for binary packages to ensure reproducibility, and is more
#   commonly ignored for libraries.
#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
#   in version control.
#   https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
#  and can be added to the global gitignore or merged into this file.  For a more nuclear
#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/


app.log
.vscode
app/routes/proxy_all_in_one.py 

================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

You must give any other recipients of the Work or Derivative Works a copy of this License; and
You must cause any modified files to carry prominent notices stating that You changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS


================================================
FILE: README.md
================================================
# GroqCall.ai - Lightning-Fast LLM Function Calls

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1q3is7qynCsx4s7FBznCfTMnokbKWIv1F?usp=sharing)
[![Version](https://img.shields.io/badge/version-0.0.5-blue.svg)](https://github.com/unclecode/groqcall)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

GroqCall is a proxy server that enables lightning-fast function calls for Groq's Language Processing Unit (LPU) and other AI providers. It simplifies the creation of AI assistants by offering a wide range of built-in functions hosted on the cloud.

## Quickstart

### Using the Pre-built Server

To start using GroqCall right away, send requests to the cloud base URL; use the local one if you run the proxy yourself (see the next section):

- Cloud: `https://groqcall.ai/proxy/groq/v1`
- Local: `http://localhost:8000/proxy/groq/v1` (if running the proxy server locally)

### Running the Proxy Locally

1. Clone the repository:
```
git clone https://github.com/unclecode/groqcall.git
cd groqcall
```

2. Create and activate a virtual environment:
```
python -m venv venv
source venv/bin/activate
```

3. Install dependencies:
```
pip install -r requirements.txt
```

4. Run the FastAPI server:
```
./venv/bin/uvicorn --app-dir app/ main:app --reload
```
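
With the server running, you can send a quick smoke test. The snippet below assumes you have a Groq API key; the endpoint path is the same one used in the examples that follow:

```python
import requests

# The local proxy started in the previous step.
url = "http://localhost:8000/proxy/groq/v1/chat/completions"

resp = requests.post(
    url,
    headers={
        "Authorization": "Bearer YOUR_GROQ_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "mixtral-8x7b-32768",
        "messages": [{"role": "user", "content": "Say hello."}],
    },
)
print(resp.status_code, resp.json()["choices"][0]["message"]["content"])
```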

## Examples

### Using GroqCall with PhiData

```python
from phi.llm.openai.like import OpenAILike
from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo

my_groq = OpenAILike(
    model="mixtral-8x7b-32768",
    api_key="YOUR_GROQ_API_KEY",
    base_url="https://groqcall.ai/proxy/groq/v1"  # or "http://localhost:8000/proxy/groq/v1" if running locally
)

assistant = Assistant(
    llm=my_groq,
    tools=[DuckDuckGo()], 
    show_tool_calls=True, 
    markdown=True
)

assistant.print_response("What's happening in France? Summarize top stories with sources, very short and concise.", stream=False)
```

### Using GroqCall with Requests

#### FuncHub: Schema-less Function Calls

GroqCall introduces FuncHub, which allows you to make function calls without passing the function schema. 

```python
import requests

api_key = "YOUR_GROQ_API_KEY"
header = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}

proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions" # or "http://localhost:8000/proxy/groq/v1/chat/completions" if running locally

request = {
    "messages": [
        {
            "role": "system",
            "content": "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.\n<instructions>\n1. Use markdown to format your answers.\n</instructions>"
        },
        {
            "role": "user", 
            "content": "What's happening in France? Summarize top stories with sources, very short and concise."
        }
    ],
    "model": "mixtral-8x7b-32768",
    "tool_choice": "auto",
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "duckduck.search"
            }
        },
        {
            "type": "function",
            "function": {
                "name": "duckduck.news"
            }
        }
    ]
}

response = requests.post(
    proxy_url,
    headers=header,
    json=request
)

print(response.json()["choices"][0]["message"]["content"])
```

- Notice that no function schema is passed in the request. GroqCall uses FuncHub to detect and execute the function in the cloud based on its name alone, so you don't need to parse the first response, call the function yourself, and send the result back. Check the `functions` folder to add your own functions (a sketch follows below); I will create more examples in the near future explaining how to add them.
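
For illustration, here is what a new built-in could look like, following the pattern in `app/functions/base.py` and `app/functions/duckduck.py`. A function named `weather.current` would resolve to a `CurrentFunction` class in `app/functions/weather.py`; the module, class, and stubbed weather lookup below are hypothetical:

```python
# Hypothetical app/functions/weather.py -- mirrors the structure of duckduck.py.
from typing import Optional

from pydantic import Field

from .base import Function


class CurrentFunction(Function):
    # Callable as {"type": "function", "function": {"name": "weather.current"}}
    name = "weather.current"
    description = "Use this function to get the current weather for a city."

    class Schema(Function.Schema):
        city: str = Field(..., description="The city to look up.")
        units: Optional[str] = Field("metric", description="Unit system: metric or imperial.")

    @classmethod
    def run(cls, **kwargs):
        city = kwargs.get("city")
        units = kwargs.get("units", "metric")
        # A real implementation would call a weather API here;
        # this stub keeps the sketch self-contained.
        return {"city": city, "units": units, "temperature": 21, "condition": "clear"}
```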

#### Passing Function Schemas

If you prefer to pass your own function schemas, refer to the [Function Schema example](https://github.com/unclecode/groqcall/blob/main/cookbook/function_call_with_schema.py) in the cookbook.

#### Running the Proxy with Ollama Locally

The function call proxy can also be used with Ollama. First install Ollama and run it locally, then refer to the [Ollama example](https://github.com/unclecode/groqcall/blob/main/cookbook/function_call_ollama.py) in the cookbook.

## Cookbook

Explore the [Cookbook](https://github.com/unclecode/groqcall/tree/main/cookbook) for more examples and use cases of GroqCall.

## Motivation

Groq is a startup that designs highly specialized processor chips aimed specifically at running inference on large language models. They've introduced what they call the Language Processing Unit (LPU), and the speed is astounding—capable of producing 500 to 800 tokens per second or more.

As an admirer of Groq and their community, I built this proxy to enable function calls using the OpenAI interface, allowing it to be called from any library. This engineering workaround has proven to be immensely useful in my company for various projects.

## Contributing

Contributions are welcome! If you have ideas, suggestions, or would like to contribute to this project, please reach out to me on Twitter (X) @unclecode or via email at unclecode@kidocode.com.

Let's collaborate and make this repository even more awesome! 🚀

## License

This project is licensed under the Apache License 2.0. See [LICENSE](https://github.com/unclecode/groqcall/blob/main/LICENSE) for more information.

================================================
FILE: README_old.md
================================================
# GroqCall.ai (I changed the name from FunckyCall to GroqCall)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1q3is7qynCsx4s7FBznCfTMnokbKWIv1F?usp=sharing)
[![Version](https://img.shields.io/badge/version-0.0.1-blue.svg)](https://github.com/unclecode/groqcall)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

GroqCall is a proxy server that provides function calling for Groq's lightning-fast Language Processing Unit (LPU) and other AI providers. Additionally, the upcoming FuncyHub will offer a wide range of built-in functions, hosted on the cloud, making it easier to create AI assistants without the need to maintain function schemas in the codebase or execute them through multiple calls.

## Motivation 🚀
Groq is a startup that designs highly specialized processor chips aimed specifically at running inference on large language models. They've introduced what they call the Language Processing Unit (LPU), and the speed is astounding—capable of producing 500 to 800 tokens per second or more. I've become a big fan of Groq and their community.


I admire what they're doing. It feels like after discovering electricity, the next challenge is moving it around quickly and efficiently. Groq is doing just that for Artificial Intelligence, making it easily accessible everywhere. They've opened up their API to the cloud, but as of now, they lack a function call capability.

Unable to wait for this feature, I built a proxy that enables function calls using the OpenAI interface, allowing it to be called from any library. This engineering workaround has proven to be immensely useful in my company for various projects. Here's the link to the GitHub repository where you can explore and play around with it. I've included some examples in this Colab notebook for you to check out.

<img width="150" src = "https://res.cloudinary.com/kidocode/image/upload/v1710148127/GroqChip-1-Die_lgi95d.jpg"/>

<img title="Powered by Groq" alt="Powered by Groq" width = "150" src="https://res.cloudinary.com/kidocode/image/upload/v1710142103/Stack_PBG_White_n6qdbj.svg">



## Running the Proxy Locally 🖥️
To run this proxy locally on your own machine, follow these steps:

1. Clone the GitHub repository:
```git clone https://github.com/unclecode/groqcall.git```

2. Navigate to the project directory:
```cd groqcall```

3. Create a virtual environment:
```python -m venv venv```

4. Activate the virtual environment:
```source venv/bin/activate```

5. Install the required libraries:
```pip install -r requirements.txt```

6. Run the FastAPI server:
```./venv/bin/uvicorn --app-dir app/ main:app --reload```


## Using the Pre-built Server 🌐
For your convenience, I have already set up a server that you can use temporarily. This allows you to quickly start using the proxy without having to run it locally.

To use the pre-built server, simply make requests to the following base URL:
```https://groqcall.ai/proxy/groq/v1```


## Exploring GroqCall.ai 🚀
This README is organized into three main sections, each showcasing different aspects of GroqCall.ai:

- **Sending POST Requests**: Here, I explore the functionality of sending direct POST requests to LLMs using GroqCall.ai. This section highlights the flexibility and control offered by the library when interacting with LLMs.
- **FuncHub**: The second section introduces FuncHub, a feature that simplifies executing functions. With FuncHub there is no need to send the function's JSON schema explicitly, as the functions are already hosted on the proxy server. This streamlines the workflow, letting developers obtain results with a single call without having to handle function calls on a production server.
- **Using GroqCall with PhiData**: In this section, I demonstrate how GroqCall.ai can be seamlessly integrated with other libraries such as my favorite one, the PhiData library, leveraging its built-in tools to connect to LLMs and perform external tool requests.


```python
# The following libraries are optional if you're interested in using PhiData or managing your tools on the client side.
!pip install phidata > /dev/null
!pip install openai > /dev/null
!pip install duckduckgo-search > /dev/null
```

## Sending a POST Request with Full Function Implementations


```python
from duckduckgo_search import DDGS
import requests, os
import json

# Here you pass your own GROQ API key (read from Colab secrets in this notebook)
from google.colab import userdata

api_key = userdata.get("GROQ_API_KEY")
header = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}
proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions"


def duckduckgo_search(query, max_results=None):
    """
    Use this function to search DuckDuckGo for a query.
    """
    with DDGS() as ddgs:
        return [r for r in ddgs.text(query, safesearch='off', max_results=max_results)]

def duckduckgo_news(query, max_results=None):
    """
    Use this function to get the latest news from DuckDuckGo.
    """
    with DDGS() as ddgs:
        return [r for r in ddgs.news(query, safesearch='off', max_results=max_results)]

function_map = {
    "duckduckgo_search": duckduckgo_search,
    "duckduckgo_news": duckduckgo_news,
}

request = {
    "messages": [
        {
            "role": "system",
            "content": "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.\n<instructions>\n1. Use markdown to format your answers.\n</instructions>"
        },
        {
            "role": "user",
            "content": "Whats happening in France? Summarize top stories with sources, very short and concise."
        }
    ],
    "model": "mixtral-8x7b-32768",
    "tool_choice": "auto",
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "duckduckgo_search",
                "description": "Use this function to search DuckDuckGo for a query.\n\nArgs:\n    query(str): The query to search for.\n    max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n    The result from DuckDuckGo.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {
                            "type": "string"
                        },
                        "max_results": {
                            "type": [
                                "number",
                                "null"
                            ]
                        }
                    }
                }
            }
        },
        {
            "type": "function",
            "function": {
                "name": "duckduckgo_news",
                "description": "Use this function to get the latest news from DuckDuckGo.\n\nArgs:\n    query(str): The query to search for.\n    max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n    The latest news from DuckDuckGo.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {
                            "type": "string"
                        },
                        "max_results": {
                            "type": [
                                "number",
                                "null"
                            ]
                        }
                    }
                }
            }
        }
    ]
}

response = requests.post(
    proxy_url,
    headers=header,
    json=request
)
if response.status_code == 200:
    res = response.json()
    message = res['choices'][0]['message']
    tools_response_messages = []
    if not message['content'] and 'tool_calls' in message:
        for tool_call in message['tool_calls']:
            tool_name = tool_call['function']['name']
            tool_args = tool_call['function']['arguments']
            tool_args = json.loads(tool_args)
            if tool_name not in function_map:
                print(f"Error: {tool_name} is not a valid function name.")
                continue
            tool_func = function_map[tool_name]
            tool_response = tool_func(**tool_args)
            tools_response_messages.append({
                "role": "tool", "content": json.dumps(tool_response)
            })

        if tools_response_messages:
            request['messages'] += tools_response_messages
            response = requests.post(
                proxy_url,
                headers=header,
                json=request
            )
            if response.status_code == 200:
                res = response.json()
                print(res['choices'][0]['message']['content'])
            else:
                print("Error:", response.status_code, response.text)
    else:
        print(message['content'])
else:
    print("Error:", response.status_code, response.text)

```

## Schema-less Function Call 🤩
In this method, we only need to provide the function's name, which consists of two parts, acting as a sort of namespace. The first part identifies the library or toolkit containing the functions, and the second part specifies the function's name, assuming it's already available on the proxy server. I aim to collaborate with the community to incorporate all typical functions, eliminating the need for passing a schema. Without having to handle function calls ourselves, a single request to the proxy enables it to identify and execute the functions, retrieve responses from large language models, and return the results to us. Thanks to Groq, all of this occurs in just seconds.


```python
import requests
from google.colab import userdata  # Colab secrets; outside Colab, supply the key directly

api_key = userdata.get("GROQ_API_KEY")
header = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}

proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions"


request = {
    "messages": [
        {
            "role": "system",
            "content": "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.\n<instructions>\n1. Use markdown to format your answers.\n</instructions>",
        },
        {
            "role": "user",
            "content": "Whats happening in France? Summarize top stories with sources, very short and concise. Also please search about the histoy of france as well.",
        },
    ],
    "model": "mixtral-8x7b-32768",
    "tool_choice": "auto",
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "duckduck.search",
            },
        },
        {
            "type": "function",
            "function": {
                "name": "duckduck.news",
            },
        },
    ],
}

response = requests.post(
    proxy_url,
    headers=header,
    json=request,
)

if response.status_code == 200:
    res = response.json()
    print(res["choices"][0]["message"]["content"])
else:
    print("Error:", response.status_code, response.text)

```

## Using with PhiData
PhiData is a favorite of mine for creating AI assistants, thanks to its beautifully simple interface, unlike the complexity of LangChain and LlamaIndex. I use it for many projects and want to give kudos to their team. It's open source, and I recommend everyone check it out. You can explore more at https://github.com/phidatahq/phidata.


```python
from google.colab import userdata
from phi.llm.openai.like import OpenAILike
from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo
import os, json


my_groq = OpenAILike(
        model="mixtral-8x7b-32768",
        api_key=userdata.get("GROQ_API_KEY"),
        base_url="https://groqcall.ai/proxy/groq/v1"
    )
assistant = Assistant(
    llm=my_groq,
    tools=[DuckDuckGo()], show_tool_calls=True, markdown=True
)
assistant.print_response("What's happening in France? Summarize top stories with sources, very short and concise.", stream=False)


```

## Contributions Welcome! 🙌
I am excited to extend and grow this repository by adding more built-in functions and integrating additional services. If you are interested in contributing to this project and being a part of its development, I would love to collaborate with you! I plan to create a Discord channel for this project, where we can discuss ideas, share knowledge, and work together to enhance the repository.

Here's how you can get involved:

1. Fork the repository and create your own branch.
2. Implement new functions, integrate additional services, or make improvements to the existing codebase.
3. Test your changes to ensure they work as expected.
4. Submit a pull request describing the changes you have made and why they are valuable.

If you have any ideas, suggestions, or would like to discuss potential contributions, feel free to reach out to me. You can contact me through the following channels:

- Twitter (X): @unclecode
- Email: unclecode@kidocode.com

### Copyright 2024 Unclecode (Hossein Tohidi)

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

I'm open to collaboration and excited to see how we can work together to enhance this project and provide value to the community. Let's connect and explore how we can help each other!

Together, let's make this repository even more awesome! 🚀


================================================
FILE: app/__init__.py
================================================


================================================
FILE: app/config.py
================================================
# To be developed
# Number of evaluation cycles for the (planned) reasoning/rerank layer.
EVALUATION_CYCLES_COUNT = 1
# Retry budget when the parser model returns tool-call JSON that fails to parse.
PARSE_ERROR_TRIES = 5

================================================
FILE: app/functions/__init__.py
================================================


================================================
FILE: app/functions/base.py
================================================
from pydantic import BaseModel
from typing import Dict

class Function:
    name: str
    description: str

    class Schema(BaseModel):
        pass

    @classmethod
    def get_schema(cls) -> Dict:
        schema_dict = {
            "name": cls.name,
            "description": cls.description,
            "parameters": cls.Schema.schema(),
        }
        return schema_dict

================================================
FILE: app/functions/duckduck.py
================================================
from pydantic import BaseModel, Field
from typing import Optional, Dict
import requests
import json
from duckduckgo_search import DDGS

from .base import Function

class SearchFunction(Function):
    name = "duckduck.search"
    description = "Use this function to search DuckDuckGo for a query.\n\nArgs:\n    query(str): The query to search for.\n    max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n    The result from DuckDuckGo."

    class Schema(Function.Schema):
        query: str = Field(..., description="The query to search for.")
        max_results: Optional[int] = Field(5, description="The maximum number of results to return.")

    @classmethod
    def run(cls, **kwargs):
        query = kwargs.get("query")
        max_results = kwargs.get("max_results")
        with DDGS() as ddgs:
            return [r for r in ddgs.text(query, safesearch='off', max_results=max_results)]
        

class NewsFunction(Function):
    name = "duckduck.news"
    description = "Use this function to get the latest news from DuckDuckGo.\n\nArgs:\n    query(str): The query to search for.\n    max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n    The latest news from DuckDuckGo."

    class Schema(Function.Schema):
        query: str = Field(..., description="The query to search for.")
        max_results: Optional[int] = Field(5, description="The maximum number of results to return.")

    @classmethod
    def run(cls, **kwargs):
        query = kwargs.get("query")
        max_results = kwargs.get("max_results")

        with DDGS() as ddgs:
            results = [r for r in ddgs.news(query, safesearch='off', max_results=max_results)]
        return results
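
When a request lists a tool without a `parameters` key, the proxy treats it as a built-in and resolves its namespaced name to one of these classes. A minimal sketch of that lookup, mirroring the resolution logic in `app/libs/chains copy.py` (`module.func` becomes class `{Func.capitalize()}Function` in `app/functions/{module}.py`):

```python
import importlib


def resolve_builtin(func_namespace: str):
    # "duckduck.search" -> module "duckduck", class "SearchFunction"
    module_name, func_name = func_namespace.split(".")
    module = importlib.import_module(f"app.functions.{module_name}")
    return getattr(module, f"{func_name.capitalize()}Function", None)


search = resolve_builtin("duckduck.search")
print(search.get_schema()["name"])            # duckduck.search
print(search.run(query="groq lpu", max_results=3))
```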

================================================
FILE: app/libs/__init__.py
================================================
from .context import Context
from .base_handler import Handler, DefaultCompletionHandler, ExceptionHandler, FallbackHandler
from .provider_handler import ProviderSelectionHandler
from .vision_handler import ImageMessageHandler
from .tools_handler import ToolExtractionHandler, ToolResponseHandler

__all__ = [
    "Context",
    "Handler",
    "DefaultCompletionHandler",
    "ExceptionHandler",
    "ProviderSelectionHandler",
    "ImageMessageHandler",
    "ToolExtractionHandler",
    "ToolResponseHandler",
    "FallbackHandler",
]


================================================
FILE: app/libs/base_handler.py
================================================
from abc import ABC, abstractmethod
from .context import Context
from fastapi.responses import JSONResponse
import traceback

class Handler(ABC):
    """Abstract Handler class for building the chain of handlers."""

    _next_handler: "Handler" = None

    def set_next(self, handler: "Handler") -> "Handler":
        self._next_handler = handler
        return handler

    @abstractmethod
    async def handle(self, context: Context):
        if self._next_handler:
            try:
                return await self._next_handler.handle(context)
            except Exception as e:
                _exception_handler: "Handler" = ExceptionHandler()
                # Extract the stack trace and log the exception
                return await _exception_handler.handle(self._next_handler, context, e)


class DefaultCompletionHandler(Handler):
    async def handle(self, context: Context):
        if context.is_normal_chat:
            # Assuming context.client is set and has a method for creating chat completions
            completion = context.client.route(
                messages=context.messages,
                **context.client.clean_params(context.params),
            )
            context.response = completion.model_dump()
            return JSONResponse(content=context.response, status_code=200)

        return await super().handle(context)


class FallbackHandler(Handler):
    async def handle(self, context: Context):
        # This handler does not pass the request further down the chain.
        # It acts as a fallback when no other handler has processed the request.
        if not context.response:
            # The default action when no other handlers have processed the request
            context.response = {"message": "No suitable action found for the request."}
            return JSONResponse(content=context.response, status_code=400)

        # If there's already a response set in the context, it means one of the handlers has processed the request.
        return JSONResponse(content=context.response, status_code=200)


class ExceptionHandler(Handler):
    async def handle(self, handler: Handler, context: Context, exception: Exception):
        print(f"Error processing the request: {str(handler.__class__) } - {exception}")
        # print(traceback.format_exc())
        return JSONResponse(
            content={"error": "An unexpected error occurred, within handler " + str(handler.__class__) + " : " + str(exception)},
            status_code=500,
        )
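
These handlers form a chain of responsibility: each either answers the request or defers to the next handler via `super().handle(context)`. The actual wiring lives in `app/main.py`, which is not shown at this point in the extract, so the assembly below is an assumption based on the exports in `app/libs/__init__.py`:

```python
# Hypothetical wiring -- a sketch, not the shipped main.py.
from app.libs import (
    ProviderSelectionHandler,
    ImageMessageHandler,
    ToolExtractionHandler,
    ToolResponseHandler,
    DefaultCompletionHandler,
    FallbackHandler,
)

chain = ProviderSelectionHandler()
(
    chain.set_next(ImageMessageHandler())
    .set_next(ToolExtractionHandler())
    .set_next(ToolResponseHandler())
    .set_next(DefaultCompletionHandler())
    .set_next(FallbackHandler())  # terminal fallback: always returns a response
)

# Inside a FastAPI route handler:
#     return await chain.handle(context)
```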




================================================
FILE: app/libs/chains copy.py
================================================
from abc import ABC, abstractmethod
from typing import Any, Dict
from importlib import import_module
import json
import uuid
import traceback
from fastapi import Request
from fastapi.responses import JSONResponse
from providers import BaseProvider
from prompts import *
from providers import GroqProvider
import importlib
from utils import get_tool_call_response, create_logger, describe

missed_tool_logger = create_logger(
    "chain.missed_tools", ".logs/empty_tool_tool_response.log"
)


class Context:
    def __init__(self, request: Request, provider: str, body: Dict[str, Any]):
        self.request = request
        self.provider = provider
        self.body = body
        self.response = None

        # extract all keys from body except messages and tools and set in params
        self.params = {k: v for k, v in body.items() if k not in ["messages", "tools"]}

        # self.no_tool_behaviour = self.params.get("no_tool_behaviour", "return")
        self.no_tool_behaviour = self.params.get("no_tool_behaviour", "forward")
        self.params.pop("no_tool_behaviour", None)

        # Todo: For now, no stream, sorry ;)
        self.params["stream"] = False

        self.messages = body.get("messages", [])
        self.tools = body.get("tools", [])

        self.builtin_tools = [
            t for t in self.tools if "parameters" not in t["function"]
        ]
        self.builtin_tool_names = [t["function"]["name"] for t in self.builtin_tools]
        self.custom_tools = [t for t in self.tools if "parameters" in t["function"]]

        for bt in self.builtin_tools:
            func_namespace = bt["function"]["name"]
            if len(func_namespace.split(".")) == 2:
                module_name, func_class_name = func_namespace.split(".")
                func_class_name = f"{func_class_name.capitalize()}Function"
                # raise ValueError("Only one builtin function can be called at a time.")
                module = importlib.import_module(f"app.functions.{module_name}")
                func_class = getattr(module, func_class_name, None)
                schema_dict = func_class.get_schema()
                if schema_dict:
                    bt["function"] = schema_dict
                    bt["run"] = func_class.run
                    bt["extra"] = self.params.get("extra", {})
                    self.params.pop("extra", None)

        self.client: BaseProvider = None

    @property
    def last_message(self):
        return self.messages[-1] if self.messages else {}

    @property
    def is_tool_call(self):
        return bool(
            self.last_message["role"] == "user"
            and self.tools
            and self.params.get("tool_choice", "none") != "none"
        )

    @property
    def is_tool_response(self):
        return bool(self.last_message["role"] == "tool" and self.tools)

    @property
    def is_normal_chat(self):
        return bool(not self.is_tool_call and not self.is_tool_response)


class Handler(ABC):
    """Abstract Handler class for building the chain of handlers."""

    _next_handler: "Handler" = None

    def set_next(self, handler: "Handler") -> "Handler":
        self._next_handler = handler
        return handler

    @abstractmethod
    async def handle(self, context: Context):
        if self._next_handler:
            try:
                return await self._next_handler.handle(context)
            except Exception as e:
                _exception_handler: "Handler" = ExceptionHandler()
                # Extract the stack trace and log the exception
                return await _exception_handler.handle(context, e)


class ProviderSelectionHandler(Handler):
    @staticmethod
    def provider_exists(provider: str) -> bool:
        module_name = f"app.providers"
        class_name = f"{provider.capitalize()}Provider"
        try:
            provider_module = import_module(module_name)
            provider_class = getattr(provider_module, class_name)
            return bool(provider_class)
        except ImportError:
            return False

    async def handle(self, context: Context):
        # Construct the module path and class name based on the provider
        module_name = f"app.providers"
        class_name = f"{context.provider.capitalize()}Provider"

        try:
            # Dynamically import the module and class
            provider_module = import_module(module_name)
            provider_class = getattr(provider_module, class_name)

            if provider_class:
                context.client = provider_class(
                    api_key=context.api_token
                )  # Assuming an api_key parameter
                return await super().handle(context)
            else:
                raise ValueError(
                    f"Provider class {class_name} could not be found in {module_name}."
                )
        except ImportError as e:
            # Handle import error (e.g., module or class not found)
            print(f"Error importing {class_name} from {module_name}: {e}")
            context.response = {
                "error": f"An error occurred while trying to load the provider: {e}"
            }
            return JSONResponse(content=context.response, status_code=500)


class ImageMessageHandler(Handler):
    async def handle(self, context: Context):
        new_messages = []
        image_ref = 1
        for message in context.messages:
            if message["role"] == "user":
                if isinstance(message["content"], list):
                    prompt = None
                    for content in message["content"]:
                        if content["type"] == "text":
                            # new_messages.append({"role": message["role"], "content": content["text"]})
                            prompt = content["text"]
                        elif content["type"] == "image_url":
                            image_url = content["image_url"]["url"]
                            try:
                                prompt = prompt or IMAGE_DESCRIPTO_PROMPT
                                description = describe(prompt, image_url)
                                if description:
                                    description = get_image_desc_guide(image_ref, description)
                                    new_messages.append(
                                        {"role": message["role"], "content": description}
                                    )
                                    image_ref += 1
                                else:
                                    pass
                            except Exception as e:
                                print(f"Error describing image: {e}")
                                continue
                else:
                    new_messages.append(message)
            else:
                new_messages.append(message)

        context.messages = new_messages
        return await super().handle(context)
    

class ImageLLavaMessageHandler(Handler):
    async def handle(self, context: Context):
        new_messages = []
        image_ref = 1
        for message in context.messages:
            new_messages.append(message)
            if message["role"] == "user":
                if isinstance(message["content"], list):
                    for content in message["content"]:
                        if content["type"] == "text":
                            prompt = content["text"]
                        elif content["type"] == "image_url":
                            image_url = content["image_url"]["url"]
                            try:
                                description = describe(prompt, image_url)
                                new_messages.append(
                                    {"role": "assistant", "content": description}
                                )
                                image_ref += 1
                            except Exception as e:
                                print(f"Error describing image: {e}")
                                continue
        context.messages = new_messages
        return await super().handle(context)


class ToolExtractionHandler(Handler):
    async def handle(self, context: Context):
        body = context.body
        if context.is_tool_call:

            # Prepare the messages and tools for the tool extraction
            messages = [
                f"{m['role'].title()}: {m['content']}"
                for m in context.messages
                if m["role"] != "system"
            ]
            tools_json = json.dumps([t["function"] for t in context.tools], indent=4)

            # Process the tool_choice
            tool_choice = context.params.get("tool_choice", "auto")
            forced_mode = False
            if (
                type(tool_choice) == dict
                and tool_choice.get("type", None) == "function"
            ):
                tool_choice = tool_choice["function"].get("name", None)
                if not tool_choice:
                    raise ValueError(
                        "Invalid tool choice. 'tool_choice' is set to a dictionary with 'type' as 'function', but 'function' does not have a 'name' key."
                    )
                forced_mode = True

                # Regenerate the string tool_json and keep only the forced tool
                tools_json = json.dumps(
                    [
                        t["function"]
                        for t in context.tools
                        if t["function"]["name"] == tool_choice
                    ],
                    indent=4,
                )

            system_message = (
                SYSTEM_MESSAGE if not forced_mode else ENFORCED_SYSTAME_MESSAE
            )
            suffix = SUFFIX if not forced_mode else get_forced_tool_suffix(tool_choice)

            new_messages = [
                {"role": "system", "content": system_message},
                {
                    "role": "system",
                    "content": f"Conversation History:\n{''.join(messages)}\n\nTools: \n{tools_json}\n\n{suffix}",
                },
            ]

            completion, tool_calls = await self.process_tool_calls(
                context, new_messages
            )

            if not tool_calls:
                if context.no_tool_behaviour == "forward":
                    context.tools = None
                    return await super().handle(context)
                else:
                    context.response = {"tool_calls": []}
                    tool_response = get_tool_call_response(completion, [], [])
                    missed_tool_logger.debug(
                        f"Last message content: {context.last_message['content']}"
                    )
                    return JSONResponse(content=tool_response, status_code=200)

            unresolved_tol_calls = [
                t
                for t in tool_calls
                if t["function"]["name"] not in context.builtin_tool_names
            ]
            resolved_responses = []
            for tool in tool_calls:
                for bt in context.builtin_tools:
                    if tool["function"]["name"] == bt["function"]["name"]:
                        res = bt["run"](
                            **{
                                **json.loads(tool["function"]["arguments"]),
                                **bt["extra"],
                            }
                        )
                        resolved_responses.append(
                            {
                                "name": tool["function"]["name"],
                                "role": "tool",
                                "content": json.dumps(res),
                                "tool_call_id": "chatcmpl-" + completion.id,
                            }
                        )

                if not unresolved_tol_calls:
                    context.messages.extend(resolved_responses)
                    return await super().handle(context)

            tool_response = get_tool_call_response(
                completion, unresolved_tol_calls, resolved_responses
            )

            context.response = tool_response
            return JSONResponse(content=context.response, status_code=200)

        return await super().handle(context)

    async def process_tool_calls(self, context, new_messages):
        try:
            tries = 5
            tool_calls = []
            while tries > 0:
                try:
                    # Assuming the context has an instantiated client according to the selected provider
                    completion = context.client.route(
                        model=context.client.parser_model,
                        messages=new_messages,
                        temperature=0,
                        max_tokens=1024,
                        top_p=1,
                        stream=False,
                    )

                    response = completion.choices[0].message.content
                    if "```json" in response:
                        response = response.split("```json")[1].split("```")[0]

                    try:
                        tool_response = json.loads(response)
                        if isinstance(tool_response, list):
                            tool_response = {"tool_calls": tool_response}
                    except json.JSONDecodeError as e:
                        print(
                            f"Error parsing the tool response: {e}, tries left: {tries}"
                        )
                        new_messages.append(
                            {
                                "role": "user",
                                "content": f"Error: {e}.\n\n{CLEAN_UP_MESSAGE}",
                            }
                        )
                        tries -= 1
                        continue

                    for func in tool_response.get("tool_calls", []):
                        tool_calls.append(
                            {
                                "id": f"call_{func['name']}_{str(uuid.uuid4())}",
                                "type": "function",
                                "function": {
                                    "name": func["name"],
                                    "arguments": json.dumps(func["arguments"]),
                                },
                            }
                        )

                    break
                except Exception as e:
                    raise e

            if tries == 0:
                tool_calls = []

            return completion, tool_calls
        except Exception as e:
            print(f"Error processing the tool calls: {e}")
            raise e


class ToolResponseHandler(Handler):
    async def handle(self, context: Context):
        body = context.body
        if context.is_tool_response:
            messages = context.messages

            for message in messages:
                if message["role"] == "tool":
                    message["role"] = "user"
                    message["content"] = get_func_result_guide(message["content"])

            messages[-1]["role"] = "user"
            # Assuming get_func_result_guide is a function that formats the tool response
            messages[-1]["content"] = get_func_result_guide(messages[-1]["content"])

            try:
                completion = context.client.route(
                    messages=messages,
                    **context.client.clean_params(context.params),
                )
                context.response = completion.model_dump()
                return JSONResponse(content=context.response, status_code=200)
            except Exception as e:
                # Log the exception or handle it as needed
                print(e)
                context.response = {
                    "error": "An error occurred processing the tool response"
                }
                return JSONResponse(content=context.response, status_code=500)

        return await super().handle(context)


class DefaultCompletionHandler(Handler):
    async def handle(self, context: Context):
        if context.is_normal_chat:
            # Assuming context.client is set and has a method for creating chat completions
            completion = context.client.route(
                messages=context.messages,
                **context.client.clean_params(context.params),
            )
            context.response = completion.model_dump()
            return JSONResponse(content=context.response, status_code=200)

        return await super().handle(context)


class FallbackHandler(Handler):
    async def handle(self, context: Context):
        # This handler does not pass the request further down the chain.
        # It acts as a fallback when no other handler has processed the request.
        if not context.response:
            # The default action when no other handlers have processed the request
            context.response = {"message": "No suitable action found for the request."}
            return JSONResponse(content=context.response, status_code=400)

        # If there's already a response set in the context, it means one of the handlers has processed the request.
        return JSONResponse(content=context.response, status_code=200)


class ExceptionHandler(Handler):
    async def handle(self, context: Context, exception: Exception):
        print(f"Error processing the request: {exception}")
        print(traceback.format_exc())
        return JSONResponse(
            content={"error": "An unexpected error occurred. " + str(exception)},
            status_code=500,
        )


================================================
FILE: app/libs/chains.py
================================================
from abc import ABC, abstractmethod
from typing import Any, Dict
from importlib import import_module
import json
import uuid
import traceback
from fastapi import Request
from fastapi.responses import JSONResponse
from providers import BaseProvider
from prompts import *
from providers import GroqProvider
import importlib
from utils import get_tool_call_response, create_logger, describe

missed_tool_logger = create_logger(
    "chain.missed_tools", ".logs/empty_tool_tool_response.log"
)


class Context:
    def __init__(self, request: Request, provider: str, body: Dict[str, Any]):
        self.request = request
        self.provider = provider
        self.body = body
        self.response = None

        # extract all keys from body except messages and tools and set in params
        self.params = {k: v for k, v in body.items() if k not in ["messages", "tools"]}

        # self.no_tool_behaviour = self.params.get("no_tool_behaviour", "return")
        self.no_tool_behaviour = self.params.get("no_tool_behaviour", "forward")
        self.params.pop("no_tool_behaviour", None)

        # Todo: For now, no stream, sorry ;)
        self.params["stream"] = False

        self.messages = body.get("messages", [])
        self.tools = body.get("tools", [])

        self.builtin_tools = [
            t for t in self.tools if "parameters" not in t["function"]
        ]
        self.builtin_tool_names = [t["function"]["name"] for t in self.builtin_tools]
        self.custom_tools = [t for t in self.tools if "parameters" in t["function"]]

        for bt in self.builtin_tools:
            func_namespace = bt["function"]["name"]
            if len(func_namespace.split(".")) == 2:
                module_name, func_class_name = func_namespace.split(".")
                func_class_name = f"{func_class_name.capitalize()}Function"
                # raise ValueError("Only one builtin function can be called at a time.")
                module = importlib.import_module(f"app.functions.{module_name}")
                func_class = getattr(module, func_class_name, None)
                schema_dict = func_class.get_schema()
                if schema_dict:
                    bt["function"] = schema_dict
                    bt["run"] = func_class.run
                    bt["extra"] = self.params.get("extra", {})
                    self.params.pop("extra", None)

        self.client: BaseProvider = None

    @property
    def last_message(self):
        return self.messages[-1] if self.messages else {}

    @property
    def is_tool_call(self):
        return bool(
            self.last_message["role"] == "user"
            and self.tools
            and self.params.get("tool_choice", "none") != "none"
        )

    @property
    def is_tool_response(self):
        return bool(self.last_message["role"] == "tool" and self.tools)

    @property
    def is_normal_chat(self):
        return bool(not self.is_tool_call and not self.is_tool_response)
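

# Example (hypothetical request body, illustration only): how Context splits the
# incoming "tools" list. A "function" entry WITHOUT a "parameters" key is treated
# as a builtin tool named "<module>.<function>" and resolved server-side via
# app.functions.<module>; an entry WITH a "parameters" JSON schema is a custom
# tool whose call is returned to the client. All names below are made up.
_EXAMPLE_BODY = {
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        # builtin: no "parameters", resolved on the server
        {"type": "function", "function": {"name": "duckduck.search"}},
        # custom: has "parameters", echoed back as a tool_call
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
            },
        },
    ],
    "tool_choice": "auto",
}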


class Handler(ABC):
    """Abstract Handler class for building the chain of handlers."""

    _next_handler: "Handler" = None

    def set_next(self, handler: "Handler") -> "Handler":
        self._next_handler = handler
        return handler

    @abstractmethod
    async def handle(self, context: Context):
        if self._next_handler:
            try:
                return await self._next_handler.handle(context)
            except Exception as e:
                _exception_handler: "Handler" = ExceptionHandler()
                # Extract the stack trace and log the exception
                return await _exception_handler.handle(context, e)
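
# Minimal sketch (illustration only; the real wiring lives in the route code, so
# the exact order here is an assumption) of assembling a chain from the handlers
# below. set_next() returns the handler it was given, so calls can be chained:
#
#   chain = ProviderSelectionHandler()
#   chain.set_next(ImageMessageHandler()) \
#        .set_next(ToolExtractionHandler()) \
#        .set_next(ToolResponseHandler()) \
#        .set_next(DefaultCompletionHandler()) \
#        .set_next(FallbackHandler())
#   response = await chain.handle(context)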


class ProviderSelectionHandler(Handler):
    @staticmethod
    def provider_exists(provider: str) -> bool:
        module_name = f"app.providers"
        class_name = f"{provider.capitalize()}Provider"
        try:
            provider_module = import_module(module_name)
            provider_class = getattr(provider_module, class_name)
            return bool(provider_class)
        except ImportError:
            return False

    async def handle(self, context: Context):
        # Construct the module path and class name based on the provider
        module_name = f"app.providers"
        class_name = f"{context.provider.capitalize()}Provider"

        try:
            # Dynamically import the module and class
            provider_module = import_module(module_name)
            provider_class = getattr(provider_module, class_name)

            if provider_class:
                context.client = provider_class(
                    api_key=context.api_token
                )  # Assuming an api_key parameter
                return await super().handle(context)
            else:
                raise ValueError(
                    f"Provider class {class_name} could not be found in {module_name}."
                )
        except ImportError as e:
            # Handle import error (e.g., module or class not found)
            print(f"Error importing {class_name} from {module_name}: {e}")
            context.response = {
                "error": f"An error occurred while trying to load the provider: {e}"
            }
            return JSONResponse(content=context.response, status_code=500)
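
# Naming convention (illustration): provider "groq" resolves to the class
# "GroqProvider" in app/providers.py, so a hypothetical check looks like:
#
#   ProviderSelectionHandler.provider_exists("groq")  # -> True, assuming GroqProvider is defined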


class ImageMessageHandler(Handler):
    async def handle(self, context: Context):
        new_messages = []
        image_ref = 1
        for message in context.messages:
            if message["role"] == "user":
                if isinstance(message["content"], list):
                    prompt = None
                    for content in message["content"]:
                        if content["type"] == "text":
                            # new_messages.append({"role": message["role"], "content": content["text"]})
                            prompt = content["text"]
                        elif content["type"] == "image_url":
                            image_url = content["image_url"]["url"]
                            try:
                                prompt = prompt or IMAGE_DESCRIPTO_PROMPT
                                description = describe(prompt, image_url)
                                if description:
                                    description = get_image_desc_guide(image_ref, description)
                                    new_messages.append(
                                        {"role": message["role"], "content": description}
                                    )
                                    image_ref += 1
                                else:
                                    pass
                            except Exception as e:
                                print(f"Error describing image: {e}")
                                continue
                else:
                    new_messages.append(message)
            else:
                new_messages.append(message)

        context.messages = new_messages
        return await super().handle(context)
    

class ImageLLavaMessageHandler(Handler):
    async def handle(self, context: Context):
        new_messages = []
        image_ref = 1
        for message in context.messages:
            new_messages.append(message)
            if message["role"] == "user":
                if isinstance(message["content"], list):
                    # Guard: an image part may precede any text part, so start
                    # with no prompt and fall back to the default prompt below.
                    prompt = None
                    for content in message["content"]:
                        if content["type"] == "text":
                            prompt = content["text"]
                        elif content["type"] == "image_url":
                            image_url = content["image_url"]["url"]
                            try:
                                description = describe(prompt or IMAGE_DESCRIPTO_PROMPT, image_url)
                                new_messages.append(
                                    {"role": "assistant", "content": description}
                                )
                                image_ref += 1
                            except Exception as e:
                                print(f"Error describing image: {e}")
                                continue
        context.messages = new_messages
        return await super().handle(context)


class ToolExtractionHandler(Handler):
    async def handle(self, context: Context):
        body = context.body
        if context.is_tool_call:

            # Prepare the messages and tools for the tool extraction
            messages = [
                f"{m['role'].title()}: {m['content']}"
                for m in context.messages
                if m["role"] != "system"
            ]
            tools_json = json.dumps([t["function"] for t in context.tools], indent=4)

            # Process the tool_choice
            tool_choice = context.params.get("tool_choice", "auto")
            forced_mode = False
            if (
                type(tool_choice) == dict
                and tool_choice.get("type", None) == "function"
            ):
                tool_choice = tool_choice["function"].get("name", None)
                if not tool_choice:
                    raise ValueError(
                        "Invalid tool choice. 'tool_choice' is set to a dictionary with 'type' as 'function', but 'function' does not have a 'name' key."
                    )
                forced_mode = True

                # Regenerate the string tool_json and keep only the forced tool
                tools_json = json.dumps(
                    [
                        t["function"]
                        for t in context.tools
                        if t["function"]["name"] == tool_choice
                    ],
                    indent=4,
                )

            system_message = (
                SYSTEM_MESSAGE if not forced_mode else ENFORCED_SYSTAME_MESSAE
            )
            suffix = SUFFIX if not forced_mode else get_forced_tool_suffix(tool_choice)

            # Join with newlines so consecutive messages don't run together
            messages_flatten = "\n".join(messages)
            new_messages = [
                {"role": "system", "content": system_message},
                {
                    "role": "system",
                    "content": f"Conversation History:\n{messages_flatten}\n\nTools: \n{tools_json}\n\n{suffix}",
                },
            ]

            completion, tool_calls = await self.process_tool_calls(
                context, new_messages
            )

            if not tool_calls:
                if context.no_tool_behaviour == "forward":
                    context.tools = None
                    return await super().handle(context)
                else:
                    context.response = {"tool_calls": []}
                    tool_response = get_tool_call_response(completion, [], [])
                    missed_tool_logger.debug(
                        f"Last message content: {context.last_message['content']}"
                    )
                    return JSONResponse(content=tool_response, status_code=200)

            # Tool calls the server cannot resolve locally (not builtin tools)
            unresolved_tool_calls = [
                t
                for t in tool_calls
                if t["function"]["name"] not in context.builtin_tool_names
            ]
            resolved_responses = []
            for tool in tool_calls:
                for bt in context.builtin_tools:
                    if tool["function"]["name"] == bt["function"]["name"]:
                        res = bt["run"](
                            **{
                                **json.loads(tool["function"]["arguments"]),
                                **bt["extra"],
                            }
                        )
                        resolved_responses.append(
                            {
                                "name": tool["function"]["name"],
                                "role": "tool",
                                "content": json.dumps(res),
                                "tool_call_id": "chatcmpl-" + completion.id,
                            }
                        )

            # Check after the loop so every builtin tool gets a chance to run
            if not unresolved_tool_calls:
                context.messages.extend(resolved_responses)
                return await super().handle(context)

            tool_response = get_tool_call_response(
                completion, unresolved_tool_calls, resolved_responses
            )

            context.response = tool_response
            return JSONResponse(content=context.response, status_code=200)

        return await super().handle(context)

    async def process_tool_calls(self, context, new_messages):
        try:
            tries = 5
            tool_calls = []
            while tries > 0:
                try:
                    # Assuming the context has an instantiated client according to the selected provider
                    completion = context.client.route(
                        model=context.client.parser_model,
                        messages=new_messages,
                        temperature=0,
                        max_tokens=1024,
                        top_p=1,
                        stream=False,
                    )

                    response = completion.choices[0].message.content
                    if "```json" in response:
                        response = response.split("```json")[1].split("```")[0]

                    try:
                        tool_response = json.loads(response)
                        if isinstance(tool_response, list):
                            tool_response = {"tool_calls": tool_response}
                    except json.JSONDecodeError as e:
                        print(
                            f"Error parsing the tool response: {e}, tries left: {tries}"
                        )
                        new_messages.append(
                            {
                                "role": "user",
                                "content": f"Error: {e}.\n\n{CLEAN_UP_MESSAGE}",
                            }
                        )
                        tries -= 1
                        continue

                    for func in tool_response.get("tool_calls", []):
                        tool_calls.append(
                            {
                                "id": f"call_{func['name']}_{str(uuid.uuid4())}",
                                "type": "function",
                                "function": {
                                    "name": func["name"],
                                    "arguments": json.dumps(func["arguments"]),
                                },
                            }
                        )

                    break
                except Exception as e:
                    raise e

            if tries == 0:
                tool_calls = []

            return completion, tool_calls
        except Exception as e:
            print(f"Error processing the tool calls: {e}")
            raise e
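

# Illustration only (not called by the app): the retry loop above pulls the first
# ```json fenced block out of the parser model's reply, parses it, and normalizes
# a bare list into {"tool_calls": [...]}. The same steps as a standalone helper:
def _sketch_parse_tool_response(text: str) -> dict:
    if "```json" in text:
        text = text.split("```json")[1].split("```")[0]
    parsed = json.loads(text)  # a JSONDecodeError here is what triggers a retry above
    return {"tool_calls": parsed} if isinstance(parsed, list) else parsed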


class ToolResponseHandler(Handler):
    async def handle(self, context: Context):
        body = context.body
        if context.is_tool_response:
            messages = context.messages

            for message in messages:
                if message["role"] == "tool":
                    message["role"] = "user"
                    message["content"] = get_func_result_guide(message["content"])

            messages[-1]["role"] = "user"
            # Assuming get_func_result_guide is a function that formats the tool response
            messages[-1]["content"] = get_func_result_guide(messages[-1]["content"])

            try:
                completion = context.client.route(
                    messages=messages,
                    **context.client.clean_params(context.params),
                )
                context.response = completion.model_dump()
                return JSONResponse(content=context.response, status_code=200)
            except Exception as e:
                # Log the exception or handle it as needed
                print(e)
                context.response = {
                    "error": "An error occurred processing the tool response"
                }
                return JSONResponse(content=context.response, status_code=500)

        return await super().handle(context)


class DefaultCompletionHandler(Handler):
    async def handle(self, context: Context):
        if context.is_normal_chat:
            # Assuming context.client is set and has a method for creating chat completions
            completion = context.client.route(
                messages=context.messages,
                **context.client.clean_params(context.params),
            )
            context.response = completion.model_dump()
            return JSONResponse(content=context.response, status_code=200)

        return await super().handle(context)


class FallbackHandler(Handler):
    async def handle(self, context: Context):
        # This handler does not pass the request further down the chain.
        # It acts as a fallback when no other handler has processed the request.
        if not context.response:
            # The default action when no other handlers have processed the request
            context.response = {"message": "No suitable action found for the request."}
            return JSONResponse(content=context.response, status_code=400)

        # If there's already a response set in the context, it means one of the handlers has processed the request.
        return JSONResponse(content=context.response, status_code=200)


class ExceptionHandler(Handler):
    async def handle(self, context: Context, exception: Exception):
        print(f"Error processing the request: {exception}")
        print(traceback.format_exc())
        return JSONResponse(
            content={"error": "An unexpected error occurred. " + str(exception)},
            status_code=500,
        )


================================================
FILE: app/libs/context.py
================================================
from typing import Any, Dict
from fastapi import Request
from providers import BaseProvider
from prompts import *
import importlib
from utils import create_logger


class Context:
    def __init__(self, request: Request, provider: str, body: Dict[str, Any]):
        self.request = request
        self.provider = provider
        self.body = body
        self.response = None

        # extract all keys from body except messages and tools and set in params
        self.params = {k: v for k, v in body.items() if k not in ["messages", "tools"]}

        # self.no_tool_behaviour = self.params.get("no_tool_behaviour", "return")
        self.no_tool_behaviour = self.params.get("no_tool_behaviour", "forward")
        self.params.pop("no_tool_behaviour", None)

        # Todo: For now, no stream, sorry ;)
        self.params["stream"] = False

        self.messages = body.get("messages", [])
        self.tools = body.get("tools", [])

        self.builtin_tools = [
            t for t in self.tools if "parameters" not in t["function"]
        ]
        self.builtin_tool_names = [t["function"]["name"] for t in self.builtin_tools]
        self.custom_tools = [t for t in self.tools if "parameters" in t["function"]]

        for bt in self.builtin_tools:
            func_namespace = bt["function"]["name"]
            if len(func_namespace.split(".")) == 2:
                module_name, func_class_name = func_namespace.split(".")
                func_class_name = f"{func_class_name.capitalize()}Function"
                # raise ValueError("Only one builtin function can be called at a time.")
                module = importlib.import_module(f"app.functions.{module_name}")
                func_class = getattr(module, func_class_name, None)
                schema_dict = func_class.get_schema()
                if schema_dict:
                    bt["function"] = schema_dict
                    bt["run"] = func_class.run
                    bt["extra"] = self.params.get("extra", {})
                    self.params.pop("extra", None)

        self.client: BaseProvider = None

    @property
    def last_message(self):
        return self.messages[-1] if self.messages else {}

    @property
    def is_tool_call(self):
        return bool(
            self.last_message["role"] == "user"
            and self.tools
            and self.params.get("tool_choice", None) != "none"
        )

    @property
    def is_tool_response(self):
        return bool(self.last_message["role"] == "tool" and self.tools)

    @property
    def is_normal_chat(self):
        return bool(not self.is_tool_call and not self.is_tool_response)
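

# Routing summary (illustration; the handler classes live in this package):
#   is_tool_call     -> ToolExtractionHandler    (last message from "user", tools present, tool_choice != "none")
#   is_tool_response -> ToolResponseHandler      (last message role "tool", tools present)
#   is_normal_chat   -> DefaultCompletionHandler (neither of the above)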


================================================
FILE: app/libs/provider_handler.py
================================================
from importlib import import_module
from fastapi.responses import JSONResponse
from prompts import *
from .base_handler import Handler
from .context import Context

class ProviderSelectionHandler(Handler):
    @staticmethod
    def provider_exists(provider: str) -> bool:
        module_name = f"app.providers"
        class_name = f"{provider.capitalize()}Provider"
        try:
            provider_module = import_module(module_name)
            provider_class = getattr(provider_module, class_name)
            return bool(provider_class)
        except ImportError:
            return False

    async def handle(self, context: Context):
        # Construct the module path and class name based on the provider
        module_name = f"app.providers"
        class_name = f"{context.provider.capitalize()}Provider"

        try:
            # Dynamically import the module and class
            provider_module = import_module(module_name)
            provider_class = getattr(provider_module, class_name)

            if provider_class:
                context.client = provider_class(
                    api_key=context.api_token
                )  # Assuming an api_key parameter
                return await super().handle(context)
            else:
                raise ValueError(
                    f"Provider class {class_name} could not be found in {module_name}."
                )
        except ImportError as e:
            # Handle import error (e.g., module or class not found)
            print(f"Error importing {class_name} from {module_name}: {e}")
            context.response = {
                "error": f"An error occurred while trying to load the provider: {e}"
            }
            return JSONResponse(content=context.response, status_code=500)



================================================
FILE: app/libs/tools_handler.py
================================================
import concurrent.futures
import uuid
import json
import math
from fastapi.responses import JSONResponse
from prompts import *
from .base_handler import Handler, Context
from .context import Context
from utils import get_tool_call_response, create_logger, describe
from config import PARSE_ERROR_TRIES, EVALUATION_CYCLES_COUNT
from collections import defaultdict

missed_tool_logger = create_logger(
    "chain.missed_tools", ".logs/empty_tool_tool_response.log"
)


class ImageLLavaMessageHandler(Handler):
    async def handle(self, context: Context):
        new_messages = []
        image_ref = 1
        for message in context.messages:
            new_messages.append(message)
            if message["role"] == "user":
                if isinstance(message["content"], list):
                    # Guard: an image part may precede any text part, so start
                    # with no prompt and fall back to the default prompt below.
                    prompt = None
                    for content in message["content"]:
                        if content["type"] == "text":
                            prompt = content["text"]
                        elif content["type"] == "image_url":
                            image_url = content["image_url"]["url"]
                            try:
                                description = describe(prompt or IMAGE_DESCRIPTO_PROMPT, image_url)
                                new_messages.append(
                                    {"role": "assistant", "content": description}
                                )
                                image_ref += 1
                            except Exception as e:
                                print(f"Error describing image: {e}")
                                continue
        context.messages = new_messages
        return await super().handle(context)


class ToolExtractionHandler(Handler):
    async def handle(self, context: Context):
        if not context.is_tool_call:
            return await super().handle(context)

        # Step 1: Prepare the conversation history
        messages = self._prepare_conversation_history(context.messages)

        # Step 2: Prepare tool details and detect the mode of operation
        available_tools, system_message, suffix = self._prepare_tool_details(context)

        # Step 3: Prepare the messages for the model
        new_messages = self._prepare_model_messages(messages, available_tools, suffix, context.messages[-1]['content'], system_message)

        # Step 4: Detect the tool calls
        tool_calls_result = await self.process_tool_calls(context, new_messages)
        tool_calls = tool_calls_result["tool_calls"]

        # Step 5: Handle the situation where no tool calls are detected
        if not tool_calls:
            return await self._handle_no_tool_calls(context, tool_calls_result)

        # Step 6: Process built-in tools and resolve the tool calls
        unresolved_tool_calls, resolved_responses = self._process_builtin_tools(context, tool_calls, tool_calls_result["last_completion"].id)

        if not unresolved_tool_calls:
            context.messages.extend(resolved_responses)
            return await super().handle(context)

        # Step 7: Return the unresolved tool calls to the client
        tool_response = get_tool_call_response(tool_calls_result, unresolved_tool_calls, resolved_responses)
        context.response = tool_response
        return JSONResponse(content=context.response, status_code=200)

    def _prepare_conversation_history(self, messages):
        return [
            f"<{m['role'].lower()}>\n{m['content']}\n</{m['role'].lower()}>"
            for m in messages
            if m["role"] != "system"
        ]
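
    # Example (hypothetical input, illustration only):
    #   [{"role": "system", "content": "..."}, {"role": "user", "content": "hi"}]
    #   -> ["<user>\nhi\n</user>"]   (system messages are filtered out)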

    def _prepare_tool_details(self, context):
        tool_choice = context.params.get("tool_choice", "auto")
        forced_mode = type(tool_choice) == dict and tool_choice.get("type", None) == "function"
        available_tools = []

        if forced_mode:
            tool_choice = tool_choice["function"]["name"]
            available_tools = [t["function"] for t in context.tools if t["function"]["name"] == tool_choice]
            system_message = ENFORCED_SYSTAME_MESSAE
            suffix = get_forced_tool_suffix(tool_choice)
        else:
            tool_choice = "auto"
            available_tools = [t["function"] for t in context.tools]
            system_message = SYSTEM_MESSAGE
            suffix = get_suffix()

        # Idea (disabled): add one special tool called "fallback", always available, to be used when none of the other tools fit the user input.
        # available_tools.append({
        #     "name": "fallback", 
        #     "description": "Use this tool when none of the other tools are useful for the user input.",
        #     "arguments": {}}
        # )

        return available_tools, system_message, suffix
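
    # Example (hypothetical, illustration only): a client forcing a single tool sends
    #   tool_choice = {"type": "function", "function": {"name": "get_weather"}}
    # which switches this method into forced mode: only get_weather is offered,
    # paired with ENFORCED_SYSTAME_MESSAE and get_forced_tool_suffix("get_weather").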

    def _prepare_model_messages(self, messages, available_tools, suffix, last_message_content, system_message):
        messages_flatten = "\n".join(messages)
        tools_json = json.dumps(available_tools, indent=4)

        return [
            {"role": "system", "content": system_message},
            {
                "role": "user",
                "content": f"# Conversation History:\n{messages_flatten}\n\n# Available Tools: \n{tools_json}\n\n{suffix}\n{last_message_content}",
            },
        ]

    async def _handle_no_tool_calls(self, context, tool_calls_result):
        if context.no_tool_behaviour == "forward":
            context.tools = None
            return await super().handle(context)
        else:
            context.response = {"tool_calls": []}
            tool_response = get_tool_call_response(tool_calls_result, [], [])
            missed_tool_logger.debug(f"Last message content: {context.last_message['content']}")
            return JSONResponse(content=tool_response, status_code=200)

    def _process_builtin_tools(self, context, tool_calls, tool_calls_result_id):
        unresolved_tool_calls = [
            t
            for t in tool_calls
            if t["function"]["name"] not in context.builtin_tool_names
        ]
        resolved_responses = []

        for tool in tool_calls:
            for bt in context.builtin_tools:
                if tool["function"]["name"] == bt["function"]["name"]:
                    res = bt["run"](**{**json.loads(tool["function"]["arguments"]), **bt["extra"]})
                    resolved_responses.append({
                        "name": tool["function"]["name"],
                        "role": "tool",
                        "content": json.dumps(res),
                        "tool_call_id": "chatcmpl-" + tool_calls_result_id,
                    })

        return unresolved_tool_calls, resolved_responses

    # NOTE: handle1 appears to be an earlier, unused variant of handle(), kept for reference.
    async def handle1(self, context: Context):
        body = context.body
        if context.is_tool_call:
            # Step 1: Prepare the history of the conversation.
            messages = [
                f"<{m['role'].lower()}>\n{m['content']}\n</{m['role'].lower()}>"
                for m in context.messages
                if m["role"] != "system"
            ]
            messages_flatten = "\n".join(messages)
            

            # Step 2: Prepare tools details and detect the mode of operation.
            tool_choice = context.params.get("tool_choice", "auto")
            forced_mode = type(tool_choice) == dict and tool_choice.get("type", None) == "function"

            if forced_mode:
                tool_choice = tool_choice["function"]["name"]
                tools_json = json.dumps([t["function"] for t in context.tools if t["function"]["name"] == tool_choice], indent=4)
                system_message = ENFORCED_SYSTAME_MESSAE
                suffix = get_forced_tool_suffix(tool_choice)
            else:
                tool_choice = "auto"
                tools_json = json.dumps([t["function"] for t in context.tools], indent=4)
                system_message = SYSTEM_MESSAGE
                suffix = SUFFIX

            # Step 3: Prepare the messages for the model.
            new_messages = [
                {"role": "system", "content": system_message},
                {
                    "role": "user",
                    "content": f"# Conversation History:\n{messages_flatten}\n\n# Available Tools: \n{tools_json}\n\n{suffix}\n{context.messages[-1]['content']}",
                },
            ]

            # Step 4: Detect the tool calls.
            tool_calls_result = await self.process_tool_calls(context, new_messages)
            tool_calls = tool_calls_result["tool_calls"]

            
            # Step 5: Handle the situation where no tool calls are detected.
            if not tool_calls:
                if context.no_tool_behaviour == "forward":
                    context.tools = None
                    return await super().handle(context)
                else:
                    context.response = {"tool_calls": []}
                    tool_response = get_tool_call_response(tool_calls_result, [], [])
                    missed_tool_logger.debug(
                        f"Last message content: {context.last_message['content']}"
                    )
                    return JSONResponse(content=tool_response, status_code=200)

            
            # Step 6: Process built-in tools and resolve their calls here on the server. Unresolved tool calls are returned to the client to resolve; if all calls are resolved, continue to the next handler.
            unresolved_tol_calls = [
                t
                for t in tool_calls
                if t["function"]["name"] not in context.builtin_tool_names
            ]
            resolved_responses = []
            for tool in tool_calls:
                for bt in context.builtin_tools:
                    if tool["function"]["name"] == bt["function"]["name"]:
                        res = bt["run"](
                            **{
                                **json.loads(tool["function"]["arguments"]),
                                **bt["extra"],
                            }
                        )
                        resolved_responses.append(
                            {
                                "name": tool["function"]["name"],
                                "role": "tool",
                                "content": json.dumps(res),
                                "tool_call_id": "chatcmpl-" + tool_calls_result.id,
                            }
                        )

            # Check after the loop so every builtin tool gets a chance to run
            if not unresolved_tol_calls:
                context.messages.extend(resolved_responses)
                return await super().handle(context)

            # Step 7: If we reach here, there are unresolved tool calls; return them to the client to resolve.
            tool_response = get_tool_call_response(
                tool_calls_result, unresolved_tol_calls, resolved_responses
            )

            context.response = tool_response
            return JSONResponse(content=context.response, status_code=200)

        return await super().handle(context)

    async def process_tool_calls(self, context, new_messages):
        try:
            evaluation_cycles_count = EVALUATION_CYCLES_COUNT

            def call_route(messages):
                completion = context.client.route(
                    model=context.client.parser_model,
                    messages=messages,
                    temperature=0,
                    max_tokens=512,
                    top_p=1,
                    stream=False,
                )

                response = completion.choices[0].message.content
                response = response.replace("\_", "_")
                if TOOLS_OPEN_TOKEN in response:
                    response = response.split(TOOLS_OPEN_TOKEN)[1].split(
                        TOOLS_CLOSE_TOKEN
                    )[0]
                if "```json" in response:
                    response = response.split("```json")[1].split("```")[0]

                try:
                    tool_response = json.loads(response)
                    if isinstance(tool_response, list):
                        tool_response = {"tool_calls": tool_response}
                    # Check all detected functions exist in the available tools
                    valid_names = [t['function']["name"] for t in context.tools]
                    available_tools = [t for t in tool_response.get("tool_calls", []) if t['name'] in valid_names]
                    tool_response = {
                        "tool_calls": available_tools,
                    }
                    # tool_response = {"tool_calls": []}
                        
                    
                    return tool_response.get("tool_calls", []), completion
                except json.JSONDecodeError as e:
                    print(f"Error parsing the tool response: {e}")
                    return [], None

            with concurrent.futures.ThreadPoolExecutor() as executor:
                futures = [
                    executor.submit(call_route, new_messages)
                    for _ in range(evaluation_cycles_count)
                ]
                results = [
                    future.result()
                    for future in concurrent.futures.as_completed(futures)
                ]

            tool_calls_list, completions = zip(*results)

            tool_calls_count = defaultdict(int)
            for tool_calls in tool_calls_list:
                for func in tool_calls:
                    tool_calls_count[func["name"]] += 1

            pickup_threshold = math.floor(evaluation_cycles_count * 0.7)
            final_tool_calls = []
            for tool_calls in tool_calls_list:
                for func in tool_calls:
                    if tool_calls_count[func["name"]] >= pickup_threshold:
                        # Append only if the function is not already in the list
                        if not any(
                            f['function']["name"] == func["name"] for f in final_tool_calls
                        ):
                            final_tool_calls.append(
                                {
                                    "id": f"call_{func['name']}_{str(uuid.uuid4())}",
                                    "type": "function",
                                    "function": {
                                        "name": func["name"],
                                        "arguments": json.dumps(func["arguments"]),
                                    },
                                }
                            )

            total_prompt_tokens = sum(c.usage.prompt_tokens for c in completions if c)
            total_completion_tokens = sum(
                c.usage.completion_tokens for c in completions if c
            )
            total_tokens = sum(c.usage.total_tokens for c in completions if c)

            last_completion = completions[-1] if completions else None

            return {
                "tool_calls": final_tool_calls,
                "last_completion": last_completion,
                "usage": {
                    "prompt_tokens": total_prompt_tokens,
                    "completion_tokens": total_completion_tokens,
                    "total_tokens": total_tokens,
                },
            }

        except Exception as e:
            print(f"Error processing the tool calls: {e}")
            raise e
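

# Illustration only (not used by the app): the consensus step above samples the
# parser model EVALUATION_CYCLES_COUNT times and keeps a function only when it
# appears in at least floor(0.7 * cycles) of the samples. With hypothetical
# sample results:
def _sketch_majority_vote():
    samples = [["get_weather"], ["get_weather"], ["get_weather", "send_email"]]
    counts = defaultdict(int)
    for sample in samples:
        for name in sample:
            counts[name] += 1
    threshold = math.floor(len(samples) * 0.7)  # 2 when sampling 3 times
    return [name for name, n in counts.items() if n >= threshold]  # -> ["get_weather"]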


class ToolResponseHandler(Handler):
    async def handle(self, context: Context):
        body = context.body
        if context.is_tool_response:
            messages = context.messages

            for message in messages:
                if message["role"] == "tool":
                    message["role"] = "user"
                    message["content"] = get_func_result_guide(message["content"])

            try:
                params = {
                    "temperature": 0.5,
                    "max_tokens": 1024,
                }
                params = {**params, **context.params}

                completion = context.client.route(
                    messages=messages,
                    **context.client.clean_params(params),
                )
                context.response = completion.model_dump()
                return JSONResponse(content=context.response, status_code=200)
            except Exception as e:
                raise e

        return await super().handle(context)


================================================
FILE: app/libs/vision_handler.py
================================================
from prompts import *
from utils import describe
from .context import Context
from .base_handler import Handler


class ImageMessageHandler(Handler):
    async def handle(self, context: Context):
        new_messages = []
        image_ref = 1
        for message in context.messages:
            if message["role"] == "user":
                if isinstance(message["content"], list):
                    prompt = None
                    for content in message["content"]:
                        if content["type"] == "text":
                            # new_messages.append({"role": message["role"], "content": content["text"]})
                            prompt = content["text"]
                        elif content["type"] == "image_url":
                            image_url = content["image_url"]["url"]
                            try:
                                prompt = prompt or IMAGE_DESCRIPTO_PROMPT
                                description = describe(prompt, image_url)
                                if description:
                                    description = get_image_desc_guide(image_ref, description)
                                    new_messages.append(
                                        {"role": message["role"], "content": description}
                                    )
                                    image_ref += 1
                                else:
                                    pass
                            except Exception as e:
                                print(f"Error describing image: {e}")
                                continue
                else:
                    new_messages.append(message)
            else:
                new_messages.append(message)

        context.messages = new_messages
        return await super().handle(context)
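
# Example (hypothetical OpenAI-style multimodal message) that these handlers
# flatten into plain-text description messages via describe():
#   {"role": "user", "content": [
#       {"type": "text", "text": "What is in this photo?"},
#       {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
#   ]}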
    

class ImageLLavaMessageHandler(Handler):
    async def handle(self, context: Context):
        new_messages = []
        image_ref = 1
        for message in context.messages:
            new_messages.append(message)
            if message["role"] == "user":
                if isinstance(message["content"], list):
                    # Guard: an image part may precede any text part, so start
                    # with no prompt and fall back to the default prompt below.
                    prompt = None
                    for content in message["content"]:
                        if content["type"] == "text":
                            prompt = content["text"]
                        elif content["type"] == "image_url":
                            image_url = content["image_url"]["url"]
                            try:
                                description = describe(prompt or IMAGE_DESCRIPTO_PROMPT, image_url)
                                new_messages.append(
                                    {"role": "assistant", "content": description}
                                )
                                image_ref += 1
                            except Exception as e:
                                print(f"Error describing image: {e}")
                                continue
        context.messages = new_messages
        return await super().handle(context)



================================================
FILE: app/main.py
================================================
from fastapi import FastAPI
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates
from fastapi.staticfiles import StaticFiles
from starlette.middleware.cors import CORSMiddleware
from starlette.requests import Request
from routes import proxy
from routes import examples
from utils import create_logger
import os
from dotenv import load_dotenv

load_dotenv()

app = FastAPI()

logger = create_logger("app", ".logs/access.log")
app.mount("/static", StaticFiles(directory="frontend/assets"), name="static")
templates = Jinja2Templates(directory="frontend/pages")


origins = [
    "*",
]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)


@app.middleware("http")
async def log_requests(request: Request, call_next):
    if "/proxy" in request.url.path:
        client_ip = request.client.host
        logger.info(
            f"Incoming request from {client_ip}: {request.method} {request.url}"
        )
        response = await call_next(request)
        # logger.info(f"Response status code: {response.status_code}")
        return response
    else:
        return await call_next(request)


app.include_router(proxy.router, prefix="/proxy")
app.include_router(examples.router, prefix="/examples")


@app.get("/", response_class=HTMLResponse)
async def index(request: Request):
    return templates.TemplateResponse("index.html", {"request": request})


# A GET endpoint that simply returns the app version
@app.get("/version")
async def version():
    return {"version": "0.0.5"}


if __name__ == "__main__":
    import uvicorn

    # uvicorn.run("main:app", host=os.getenv("HOST"), port=int(os.getenv('PORT')), workers=1, reload=True)
    uvicorn.run(
        "main:app", host=os.getenv("HOST"), port=int(os.getenv("PORT")), workers=1, reload=False
    )


================================================
FILE: app/models.py
================================================
# To be developed

================================================
FILE: app/prompts.py
================================================
SYSTEM_MESSAGE_v0 = """You are a functiona-call proxy for an advanced LLM. Your jobe is to identify the required tools for answering the user queries, if any. You will received the result of those tools, and then based ont hem you generate final response for user. Some of these tools are like "send_email", "run python code" and etc. Remember you are not in charge to execute these tools as you ar  an AI model, you just detect them, then the middle system, executes, and returns you with response, then you use it to generate final response.

A history of conversations between an AI assistant and the user, plus the last user's message, is given to you.

In addition, you have access to a list of available tools. Each tool is a function that requires a set of parameters and, in response, returns information that the AI assistant needs to provide a proper answer.

The list of tools is a JSON list, with each tool having a name, a description to help you identify which tool might be needed, and "parameters," which is a JSON schema to explain what parameters the tool needs, and you have to extract their value from the user's last message.

Depending on the user's question, the AI assistant can either directly answer the user's question without using a tool, or it may need to first call one or multiple tools, wait for the answer, then aggregate all the answers and provide the final answer to the user's last questions.

Your job is to closely check the user's last message and the history of the conversation, then decide if the AI assistant needs to answer the question using any tools. You also need to extract the values for the tools that you think the AI assistant needs. Remember you can select multiple tools if needed.

Notes:
- If you can synthesize the answer without using any tools, then return an empty list for "tool_calls".
- You need tools only if there is a clear connection between the user's last message and a tool's description.
- If you can't derive a parameter's value directly from the user's message, return null for it and NEVER TRY TO GUESS THE VALUE.
- You do NOT need to remind the user that you are an AI model and cannot execute any of the tools. NEVER mention this; everyone is already aware of it.

MESSAGE SUFFIX: 
- "SYSTEM MESSGAE": Whenever a message starts with 'SYSTEM MESSAGE', that is a guide and help information for you to generate your next response. Do not consider them a message from the user, and do not reply to them at all. Just use the information and continue your conversation with the user.
- "IMAGE [ref_index]": Whenever a message starts with 'IMAGE', that is a description of an images uploaded bu user. This information you can use it to generate your next responses, in case user refers to the image. Do not consider them a message from the user, and do not reply to them at all. Just use the information and continue your conversation with the user. The [ref_index] is the index of the image in the list of images uploaded by the user. """

SYSTEM_MESSAGE = """You are a function-call proxy for an advanced LLM. Your job is to identify the required tools for answering the user queries, if any. You will received the result of those tools, and then based on them you select the tool(s) must be called to prepare response for user. You alwayse return JSON. generate final JSON response for user. 

A history of conversations between an AI assistant and the user, plus the last user's message, is given to you.

In addition, you have access to a list of available tools. Each tool is a function that requires a set of parameters and, in response, returns information that the AI assistant needs to provide a proper answer.

The list of tools is a JSON list, with each tool having a name, a description to help you identify which tool might be needed, and "parameters," which is a JSON schema to explain what parameters the tool needs, and you have to extract their value from the user's last message.

Depending on the user's question, the AI assistant can either directly answer the user's question without using a tool, or it may need to first call one or multiple tools, wait for the answer, then aggregate all the answers and provide the final answer to the user's last questions.

Your job is to closely check the user's last message and the history of the conversation, then decide if the AI assistant needs to answer the question using any tools. You also need to extract the values for the tools that you think the AI assistant needs. Remember you can select multiple tools if needed.

Notes:
- If you can synthesize the answer without using any tools, then return an empty list for "tool_calls".
- You need tools only if there is a clear connection between the user's last message and a tool's description.
- If you can't derive a parameter's value directly from the user's message, return null for it and NEVER TRY TO GUESS THE VALUE.
- You do NOT need to remind the user that you are an AI model and cannot execute any of the tools. NEVER mention this; everyone is already aware of it.

MESSAGE SUFFIX: 
- "FUNCTION RESPONSE": This is the response of a function call, which you requested and system provide it back to you. Do not consider them as user message and do not reply to them at all. Just use the information and continue your conversation with the user.
- "IMAGE [ref_index]": Whenever a message starts with 'IMAGE', that is a description of an images uploaded bu user. This information you can use it to generate your next responses, in case user refers to the image. Do not consider them a message from the user, and do not reply to them at all. Just use the information and continue your conversation with the user. The [ref_index] is the index of the image in the list of images uploaded by the user. """


ENFORCED_SYSTAME_MESSAE = """A history of conversations between an AI assistant and the user, plus the last user's message, is given to you.

You have access to a specific tool that the AI assistant must use to provide a proper answer. The tool is a function that requires a set of parameters, which are provided in a JSON schema to explain what parameters the tool needs. Your task is to extract the values for these parameters from the user's last message and the conversation history.

Your job is to closely examine the user's last message and the history of the conversation, then extract the necessary parameter values for the given tool based on the provided JSON schema. Remember that you must use the specified tool to generate the response.

You should think step by step, provide your reasoning for your response, then add the JSON response at the end, following the schema below:


{
    "tool_calls": [{
        "name": "function_name",
        "arguments": {
            "arg1": "value1",
            "arg2": "value2",
            ...
        }
    }]
}


**Wrap the JSON response between ```json and ```, and remember "tool_calls" is a list.** """


CLEAN_UP_MESSAGE = "When I tried to extract the content between ```json and ``` and parse the content to valid JSON object, I faced with the abovr error. Remember, you are supposed to wrap the schema between ```json and ```, and do this only one time. First find out what went wrong, that I couldn't extract the JSON between ```json and ```, and also faced error when trying to parse it, then regenerate the your last message and fix the issue."

SUFFIX = """# Example of your response:
<justification>
Here you explain why you think a tool or multiple tools are needed and how you extracted the values for the parameters from the user's last message and the conversation history. Also, you may explain why there is no need for any tools.
</justification>

<selected_tools> 
{
    "tool_calls" : [
        { 
            "name": "function_name_1",
            "arguments": {
                "arg1" : "value1", "arg2": "value2", ...
            }
        },
        { 
            "name": "function_name_2",
            "arguments": {
                "arg1" : "value1", "arg2": "value2", ...
            }
        }, ...
    ]
}
</selected_tools>

**If there is no need for any tools, then return an empty list for "tool_calls", like "{ "tool_calls": [] }".**

# Task:
Think step by step and justify your response in only two sentences. 

Remember: 
- You may select multiple tools if needed.
- **IF for some arguments there is no direct and clear value based on the history of the conversation and the user's last message, then assign null to them. NEVER TRY TO GUESS THE VALUE.**
- **ONLY USE THE TOOLS MENTIONED ABOVE AND NOTHING ELSE. Do not suggest a function that is not in the list of tools.**
- BE CONCISE AND TO THE POINT. DO NOT ADD ANY UNNECESSARY INFORMATION. MAKE YOUR JUSTIFICATION SHORT.
- **Don't forget to refer to the history of the conversation when you are trying to figure out argument values for the tool(s) you picked.**
- **Sometimes you may have to refer to previous messages to extract the proper values for arguments, because the user may refer to them in his/her last message.**
- **Use the user's last message to detect the tool(s) you need to call, and use the history of the conversation to extract the argument values for the tool(s) you picked.**

IMPORTANT: Not every situiation need a tools, so don't force it, if the question doesn't match with any of tools simply retirn an empty list for "tool_calls" and justify your response.


# FEW SHOTS: Here we provide some example for you to learn how to generate your response.
<few_shots>
FEW_SHOTS
</few_shots>

Make your decision based on the user's last message:"""

ENFORCE_SUFFIX = """# Example of your response:
<selected_tool_arguments_data> 
{
    "tool_calls" : [{
        "name": "function_name",
        "arguments": {
            "arg1": "value1",
            "arg2": "value2",
            ...
        }
    }]
}
</selected_tool_arguments_data>

NOTE:
- **IF some arguments have no direct and clear value based on the history of the conversation and the user's last message, then assign null to them. NEVER TRY TO GUESS A VALUE.**
- **ONLY USE THE TOOLS MENTIONED ABOVE AND NOTHING OUTSIDE OF THAT. Do not suggest a function that is not in the list of tools.**
- BE CONCISE AND TO THE POINT. DO NOT ADD ANY UNNECESSARY INFORMATION. MAKE YOUR JUSTIFICATION SHORT.
- **Don't forget to refer to the history of the conversation when you are trying to figure out the argument values for the given tool (function).**
- **Sometimes the user may refer to previous messages, so you can find the argument values there.**

Now extract the required data for this tool's arguments, if any. Make your decision based on the user's last message:"""


TOOLS_OPEN_TOKEN = "<selected_tools>"
TOOLS_CLOSE_TOKEN = "</selected_tools>"


FORCE_CALL_SUFFIX = """For this task, you HAVE to choose the tool (function) {tool_name} and ignore other tools. Therefore, think step by step and justify your response, then closely examine the user's last message and the history of the conversation, and extract the necessary parameter values for the given tool based on the provided JSON schema. Remember that you must use the specified tool to generate the response. Finally, generate a JSON response wrapped between "<selected_tools>" and "</selected_tools>". Remember to USE THIS JSON WRAPPER ONLY ONE TIME."""


IMAGE_DESCRIPTO_PROMPT = """The user has uploaded an image. Describe in detail what the image is about. List all the objects you see and their descriptions. Your description should be enough for a blind person to visualize the image and answer any question about it."""


def get_forced_tool_suffix(tool_name: str) -> str:
    return FORCE_CALL_SUFFIX.format(tool_name=tool_name)


def get_func_result_guide(function_call_result: str) -> str:
    return f"SYSTEM MESSAGE: \n```json\n{function_call_result}\n```\n\nThe above is the result after functions are called. Use the result to answer the user's last question.\n\n"


def get_image_desc_guide(ref_index: int, description: str) -> str:
    return f"IMAGE: [{ref_index}] : {description}.\n\n"


FEW_SHOTS = [
    {
        "input": """# Conversation History:
<user>
I'm planning a birthday party for my friend. Can you suggest some good catering options?
</user>

# Available Tools:
[
    {
        "name": "get_catering_options",
        "description": "Use this function to retrieve a list of available catering options based on the type of event and dietary preferences.\n\n:param event_type: The type of event (e.g., birthday, wedding, corporate).\n:param dietary_preferences: Any specific dietary preferences or restrictions (e.g., vegetarian, gluten-free).\n:return: A string containing the list of catering options matching the provided criteria.",
        "parameters": {
            "type": "object",
            "properties": {
                "event_type": {
                    "type": "string"
                },
                "dietary_preferences": {
                    "type": "string"
                }
            }
        }
    },
    {
        "name": "book_catering",
        "description": "Use this function to book a catering service when all required information is available (selected catering option, event details, and payment information).\n\nArgs:\n    selected_option (str): The selected catering option.\n    event_details (dict): The details of the event (date, time, location, number of guests).\n    payment_info (dict): The payment information provided by the user.\n\nReturns:\n    The result of the booking operation.",
        "parameters": {
            "type": "object",
            "properties": {
                "selected_option": {
                    "type": "string"
                },
                "event_details": {
                    "type": "object"
                },
                "payment_info": {
                    "type": "object"
                }
            }
        }
    }
]""",
        "output": """<justification>
The user is asking for suggestions on catering options for a birthday party. We can use the "get_catering_options" tool to retrieve a list of suitable options based on the event type (birthday) and any dietary preferences provided by the user.
</justification>

<selected_tools>
{
    "tool_calls": [
        {
            "name": "get_catering_options",
            "arguments": {
                "event_type": "birthday",
                "dietary_preferences": ""
            }
        }
    ]
}
</selected_tools>""",
    },
    {
        "input": """# Conversation History:
<user>
I'm looking for a new laptop for work. Any recommendations?
</user>
<assistant>
Sure, I can help you with that. To provide the best recommendations, could you please share some more information about your requirements? Here are a few questions to consider:

1. What is your budget range?
2. What size laptop do you prefer (e.g., 13-inch, 15-inch)?
3. Do you have any specific performance requirements (e.g., processor, RAM, storage)?
4. Are there any particular features you're looking for (e.g., long battery life, lightweight)?

Once you provide these details, I can suggest some laptop models that would suit your needs.
</assistant>
<user>
Thanks for the questions. My budget is around $1,500, and I prefer a 14-inch or 15-inch laptop. I need a powerful processor and at least 16GB of RAM for running multiple applications simultaneously. A long battery life would be a plus.
</user>

# Available Tools:
[
    {
        "name": "get_laptop_recommendations",
        "description": "Use this function to retrieve a list of laptop recommendations based on the provided criteria (budget, size, performance requirements, and features).\n\n:param criteria: A dictionary containing the user's laptop preferences.\n:return: A string containing the list of laptop recommendations matching the provided criteria.",
        "parameters": {
            "type": "object",
            "properties": {
                "criteria": {
                    "type": "object"
                }
            }
        }
    },
    {
        "name": "compare_laptops",
        "description": "Use this function to compare the specifications and features of two or more laptop models.\n\n:param laptop_models: A list of laptop models to compare.\n:return: A string containing the comparison of the provided laptop models.",
        "parameters": {
            "type": "object",
            "properties": {
                "laptop_models": {
                    "type": "array",
                    "items": {
                        "type": "string"
                    }
                }
            }
        }
    }
]""",
        "output": """<justification>
The user has provided specific criteria for laptop recommendations, including budget, size, performance requirements, and desired features. We can use the "get_laptop_recommendations" tool to retrieve a list of laptops matching these criteria. Additionally, we can use the "compare_laptops" tool to provide a comparison of the recommended models to help the user make an informed decision.
</justification>

<selected_tools>
{
    "tool_calls": [
        {
            "name": "get_laptop_recommendations",
            "arguments": {
                "criteria": {
                    "budget": 1500,
                    "size": "14-inch or 15-inch",
                    "performance": {
                        "processor": "powerful",
                        "RAM": "at least 16GB"
                    },
                    "features": {
                        "battery_life": "long"
                    }
                }
            }
        },
        {
            "name": "compare_laptops",
            "arguments": {
                "laptop_models": []
            }
        }
    ]
}
</selected_tools>""",
    },
    {
        "input": """# Conversation History:
<user>
I'm planning a trip to Europe next month. Do you have any tips for packing light?
</user>
<assistant>
Packing light is a great way to make your travel experience more enjoyable and hassle-free. Here are some tips to help you pack efficiently:

1. Choose versatile clothing items that can be mixed and matched to create multiple outfits.
2. Opt for lightweight, quick-drying fabrics that are easy to wash and don't take up much space.
3. Limit your shoes to two or three comfortable pairs that can be worn with different outfits.
4. Use packing cubes or compression bags to organize your belongings and save space in your luggage.
5. Pack travel-sized toiletries and only bring the essentials.
6. Wear your bulkiest items, like jackets or boots, on the plane to save space in your luggage.
7. Consider doing laundry during your trip to reduce the amount of clothing you need to pack.

Remember, the key is to be selective and only pack what you truly need. If you have any specific questions about packing for your European trip, feel free to ask!
</assistant>
<user>
Thanks for the great advice! I'll definitely keep these tips in mind while packing. Do you have any recommendations for must-see attractions or hidden gems in Europe?
</user>

# Available Tools:
[
    {
        "name": "get_packing_checklist",
        "description": "Use this function to generate a personalized packing checklist based on the user's destination, trip duration, and travel preferences.\n\n:param destination: The user's travel destination.\n:param duration: The duration of the user's trip.\n:param preferences: Any specific travel preferences or requirements.\n:return: A string containing the personalized packing checklist.",
        "parameters": {
            "type": "object",
            "properties": {
                "destination": {
                    "type": "string"
                },
                "duration": {
                    "type": "string"
                },
                "preferences": {
                    "type": "string"
                }
            }
        }
    }
]""",
        "output": """<justification>
The user is asking for recommendations on must-see attractions and hidden gems in Europe, which is a broad and open-ended question. The available tool, "get_packing_checklist," is not relevant to answering this query. The best approach would be to provide a general response based on popular destinations and experiences, or ask the user for more specific information about their interests and the countries they plan to visit.
</justification>

<selected_tools>
{
    "tool_calls": []
}
</selected_tools>""",
    },
    {
        "input": """# Conversation History:
<user>
Hi what are available comic movies on 23th March?
</user>
<assistant>
Thank you for providing the information! I found two comic movies showing on March 23rd. Here are the details:

1. "Never Say Never Again" - High octane thrillers, PG, 2 hrs 14 mins, 1983, HD.
   Airing time: 2:25 PM EDT.

2. "Scary Movie 3" - Slashers, PG-13, 1 hr 25 mins, 2003, HD.
   Airing time: 11:23 PM EDT.

Additionally, I noticed "The Strangers: Prey at Night" is also available on March 23rd if you're interested in a horror movie.
- Horror, R, 1 hr 25 mins, 2018, HD.
  Airing time: 12:03 PM EDT.

Please let me know if you'd like to book any of these movies. I'm here to help!
</assistant>
<user>
Interesting, tell me more about this movie "Never Say Never Again,", what is its plot?
</user>

# Available Tools: 
[
    {
        "name": "get_available_slots",
        "description": "Use this function to search the Cinemax database of available movies, show time, and date.\n\n:param query: The query to search the Cinemax database of available movies, show time, and date.\n:return: A string containing the response to the query.",
        "parameters": {
            "type": "object",
            "properties": {
                "movie_slot_query": {
                    "type": "string"
                }
            }
        }
    },
    {
        "name": "book_cinema_ticket",
        "description": "Use this function ONLY for booking a ticket, when all info is available (movie name, date, time and suer email). Do NOT use this function when user asks for movie details and other things\n\nArgs:\n    movie_name (str): The name of the movie.\n    date (Optional[str], optional): The date of the movie.\n    time (Optional[str], optional): The time of the movie.\n    user_email (Optional[str], optional): The email of the user. Defaults to None.\n\nReturns:\n    The result of the operation.",
        "parameters": {
            "type": "object",
            "properties": {
                "movie_name": {
                    "type": "string"
                },
                "date": {
                    "type": [
                        "string",
                        "null"
                    ]
                },
                "time": {
                    "type": [
                        "string",
                        "null"
                    ]
                },
                "user_email": {
                    "type": [
                        "string",
                        "null"
                    ]
                }
            }
        }
    },
    {
        "name": "get_chat_history",
        "description": "Returns the chat history between the user and assistant.\n\n:param num_chats: The number of chats to return.\n    Each chat contains 2 messages. One from the user and one from the assistant.\n    Default: 3\n:return: A list of dictionaries representing the chat history.\n\nExample:\n    - To get the last chat, use num_chats=1.\n    - To get the last 5 chats, use num_chats=5.\n    - To get all chats, use num_chats=None.\n    - To get the first chat, use num_chats=None and pick the first message.",
        "parameters": {
            "type": "object",
            "properties": {
                "num_chats": {
                    "type": [
                        "number",
                        "null"
                    ]
                }
            }
        }
    }
]""",
        "output": """<justification>
The user is asking for more information than any of the tools or the chat history can provide. Therefore, I can't use any of the tools to answer the user's question.
</justification>

<selected_tools>
{
    "tool_calls": []
}   
</selected_tools>""",
    },
    {
        "input": """# Conversation History:
<user>
What was the closing price of Apple Inc. (AAPL) stock yesterday?
</user>

# Available Tools:
[
    {
        "name": "get_stock_price",
        "description": "Use this function to retrieve the latest price information for a given stock symbol.\n\n:param symbol: The stock symbol to retrieve the price for (e.g., 'AAPL' for Apple Inc.).\n:return: A string containing the latest price information for the specified stock.",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {
                    "type": "string"
                }
            }
        }
    },
    {
        "name": "get_stock_history",
        "description": "Use this function to retrieve the historical price data for a given stock symbol over a specified period.\n\n:param symbol: The stock symbol to retrieve the historical data for.\n:param period: The time period for the historical data (e.g., '1d', '5d', '1mo', '1y').\n:return: A string containing the historical price data for the specified stock and period.",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {
                    "type": "string"
                },
                "period": {
                    "type": "string"
                }
            }
        }
    }
]""",
        "output": """<justification>
The user is asking for the closing price of Apple Inc. (AAPL) stock from the previous day. We can use the "get_stock_price" tool to retrieve the latest price information for the AAPL stock symbol. The "get_stock_history" tool is not needed in this case, as the user is only asking for the closing price from yesterday.
</justification>

<selected_tools>
{
    "tool_calls": [
        {
            "name": "get_stock_price",
            "arguments": {
                "symbol": "AAPL"
            }
        }
    ]
}
</selected_tools>""",
    },
    {
        "input": """# Conversation History:
<user>
I'm considering investing in Tesla (TSLA) stock. Can you provide me with some information to help me make a decision?
</user>
<assistant>
To help you make an informed decision about investing in Tesla (TSLA) stock, I can provide you with the following information:

1. Latest price: The current price of TSLA stock can give you an idea of its market value.

2. Historical performance: Analyzing the stock's price history over different periods (e.g., 1 month, 6 months, 1 year) can help you understand its volatility and potential for growth.

3. Company fundamentals: Researching Tesla's financial statements, revenue growth, profitability, and future projects can provide insights into the company's overall health and potential.

4. Market sentiment: Monitoring news, analyst ratings, and investor sentiment can help gauge the market's perception of Tesla stock.

Let me know which specific information you'd like me to provide first, and I'll be happy to assist you further.
</assistant>
<user>
Thanks for the overview. Let's start with the latest price and the historical performance over the past 6 months. That should give me a good starting point.
</user>

# Available Tools:
[
    {
        "name": "get_stock_price",
        "description": "Use this function to retrieve the latest price information for a given stock symbol.\n\n:param symbol: The stock symbol to retrieve the price for (e.g., 'AAPL' for Apple Inc.).\n:return: A string containing the latest price information for the specified stock.",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {
                    "type": "string"
                }
            }
        }
    },
    {
        "name": "get_stock_history",
        "description": "Use this function to retrieve the historical price data for a given stock symbol over a specified period.\n\n:param symbol: The stock symbol to retrieve the historical data for.\n:param period: The time period for the historical data (e.g., '1d', '5d', '1mo', '1y').\n:return: A string containing the historical price data for the specified stock and period.",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {
                    "type": "string"
                },
                "period": {
                    "type": "string"
                }
            }
        }
    }
]""",
        "output": """<justification>
The user has requested two specific pieces of information to help them make a decision about investing in Tesla (TSLA) stock: the latest price and the historical performance over the past 6 months. To provide this information, we need to use both the "get_stock_price" and "get_stock_history" tools.

1. "get_stock_price" will be used to retrieve the current price of TSLA stock.
2. "get_stock_history" will be used to retrieve the price history of TSLA stock over the past 6 months (period: '6mo').

By combining the results from these two tools, we can give the user the requested information to support their investment decision.
</justification>

<selected_tools>
{
    "tool_calls": [
        {
            "name": "get_stock_price",
            "arguments": {
                "symbol": "TSLA"
            }
        },
        {
            "name": "get_stock_history",
            "arguments": {
                "symbol": "TSLA",
                "period": "6mo"
            }
        }
    ]
}
</selected_tools>""",
    },
    {
        "input": """# Conversation History:
<user>
What do you think about the recent performance of the stock market?
</user>
<assistant>
The stock market's recent performance has been mixed, with some sectors showing strong growth while others have experienced volatility. Here are a few key points:

1. Tech stocks: Many technology companies have seen significant gains, driven by strong earnings and optimism about the sector's long-term prospects.

2. Energy sector: Oil and gas stocks have been under pressure due to fluctuating crude oil prices and concerns about the global economic recovery.

3. Health care: The health care sector has been a mixed bag, with some companies benefiting from the focus on COVID-19 treatments and vaccines, while others have faced challenges.

4. Overall market: The broader stock market indices, such as the S&P 500 and Dow Jones Industrial Average, have reached record highs recently, supported by positive economic data and optimism about the post-pandemic recovery.

It's important to remember that the stock market's performance is influenced by various factors, including economic indicators, geopolitical events, and investor sentiment. As an AI language model, I cannot provide personalized investment advice, but I suggest researching specific sectors and companies, and consulting with a financial advisor before making investment decisions.
</assistant>
<user>
Thanks for the insights. It's helpful to get a general overview of the market's performance across different sectors. I'll definitely do more research and consult with a financial advisor before making any investment decisions.
</user>

# Available Tools:
[
    {
        "name": "get_stock_price",
        "description": "Use this function to retrieve the latest price information for a given stock symbol.\n\n:param symbol: The stock symbol to retrieve the price for (e.g., 'AAPL' for Apple Inc.).\n:return: A string containing the latest price information for the specified stock.",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {
                    "type": "string"
                }
            }
        }
    },
    {
        "name": "get_stock_history",
        "description": "Use this function to retrieve the historical price data for a given stock symbol over a specified period.\n\n:param symbol: The stock symbol to retrieve the historical data for.\n:param period: The time period for the historical data (e.g., '1d', '5d', '1mo', '1y').\n:return: A string containing the historical price data for the specified stock and period.",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {
                    "type": "string"
                },
                "period": {
                    "type": "string"
                }
            }
        }
    }
]""",
        "output": """<justification>
The user's question about the recent performance of the stock market is broad and does not require the use of the available tools. The conversation history shows that the assistant has provided a general overview of the market's performance across different sectors, which sufficiently addresses the user's question. The user also acknowledges that the insights are helpful and expresses their intention to conduct further research and consult with a financial advisor. As a result, no specific tools are needed to answer this query.
</justification>

<selected_tools>
{
    "tool_calls": []
}
</selected_tools>""",
    },
]

import random
def get_suffix():
    random.shuffle(FEW_SHOTS)
    # Turn each element of FEW_SHOTS into a string like '-- EXAMPLE i ---\nINPUT:\n{input}\n\nOUTPUT:\n{output}\n\n', then join them by \n\n
    few_shots = "\n\n".join(
        [
            f'-- EXAMPLE {i} ---\nINPUT:\n{example["input"]}\n\nOUTPUT:\n{example["output"]}\n\n'
            for i, example in enumerate(FEW_SHOTS, 1)
        ]
    )
    # Replace FEW_SHOTS with the actual examples
    return SUFFIX.replace("FEW_SHOTS", few_shots)
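
# A quick sketch of what get_suffix() returns (illustrative): the SUFFIX
# template with its FEW_SHOTS placeholder replaced by the shuffled, numbered
# examples. Note that random.shuffle mutates the module-level FEW_SHOTS list
# in place on every call.
#
#   prompt = get_suffix()
#   # prompt now contains blocks like:
#   # -- EXAMPLE 1 ---
#   # INPUT:
#   # <conversation history and available tools>
#   # OUTPUT:
#   # <justification and selected_tools JSON>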


================================================
FILE: app/providers.py
================================================
from groq import Groq
from openai import OpenAI
from litellm import completion as sync_call_llm
import litellm


class BaseProvider:
    def __init__(self, api_key: str, base_url = None):
        self.api_key = api_key
        self.parser_model = ""
        self.route_model = ""

    def route(self, model: str, messages: list, **kwargs):
        pass

    async def route_async(self, model: str, messages: list, **kwargs):
        pass

    def clean_params(self, params):
        pass


class OpenaiProvider(BaseProvider):
    def __init__(self, api_key: str, base_url = None):
        super().__init__(api_key)
        self._client = OpenAI(api_key=api_key)
        self.parser_model = "gpt-3.5-turbo"
        self.route_model = "gpt-3.5-turbo"
        self.exclude_params = ["messages"]

    def route(self, model: str, messages: list, **kwargs):
        completion = self._client.chat.completions.create(
            model=model,
            messages=messages,
            **kwargs
        )
        return completion

    def clean_params(self, params):
        return {k: v for k, v in params.items() if k not in self.exclude_params}


class GroqProvider(BaseProvider):
    def __init__(self, api_key: str, base_url = None):
        super().__init__(api_key)
        self._client = Groq(api_key=api_key)
        self.parser_model = "mixtral-8x7b-32768"
        self.route_model = "mixtral-8x7b-32768"
        self.exclude_params = ["messages", "tools", "tool_choice"]
       
    def route(self, model: str, messages: list, **kwargs):
        completion = self._client.chat.completions.create(
            model=model,
            messages=messages,
            **kwargs
        )
        return completion
    
    def clean_params(self, params):
        return {k: v for k, v in params.items() if k not in self.exclude_params}
        

class OllamaProvider(BaseProvider):
    def __init__(self, api_key: str, base_url = None):
        super().__init__(api_key)
        self.parser_model = "gemma:2b"
        self.route_model = "gemma:7b"
        self.exclude_params = ["messages", "tools", "tool_choice"]
        
    def route(self, model: str, messages: list, **kwargs):
        # Filter out assistant messages that carry a "tool_calls" key
        messages = [
            message
            for message in messages
            if not (message["role"] == "assistant" and "tool_calls" in message)
        ]
        params = self.clean_params(kwargs)
        params = {
            'max_tokens': 2048,
            **params
        }
        response = sync_call_llm(
            model=f"ollama/{model}",
            api_base="http://localhost:11434",
            messages=messages,
            **params
        )

        return response
    
    def clean_params(self, params):
        return {k: v for k, v in params.items() if k not in self.exclude_params}

================================================
FILE: app/reasoning/__init__.py
================================================


================================================
FILE: app/reasoning/base.py
================================================
from abc import ABC, abstractmethod
from typing import Dict

class ReasoningBase(ABC):
    name: str
    description: str

    @abstractmethod
    def run(self, context) -> Dict:
        pass



================================================
FILE: app/reasoning/rerank.py
================================================
import os
import concurrent.futures
from dotenv import load_dotenv
from providers import GroqProvider
from .base import ReasoningBase

load_dotenv()

def get_rerank_prompt(query, responses, top_k):
    prompt = f"""You are an AI assistant tasked with evaluating and selecting the best responses to a user's request. The user's request is:

<context>
{query}
</context>

Here are the responses generated by different programmers to the user's request:

{responses}

# Task:
Your task is to evaluate these responses and select the top {top_k} that best address the user's request. Consider factors such as relevance, clarity, and completeness when making your selection.

After selecting the top {top_k} responses, generate a final response by merging and summarizing the selected responses. Format your output as follows:

**Make sure to wrap the final answer supposed to be back to user using >>>**

# Example of your response:
After evaluating the responses, I have selected the top {top_k} that best address the user's request. Here they are:

<response index = 1>
summary of response 1

<response index = 2>
summary of response 2

...

<response index = top_k>
summary of response top_k

Based on these top {top_k} responses, I have generated a final response by merging and summarizing them:

>>>
final_response_for_user
>>>"""

    return prompt

class RerankReasoning(ReasoningBase):
    name = "rerank"
    description = "Use this reasoning strategy to generate and rerank responses to a user query."

    def __init__(self, generator_model: str, reranker_model: str, n: int = 5, top_k: int = 3):
        self.generator_model = generator_model
        self.reranker_model = reranker_model
        self.n = n
        self.top_k = top_k
        self.generator_provider = GroqProvider(api_key=os.getenv("GROQ_API_KEY"))
        self.reranker_provider = GroqProvider(api_key=os.getenv("GROQ_API_KEY"))

    def run(self, context):
        message_stories = context.messages

        # Generate responses in parallel using a thread pool
        with concurrent.futures.ThreadPoolExecutor() as executor:
            response_futures = [executor.submit(self.generator_provider.route, model=self.generator_model, messages=message_stories) for _ in range(self.n)]
            # each future resolves to an OpenAI-style completion object
            responses = [future.result().choices[0].message.content for future in concurrent.futures.as_completed(response_futures)]

        unique_responses = list(set(responses))

        # Adjust top_k if it's greater than the number of unique responses
        if self.top_k > len(unique_responses):
            self.top_k = int(len(unique_responses) * 0.4)

        # Generate prompt for reranking
        prompt = get_rerank_prompt(
            query=message_stories[-1]["content"],
            responses='\n\n'.join([f'<response index={idx + 1}>\n{response}\n</response>' for idx, response in enumerate(unique_responses)]),
            top_k=self.top_k
        )

        # Rerank responses
        rerank_completion = self.reranker_provider.route(model=self.reranker_model, messages=[{"role": "user", "content": prompt}])
        reranked_response = rerank_completion.choices[0].message.content

        # Extract the final response (the text wrapped between the ">>>" markers)
        final_response = reranked_response.split(">>>")[1].strip()

        # Add the final response as a new message to the context
        new_message = {"role": "assistant", "content": final_response}
        context.messages.append(new_message)

        return new_message
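
# A minimal usage sketch, assuming GROQ_API_KEY is set. _FakeContext is a
# hypothetical stand-in for the app's real Context object; this strategy only
# needs a `.messages` list of chat dicts.
if __name__ == "__main__":
    class _FakeContext:
        messages = [{"role": "user", "content": "Give one tip for writing clear code."}]

    strategy = RerankReasoning(
        generator_model="mixtral-8x7b-32768",
        reranker_model="mixtral-8x7b-32768",
        n=3,
        top_k=2,
    )
    print(strategy.run(_FakeContext()))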

================================================
FILE: app/routes/__init__.py
================================================
from .proxy import router as proxy_router
from .examples import router as examples_router

__all__ = [ "proxy_router", "examples_router" ]


================================================
FILE: app/routes/examples.py
================================================

from fastapi.responses import FileResponse
from fastapi import APIRouter
import os

router = APIRouter()

# Add endpoint to download files from the ../examples folder
@router.get("/{file_path}")
async def read_examples(file_path: str):
    # get parent directory
    parent = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
    file_path = f"{parent}/examples/{file_path}"
    if os.path.exists(file_path):
        return FileResponse(file_path)
    else:
        return {"error": "File not found."}

# @router.get("/examples")
# async def read_root():
#     return {"message": "Hello World", "examples": [
#         "/example/example_1.py",
#         "/example/example_2.py",
#         "/example/example_3.py",
#         "/example/example_4.py",
#     ]}

================================================
FILE: app/routes/proxy.py
================================================
from fastapi import APIRouter, Response, Request, Path, Query
from fastapi.responses import JSONResponse
# from libs.chains import (
#     Context,
#     ProviderSelectionHandler,
#     ImageMessageHandler,
#     ToolExtractionHandler,
#     ToolResponseHandler,
#     DefaultCompletionHandler,
#     FallbackHandler,
# )

from libs import (
    Context,
    ProviderSelectionHandler,
    ImageMessageHandler,
    ToolExtractionHandler,
    ToolResponseHandler,
    DefaultCompletionHandler,
    FallbackHandler,
)


from typing import Optional

router = APIRouter()


# Add get endpoint for /openai/v1 and print request body
@router.get("/{provider}/v1")
async def get_openai_v1(
    response: Response, provider: str = Path(..., title="Provider")
) -> JSONResponse:
    return JSONResponse(content={"message": f"GET request to {provider} v1"})


@router.post("/groqchain/{provider}/v1/chat/completions")
async def post_groq_chat_completions(
    request: Request,
    provider: str = Path(..., title="Provider")
) -> JSONResponse:
    # Call the original post_chat_completions method with provider set to "groq"
    return await post_chat_completions(request, provider="groq")


@router.post("/{provider}/v1/chat/completions")
async def post_chat_completions(
    request: Request,
    provider: str = Path(..., title="Provider")
) -> JSONResponse:
    try:
        if not provider:
            provider = "openai"

        if not ProviderSelectionHandler.provider_exists(provider):
            return JSONResponse(content={"error": "Invalid provider"}, status_code=400)

        # Extract the API token and body from the request
        api_token = request.headers.get("Authorization").split("Bearer ")[1]
        body = await request.json()

        # Initialize the context with request details
        context = Context(request, provider, body)
        context.api_token = (
            api_token  # Adding the API token to the context for use in handlers
        )

        # Initialize and link the handlers
        provider_selection_handler = ProviderSelectionHandler()
        image_message_handler = ImageMessageHandler()
        tool_extraction_handler = ToolExtractionHandler()
        tool_response_handler = ToolResponseHandler()
        default_completion_handler = DefaultCompletionHandler()
        fallback_handler = FallbackHandler()

        # Set up the chain of responsibility
        chains = [
            provider_selection_handler,
            image_message_handler,
            tool_extraction_handler,
            tool_response_handler,
            default_completion_handler,
            fallback_handler,
        ]
        for i in range(len(chains) - 1):
            chains[i].set_next(chains[i + 1])

        # provider_selection_handler.set_next(tool_extraction_handler).set_next(
        #     tool_response_handler
        # ).set_next(default_completion_handler).set_next(fallback_handler)

        # Execute the chain with the initial context
        response = await provider_selection_handler.handle(context)

        # Return the response generated by the handlers
        return response
    except Exception as e:
        print(f"Error processing the request: {e}")
        return JSONResponse(
            content={"error": "An unexpected error occurred"}, status_code=500
        )
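
# A minimal client-side sketch: point an OpenAI-compatible client at this
# proxy. The localhost URL matches the cookbook examples; GROQ_API_KEY is an
# assumption about the caller's environment.
#
#   import os
#   from openai import OpenAI
#
#   client = OpenAI(
#       api_key=os.environ["GROQ_API_KEY"],
#       base_url="http://localhost:8000/proxy/groq/v1",
#   )
#   resp = client.chat.completions.create(
#       model="mixtral-8x7b-32768",
#       messages=[{"role": "user", "content": "What's the weather in Paris?"}],
#   )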


================================================
FILE: app/utils.py
================================================
import logging
import os
import replicate
import base64
from io import BytesIO


# To be developed
def create_logger(logger_name: str, log_path: str = ".logs/access.log", show_on_shell: bool = False):
    log_dir = os.path.dirname(log_path)
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)
    file_handler = logging.FileHandler(log_path)
    file_handler.setLevel(logging.DEBUG)
    formatter = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    )
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    if show_on_shell:
        stream_handler = logging.StreamHandler()
        stream_handler.setLevel(logging.DEBUG)
        shell_formatter = logging.Formatter(
            "%(levelname)s (%(name)s)   %(message)s"
        )
        stream_handler.setFormatter(shell_formatter)
        logger.addHandler(stream_handler)    
    return logger


def get_tool_call_response(tool_calls_result, unresolved_tol_calls, resolved_responses):
    last_completion = tool_calls_result["last_completion"]
    tool_response = {
        "id": "chatcmpl-" + last_completion.id if last_completion else None,
        "object": "chat.completion",
        "created": last_completion.created if last_completion else None,
        "model": last_completion.model if last_completion else None,
        "choices": [
            {
                "index": 0,
                "message": {
                    "role": "assistant",
                    "content": "", # None,
                    "tool_calls": unresolved_tol_calls,
                },
                "logprobs": None,
                "finish_reason": "tool_calls",
            }
        ],
        "resolved": resolved_responses,
        "usage": tool_calls_result["usage"],
        "system_fingerprint": last_completion.system_fingerprint if last_completion else None,
    }
    return tool_response
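
# A minimal sketch of the expected inputs (illustrative; `completion`,
# `unresolved`, and `resolved` are hypothetical values that real callers build
# from the provider's last chat completion and the parsed tool calls):
#
#   tool_calls_result = {"last_completion": completion, "usage": {...}}
#   response = get_tool_call_response(tool_calls_result, unresolved, resolved)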

def describe(prompt: str, image_url_or_base64 : str, **kwargs) -> str:
    logger = create_logger("vision", ".logs/access.log", True)
    try:
        if image_url_or_base64.startswith("data:image/"):
            # If the input is a base64 string
            image_data = base64.b64decode(image_url_or_base64.split(",")[1])
            image_file = BytesIO(image_data)
        else:
            # If the input is a URL
            image_file = image_url_or_base64
        
        model_params = {
            "top_p": 1,
            "max_tokens": 1024,
            "temperature": 0.2
        }
        model_params.update(kwargs)

        logger.info("Running the model")
        output = replicate.run(
            "yorickvp/llava-13b:01359160a4cff57c6b7d4dc625d0019d390c7c46f553714069f114b392f4a726",
            input={
                "image": image_file,
                "prompt": prompt, #"Describe the image in detail.",
                **model_params
            }
        )
        
        description = ""
        for item in output:
            if not description:
                logger.info("Streaming...")
            description += item
        
        return description.strip()
    except Exception as e:
        logger.error( f"Vision model, An error occurred: {e}")
    return None



    # describe("Describe the image in detail.", "https://replicate.delivery/pbxt/KRULC43USWlEx4ZNkXltJqvYaHpEx2uJ4IyUQPRPwYb8SzPf/view.jpg")

================================================
FILE: cookbook/ai_assistant_custome_tools.py
================================================

import os, json
from typing import Optional, List
from phi.llm.openai.like import OpenAILike
from phi.assistant import Assistant
from phi.knowledge.json import JSONKnowledgeBase
from phi.vectordb.pgvector import PgVector2
from phi.storage.assistant.postgres import PgAssistantStorage
from phi.tools import Toolkit
from phi.tools.email import EmailTools
from phi.utils.log import logger
from phi.knowledge.base import AssistantKnowledge
from phi.knowledge.base import Document
from resources import vector_db
from rich.prompt import Prompt
from dotenv import load_dotenv
load_dotenv()

# To run this example, first make sure to follow the instructions below:
# 1. Install phidata: pip install phidata
# 2. Run the following command to start a Docker container with a pgvector db running: phi start resources.py
# 3. Download the sample JSON knowledge base, cinemax.json, from the same folder as this file.

class CinemaSerachDB(Toolkit):
    def __init__(
        self,
        knowledge_base  : Optional[AssistantKnowledge] = None,
        num_documents: int = None
    ):
        super().__init__(name="get_available_slots")
        self.knowledge_base = knowledge_base
        self.num_documents = num_documents
        self.register(self.get_available_slots)

    def get_available_slots(self, movie_slot_query: str ) -> str:
        """Use this function to search the Cinemax database of available movies, show time, and date.

        :param movie_slot_query: The query to search the Cinemax database of available movies, show time, and date.
        :return: A string containing the response to the query.
        """
        relevant_docs: List[Document] = self.knowledge_base.search(query=movie_slot_query, num_documents=self.num_documents)
        if len(relevant_docs) == 0:
            return None

        return json.dumps([doc.to_dict() for doc in relevant_docs], indent=2)



class CinemaTools(Toolkit):
    def __init__(
        self,
        email_tools: Optional["EmailTools"] = None,
    ):
        super().__init__(name="cinema_tools")
        self.email_tools = email_tools
        self.register(self.book_cinema_ticket)

    def book_cinema_ticket(self, movie_name: str, date: Optional[str] = None, time: Optional[str] = None, user_email: Optional[str] = None) -> str:
        """Use this function ONLY for booking a ticket, when all info is available (movie name, date, time and suer email). Do NOT use this function when user asks for movie details and other things

        Args:
            movie_name (str): The name of the movie.
            date (Optional[str], optional): The date of the movie.
            time (Optional[str], optional): The time of the movie.
            user_email (Optional[str], optional): The email of the user. Defaults to None.

        Returns:
            The result of the operation.

        """

        anything_missed = any([not movie_name, not date, not time, not user_email])

        missed_items = []

        if anything_missed:
            if not movie_name:
                missed_items.append("error: No movie name provided, I need a movie name to book a ticket")

            if not date:
                missed_items.append("error: No date provided, I need a date to book a ticket")

            if not time:
                missed_items.append("error: No time provided, I need a time to book a ticket")

            if not user_email:
                missed_items.append("error: No user email provided, I need an email to send the ticket")

            return f"There are some missing items: \n{', '.join(missed_items)}"

        # Simulate booking the ticket
        ticket_number = self._generate_ticket_number()
        logger.info(f"Booking ticket for {movie_name} on {date} at {time}")

        # Prepare the email subject and body
        subject = f"Your ticket for {movie_name}"
        body = f"Dear user,\n\nYour ticket for {movie_name} on {date} at {time} has been booked.\n\n" \
               f"Your ticket number is: {ticket_number}\n\nEnjoy the movie!\n\nBest regards,\nThe Cinema Team"

        # Send the email using the EmailTools
        if not self.email_tools:
            return "error: No email tools provided"
        self.email_tools.receiver_email = user_email
        result = self.email_tools.email_user(subject, body)

        if result.startswith("error"):
            logger.error(f"Error booking ticket: {result}")
            return result
        return "success"

    def _generate_ticket_number(self) -> str:
        """Generates a dummy ticket number."""
        import random
        import string
        return "".join(random.choices(string.ascii_uppercase + string.digits, k=10))

kb = JSONKnowledgeBase(
    path="cinemax.json",
    vector_db=PgVector2(collection="cinemax", db_url=vector_db.get_db_connection_local()),
)
storage = PgAssistantStorage(
    table_name="cinemax_assistant_storage",
    db_url=vector_db.get_db_connection_local(),
)

my_groq = OpenAILike(
        model="mixtral-8x7b-32768",
        api_key=os.environ["GROQ_API_KEY"],
        base_url="http://localhost:8000/proxy/groq/v1"
        # base_url="http://groqcall.ai/proxy/groq/v1"
    )


def cinemax_assistant(new: bool = False, user: str = "user"):
    run_id: Optional[str] = None
    # new = False
    # new = True
    user_id = user

    if not new:
        existing_run_ids: List[str] = storage.get_all_run_ids(user_id)
        if len(existing_run_ids) > 0:
            run_id = existing_run_ids[0]

    assistant = Assistant(
        run_id=run_id,
        user_id="test_user",
        llm=my_groq,
        # knowledge_base=kb,  # omitted, since the custom CinemaSerachDB tool queries the kb directly
        storage=storage,
        use_tools=True,
        tools=[
            CinemaSerachDB(knowledge_base=kb, num_documents=5),
            CinemaTools(EmailTools("YOUR_EMAIL_ADDRESS", "SENDER_NAME", "SENDER_EMAIL", os.environ['email_pass_key'] ))
        ], show_tool_calls=True, markdown=True,
        add_chat_history_to_messages=True, # Inject history in every prompt
        add_datetime_to_instructions=True,
        limit_tool_access=True,
        extra_instructions="",
        output_model= None, # Pydantic
        prevent_prompt_injection=True,
        add_references_to_prompt=False, # It searches in the kb and add it to every prompt
        # add_references_to_prompt=True,
        add_to_system_prompt="\n\nRemember:As a Cinemax agent, your responsibilities include:  \n\n1. Assisting individuals in locating their preferred movies using the provided database information. \n2. Facilitating ticket booking through the tools at your disposal. \n3. If critical information is missing and the registration system returns an error, generate a polite, humanized version of the error message and request the necessary additional information from users. \n4. When interacting with users, never disclose that you are an AI model unable to perform specific functions or activate certain features. \n5. When users inquire about movie showtimes, available movies, or related details, utilize your tools to access data from the knowledge base, rather than relying on your own knowledge. However, you may provide plot information about specific movies in a concise and succinct manner, ensuring responses are not excessively lengthy. \n6.When engaging in a discussion with users about films and proposing movies based on their preferences and the available options, it would be beneficial to list the available dates and times tailored to the user's interests and conversation history. This approach would simplify the booking process for the user. If the user has already specified a date, it is essential to remember and adhere to it, avoiding the suggestion of alternative dates."
    )
    kb.load(recreate=False)  # load the knowledge base consumed by the CinemaSerachDB tool

    if run_id is None:
        run_id = assistant.run_id
        print(f"Started Run: {run_id}\n")
    else:
        print(f"Continuing Run: {run_id}\n")

    while True:
        message = Prompt.ask(f"[bold] :sunglasses: {user} [/bold]")
        if message in ("exit", "bye"):
            break
        assistant.print_response(message, markdown=True, stream=False)
        # response = assistant.run(message, stream=False)

if __name__ == "__main__":
    cinemax_assistant(user="Tom")




================================================
FILE: cookbook/cinemax.json
================================================
[
    {
        "date": "March 17",
        "name": "Barbershop 2: Back in Business",
        "airing_time": "10:37 AM EDT",
        "details_spans": "['drama|', 'PG-13|', '1 hr 47 mins|', '2004|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Hustle & Flow",
        "airing_time": "12:24 PM EDT",
        "details_spans": "['crime|', 'R|', '1 hr 59 mins|', '2005|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Predators",
        "airing_time": "2:21 PM EDT",
        "details_spans": "['space & beyond|', 'R|', '1 hr 49 mins|', '2010|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Paranoia",
        "airing_time": "4:08 PM EDT",
        "details_spans": "['drama|', 'PG-13|', '1 hr 48 mins|', '2013|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "XXX",
        "airing_time": "5:55 PM EDT",
        "details_spans": "['high octane thrillers|', 'PG-13|', '2 hrs 6 mins|', '2002|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Black Mass",
        "airing_time": "8:00 PM EDT",
        "details_spans": "['organized crime|', 'R|', '2 hrs 3 mins|', '2015|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Freedomland",
        "airing_time": "10:03 PM EDT",
        "details_spans": "['big screen crime|', 'R|', '1 hr 56 mins|', '2006|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Attack the Block",
        "airing_time": "11:57 PM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 30 mins|', '2011|']"
    },
    {
        "date": "March 17",
        "name": "Fifty Shades of Grey",
        "airing_time": "1:26 AM EDT",
        "details_spans": "['drama|', 'R|', '2 hrs 6 mins|', '2015|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "A Prayer Before Dawn",
        "airing_time": "3:32 AM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 58 mins|', '2018|']"
    },
    {
        "date": "March 17",
        "name": "The Pick-Up Artist",
        "airing_time": "5:30 AM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 22 mins|', '1987|']"
    },
    {
        "date": "March 17",
        "name": "I Am Wrath",
        "airing_time": "8:20 AM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 32 mins|', '2016|']"
    },
    {
        "date": "March 17",
        "name": "Leatherheads",
        "airing_time": "9:51 AM EDT",
        "details_spans": "['romantic comedy|', 'PG-13|', '1 hr 54 mins|', '2016|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Office Space",
        "airing_time": "11:45 AM EDT",
        "details_spans": "['cult classic|', 'R|', '1 hr 31 mins|', '1999|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Paycheck",
        "airing_time": "1:15 PM EDT",
        "details_spans": "['high octane thrillers|', 'PG-13|', '2 hrs|', '2003|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "The Sea of Trees",
        "airing_time": "11:19 AM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 52 mins|', '2016|']"
    },
    {
        "date": "March 17",
        "name": "Fx2: The Deadly Art of Illusion",
        "airing_time": "1:11 PM EDT",
        "details_spans": "['mystery & suspense|', 'PG-13|', '1 hr 50 mins|', '1991|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Scary Movie",
        "airing_time": "3:00 PM EDT",
        "details_spans": "['offbeat|', 'R|', '1 hr 30 mins|', '2000|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Scary Movie 2",
        "airing_time": "4:29 PM EDT",
        "details_spans": "['horror|', 'R|', '1 hr 23 mins|', '2001|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Scary Movie 3",
        "airing_time": "5:52 PM EDT",
        "details_spans": "['slashers|', 'PG-13|', '1 hr 26 mins|', '2003|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Jennifer's Body",
        "airing_time": "7:17 PM EDT",
        "details_spans": "['comedy|', 'R|', '1 hr 44 mins|', '2009|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Friday the 13th",
        "airing_time": "9:00 PM EDT",
        "details_spans": "['slashers|', 'R|', '1 hr 39 mins|', '2009|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "The Witch",
        "airing_time": "10:38 PM EDT",
        "details_spans": "['occult|', 'R|', '1 hr 34 mins|', '2016|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Dark Places",
        "airing_time": "12:11 AM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 55 mins|', '2015|']"
    },
    {
        "date": "March 17",
        "name": "Conviction",
        "airing_time": "2:05 AM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 49 mins|', '2010|']"
    },
    {
        "date": "March 17",
        "name": "Pride and Glory",
        "airing_time": "3:53 AM EDT",
        "details_spans": "['drama|', 'NR|', '2 hrs 11 mins|', '2008|']"
    },
    {
        "date": "March 17",
        "name": "Master of the Shadowless Kick: Wong Kei-Ying",
        "airing_time": "7:39 AM EDT",
        "details_spans": "['action|', 'TV-14|', '1 hr 40 mins|', '2017|']"
    },
    {
        "date": "March 17",
        "name": "The New World",
        "airing_time": "9:18 AM EDT",
        "details_spans": "['drama|', 'NR|', '2 hrs 17 mins|', '2005|']"
    },
    {
        "date": "March 17",
        "name": "Body of Lies",
        "airing_time": "11:34 AM EDT",
        "details_spans": "['action|', 'NR|', '2 hrs 9 mins|', '2008|']"
    },
    {
        "date": "March 17",
        "name": "Barbershop",
        "airing_time": "11:00 AM EDT",
        "details_spans": "['comedy|', 'PG-13|', '1 hr 44 mins|', '2002|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Beauty Shop",
        "airing_time": "12:43 PM EDT",
        "details_spans": "['family relationships|', 'PG-13|', '1 hr 47 mins|', '2005|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Taxi",
        "airing_time": "2:29 PM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 38 mins|', '2004|']"
    },
    {
        "date": "March 17",
        "name": "Talk Black",
        "airing_time": "4:07 PM EDT",
        "details_spans": "['comedy|', 'TV-14|', '15 mins|', '2023|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "The Company You Keep",
        "airing_time": "4:21 PM EDT",
        "details_spans": "['suspense|', 'NR|', '2 hrs 3 mins|', '2013|']"
    },
    {
        "date": "March 17",
        "name": "Pure",
        "airing_time": "6:23 PM EDT",
        "details_spans": "['romantic dramas|', 'TV-14|', '14 mins|', '2022|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Extortion",
        "airing_time": "6:36 PM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 50 mins|', '2017|']"
    },
    {
        "date": "March 17",
        "name": "Hotel Artemis",
        "airing_time": "8:26 PM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 34 mins|', '2018|']"
    },
    {
        "date": "March 17",
        "name": "House at the End of the Street",
        "airing_time": "10:00 PM EDT",
        "details_spans": "['horror|', 'NR|', '1 hr 41 mins|', '2012|']"
    },
    {
        "date": "March 17",
        "name": "Attack the Block",
        "airing_time": "11:41 PM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 30 mins|', '2011|']"
    },
    {
        "date": "March 17",
        "name": "Line of Duty",
        "airing_time": "1:10 AM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 42 mins|', '2013|']"
    },
    {
        "date": "March 17",
        "name": "Poltergeist II: The Other Side",
        "airing_time": "2:51 AM EDT",
        "details_spans": "['horror|', 'NR|', '1 hr 32 mins|', '1986|']"
    },
    {
        "date": "March 17",
        "name": "Poltergeist III",
        "airing_time": "4:22 AM EDT",
        "details_spans": "['horror|', 'NR|', '1 hr 39 mins|', '1988|']"
    },
    {
        "date": "March 17",
        "name": "Yes Man",
        "airing_time": "7:31 AM EDT",
        "details_spans": "['comedy|', 'PG-13|', '1 hr 45 mins|', '2008|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Missing in Action",
        "airing_time": "9:15 AM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 43 mins|', '1984|']"
    },
    {
        "date": "March 17",
        "name": "Braddock: Missing in Action III",
        "airing_time": "10:57 AM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 45 mins|', '1988|']"
    },
    {
        "date": "March 17",
        "name": "Unbreakable",
        "airing_time": "12:41 PM EDT",
        "details_spans": "['thrillers|', 'PG-13|', '1 hr 48 mins|', '2000|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "The Frozen Ground",
        "airing_time": "10:18 AM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 46 mins|', '2013|']"
    },
    {
        "date": "March 17",
        "name": "Antitrust",
        "airing_time": "12:04 PM EDT",
        "details_spans": "['adventure|', 'PG-13|', '1 hr 50 mins|', '2001|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Charlie Wilson's War",
        "airing_time": "1:53 PM EDT",
        "details_spans": "['comedy|', 'R|', '1 hr 43 mins|', '2007|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Skin Can Breathe",
        "airing_time": "3:35 PM EDT",
        "details_spans": "['drama|', 'TV-14|', '12 mins|', '2022|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Gemini",
        "airing_time": "3:47 PM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 35 mins|', '2018|']"
    },
    {
        "date": "March 17",
        "name": "The Strangers: Prey at Night",
        "airing_time": "5:21 PM EDT",
        "details_spans": "['horror|', 'R|', '1 hr 26 mins|', '2018|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Mojave",
        "airing_time": "6:47 PM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 36 mins|', '2016|']"
    },
    {
        "date": "March 17",
        "name": "Fargo",
        "airing_time": "8:21 PM EDT",
        "details_spans": "['mystery & suspense|', 'R|', '1 hr 40 mins|', '1996|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Victor Frankenstein",
        "airing_time": "10:00 PM EDT",
        "details_spans": "['horror|', 'NR|', '1 hr 51 mins|', '2015|']"
    },
    {
        "date": "March 17",
        "name": "Revenge of the Green Dragons",
        "airing_time": "11:50 PM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 36 mins|', '2014|']"
    },
    {
        "date": "March 17",
        "name": "The Forbidden Kingdom",
        "airing_time": "1:25 AM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 46 mins|', '2008|']"
    },
    {
        "date": "March 17",
        "name": "Green Room",
        "airing_time": "3:10 AM EDT",
        "details_spans": "['horror|', 'NR|', '1 hr 36 mins|', '2016|']"
    },
    {
        "date": "March 17",
        "name": "Into the Blue 2: The Reef",
        "airing_time": "4:46 AM EDT",
        "details_spans": "['suspense|', 'TV-MA|', '1 hr 33 mins|', '2009|']"
    },
    {
        "date": "March 17",
        "name": "Altitude",
        "airing_time": "8:10 AM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 29 mins|', '2017|']"
    },
    {
        "date": "March 17",
        "name": "Friday the 13th",
        "airing_time": "9:38 AM EDT",
        "details_spans": "['horror|', 'R|', '1 hr 36 mins|', '1980|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Friday the 13th, Part II",
        "airing_time": "11:14 AM EDT",
        "details_spans": "['slashers|', 'R|', '1 hr 27 mins|', '1981|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Friday the 13th Part III",
        "airing_time": "12:41 PM EDT",
        "details_spans": "['slashers|', 'R|', '1 hr 38 mins|', '1982|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Woman Walks Ahead",
        "airing_time": "10:45 AM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 44 mins|', '2018|']"
    },
    {
        "date": "March 17",
        "name": "Winter's Tale",
        "airing_time": "12:28 PM EDT",
        "details_spans": "['fantasy-sci-fi|', 'NR|', '1 hr 59 mins|', '2014|']"
    },
    {
        "date": "March 17",
        "name": "The Dead Don't Die",
        "airing_time": "2:26 PM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 46 mins|', '2019|']"
    },
    {
        "date": "March 17",
        "name": "Paycheck",
        "airing_time": "4:11 PM EDT",
        "details_spans": "['high octane thrillers|', 'PG-13|', '2 hrs 2 mins|', '2003|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "The Pick-Up Artist",
        "airing_time": "6:11 PM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 23 mins|', '1987|']"
    },
    {
        "date": "March 17",
        "name": "Buffy the Vampire Slayer",
        "airing_time": "7:34 PM EDT",
        "details_spans": "['teen comedy|', 'PG-13|', '1 hr 27 mins|', '1992|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Observe and Report",
        "airing_time": "9:00 PM EDT",
        "details_spans": "['comedy|', 'R|', '1 hr 27 mins|', '2009|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Pens & Pencils",
        "airing_time": "10:27 PM EDT",
        "details_spans": "['mystery & suspense|', 'TV-14|', '17 mins|', '2023|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Vox Lux",
        "airing_time": "10:43 PM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 55 mins|', '2018|']"
    },
    {
        "date": "March 17",
        "name": "A Glimpse Inside the Mind of Charles Swan III",
        "airing_time": "12:38 AM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 26 mins|', '2013|']"
    },
    {
        "date": "March 17",
        "name": "Missing in Action 2 - The Beginning",
        "airing_time": "2:04 AM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 37 mins|', '1985|']"
    },
    {
        "date": "March 17",
        "name": "Major League",
        "airing_time": "3:40 AM EDT",
        "details_spans": "['classic comedy|', 'R|', '1 hr 47 mins|', '1989|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Cyrus",
        "airing_time": "7:38 AM EDT",
        "details_spans": "['romance|', 'NR|', '1 hr 32 mins|', '2010|']"
    },
    {
        "date": "March 17",
        "name": "Laggies",
        "airing_time": "9:09 AM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 41 mins|', '2014|']"
    },
    {
        "date": "March 17",
        "name": "Poltergeist (2015)",
        "airing_time": "10:49 AM EDT",
        "details_spans": "['fantasy-sci-fi|', 'NR|', '1 hr 35 mins|', '2015|']"
    },
    {
        "date": "March 17",
        "name": "Room",
        "airing_time": "12:23 PM EDT",
        "details_spans": "['family relationships|', 'R|', '1 hr 59 mins|', '2015|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Never Say Never Again",
        "airing_time": "10:07 AM EDT",
        "details_spans": "['high octane thrillers|', 'PG|', '2 hrs 16 mins|', '1983|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Mississippi Grind",
        "airing_time": "12:22 PM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 50 mins|', '2015|']"
    },
    {
        "date": "March 17",
        "name": "Dumb and Dumber To",
        "airing_time": "2:12 PM EDT",
        "details_spans": "['dark comedy|', 'PG-13|', '1 hr 51 mins|', '2014|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Appaloosa",
        "airing_time": "4:02 PM EDT",
        "details_spans": "['crime|', 'R|', '1 hr 57 mins|', '2008|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Live and Let Die",
        "airing_time": "5:58 PM EDT",
        "details_spans": "['high octane thrillers|', 'PG|', '2 hrs 2 mins|', '1973|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Across the Universe",
        "airing_time": "8:00 PM EDT",
        "details_spans": "['musicals|', 'PG-13|', '2 hrs 14 mins|', '2007|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Wonderland",
        "airing_time": "10:14 PM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 45 mins|', '2003|']"
    },
    {
        "date": "March 17",
        "name": "The Last Movie Star",
        "airing_time": "11:59 PM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 44 mins|', '2018|']"
    },
    {
        "date": "March 17",
        "name": "Walk the Line",
        "airing_time": "1:43 AM EDT",
        "details_spans": "['romantic dramas|', 'PG-13|', '2 hrs 16 mins|', '2005|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Lean on Pete",
        "airing_time": "3:58 AM EDT",
        "details_spans": "['drama|', '12|', '2 hrs 4 mins|', '2018|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Curse of the Pink Panther",
        "airing_time": "8:06 AM EDT",
        "details_spans": "['offbeat|', 'PG|', '1 hr 52 mins|', '1983|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Ronin",
        "airing_time": "9:57 AM EDT",
        "details_spans": "['action|', 'NR|', '2 hrs 3 mins|', '1998|']"
    },
    {
        "date": "March 17",
        "name": "De-Lovely",
        "airing_time": "11:58 AM EDT",
        "details_spans": "['musicals|', 'PG-13|', '2 hrs 7 mins|', '2004|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "XXX: State of the Union",
        "airing_time": "11:26 AM EDT",
        "details_spans": "['high octane thrillers|', 'PG-13|', '1 hr 41 mins|', '2005|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Fifty Shades Darker",
        "airing_time": "1:07 PM EDT",
        "details_spans": "['mystery & suspense|', 'R|', '1 hr 58 mins|', '2017|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Poltergeist (2015)",
        "airing_time": "3:04 PM EDT",
        "details_spans": "['fantasy-sci-fi|', 'NR|', '1 hr 34 mins|', '2015|']"
    },
    {
        "date": "March 17",
        "name": "Yes Man",
        "airing_time": "4:38 PM EDT",
        "details_spans": "['comedy|', 'PG-13|', '1 hr 45 mins|', '2008|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "I Love You, Beth Cooper",
        "airing_time": "6:23 PM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 42 mins|', '2009|']"
    },
    {
        "date": "March 17",
        "name": "The Vanishing of Sidney Hall",
        "airing_time": "8:05 PM EDT",
        "details_spans": "['drama|', 'NR|', '2 hrs|', '2018|']"
    },
    {
        "date": "March 17",
        "name": "Laggies",
        "airing_time": "10:05 PM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 40 mins|', '2014|']"
    },
    {
        "date": "March 17",
        "name": "Cracked",
        "airing_time": "11:45 PM EDT",
        "details_spans": "['romantic dramas|', 'TV-MA|', '16 mins|', '2022|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Strange Days",
        "airing_time": "12:00 AM EDT",
        "details_spans": "['fantasy & sci-fi|', 'R|', '2 hrs 26 mins|', '1995|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Kingsman: The Secret Service",
        "airing_time": "2:26 AM EDT",
        "details_spans": "['high octane thrillers|', 'R|', '2 hrs 9 mins|', '2015|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Johnny Dangerously",
        "airing_time": "4:35 AM EDT",
        "details_spans": "['comedy|', 'PG-13|', '1 hr 31 mins|', '1984|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "The Goods: Live Hard, Sell Hard",
        "airing_time": "7:41 AM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 31 mins|', '2009|']"
    },
    {
        "date": "March 17",
        "name": "John Dies at the End",
        "airing_time": "9:11 AM EDT",
        "details_spans": "['horror|', 'R|', '1 hr 41 mins|', '2013|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Don't Let Go",
        "airing_time": "10:51 AM EDT",
        "details_spans": "['horror|', 'NR|', '1 hr 44 mins|', '2019|']"
    },
    {
        "date": "March 17",
        "name": "Unmothered",
        "airing_time": "12:35 PM EDT",
        "details_spans": "['dark comedy|', 'TV-PG|', '16 mins|', '2021|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Strange Days",
        "airing_time": "12:50 PM EDT",
        "details_spans": "['fantasy & sci-fi|', 'R|', '2 hrs 27 mins|', '1995|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Barbershop 2: Back in Business",
        "airing_time": "10:37 AM EDT",
        "details_spans": "['drama|', 'PG-13|', '1 hr 47 mins|', '2004|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Hustle & Flow",
        "airing_time": "12:24 PM EDT",
        "details_spans": "['crime|', 'R|', '1 hr 59 mins|', '2005|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Predators",
        "airing_time": "2:21 PM EDT",
        "details_spans": "['space & beyond|', 'R|', '1 hr 49 mins|', '2010|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Paranoia",
        "airing_time": "4:08 PM EDT",
        "details_spans": "['drama|', 'PG-13|', '1 hr 48 mins|', '2013|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "XXX",
        "airing_time": "5:55 PM EDT",
        "details_spans": "['high octane thrillers|', 'PG-13|', '2 hrs 6 mins|', '2002|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Black Mass",
        "airing_time": "8:00 PM EDT",
        "details_spans": "['organized crime|', 'R|', '2 hrs 3 mins|', '2015|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Freedomland",
        "airing_time": "10:03 PM EDT",
        "details_spans": "['big screen crime|', 'R|', '1 hr 56 mins|', '2006|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Attack the Block",
        "airing_time": "11:57 PM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 30 mins|', '2011|']"
    },
    {
        "date": "March 17",
        "name": "Fifty Shades of Grey",
        "airing_time": "1:26 AM EDT",
        "details_spans": "['drama|', 'R|', '2 hrs 6 mins|', '2015|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "A Prayer Before Dawn",
        "airing_time": "3:32 AM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 58 mins|', '2018|']"
    },
    {
        "date": "March 17",
        "name": "The Pick-Up Artist",
        "airing_time": "5:30 AM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 22 mins|', '1987|']"
    },
    {
        "date": "March 17",
        "name": "I Am Wrath",
        "airing_time": "8:20 AM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 32 mins|', '2016|']"
    },
    {
        "date": "March 17",
        "name": "Leatherheads",
        "airing_time": "9:51 AM EDT",
        "details_spans": "['romantic comedy|', 'PG-13|', '1 hr 54 mins|', '2016|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Office Space",
        "airing_time": "11:45 AM EDT",
        "details_spans": "['cult classic|', 'R|', '1 hr 31 mins|', '1999|', 'HD|']"
    },
    {
        "date": "March 17",
        "name": "Paycheck",
        "airing_time": "1:15 PM EDT",
        "details_spans": "['high octane thrillers|', 'PG-13|', '2 hrs|', '2003|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Taxi",
        "airing_time": "11:24 PM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 38 mins|', '2004|']"
    },
    {
        "date": "March 18",
        "name": "Pieces of April",
        "airing_time": "1:02 AM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 21 mins|', '2003|']"
    },
    {
        "date": "March 18",
        "name": "Never Goin' Back",
        "airing_time": "2:22 AM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 28 mins|', '2018|']"
    },
    {
        "date": "March 18",
        "name": "Life After Beth",
        "airing_time": "3:49 AM EDT",
        "details_spans": "['comedy|', 'R|', '1 hr 30 mins|', '2014|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Equals",
        "airing_time": "5:18 AM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 43 mins|', '2016|']"
    },
    {
        "date": "March 18",
        "name": "Fifty Shades Darker",
        "airing_time": "7:00 AM EDT",
        "details_spans": "['mystery & suspense|', 'R|', '1 hr 59 mins|', '2017|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Rescue Dawn",
        "airing_time": "8:58 AM EDT",
        "details_spans": "['action|', 'NR|', '2 hrs 6 mins|', '2007|']"
    },
    {
        "date": "March 18",
        "name": "Election",
        "airing_time": "11:04 AM EDT",
        "details_spans": "['drama|', 'R|', '1 hr 43 mins|', '1999|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "How to Talk to Girls at Parties",
        "airing_time": "12:47 PM EDT",
        "details_spans": "['comedy|', 'R|', '1 hr 43 mins|', '2017|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Barbershop",
        "airing_time": "2:30 PM EDT",
        "details_spans": "['comedy|', 'PG-13|', '1 hr 43 mins|', '2002|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Yes Man",
        "airing_time": "4:13 PM EDT",
        "details_spans": "['comedy|', 'PG-13|', '1 hr 45 mins|', '2008|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "The Company You Keep",
        "airing_time": "5:58 PM EDT",
        "details_spans": "['suspense|', '2 hrs 2 mins|', '2013|']"
    },
    {
        "date": "March 18",
        "name": "Unbreakable",
        "airing_time": "8:00 PM EDT",
        "details_spans": "['thrillers|', 'PG-13|', '1 hr 47 mins|', '2000|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Kingsman: The Secret Service",
        "airing_time": "9:47 PM EDT",
        "details_spans": "['high octane thrillers|', 'R|', '2 hrs 9 mins|', '2015|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Blue Valentine",
        "airing_time": "11:56 PM EDT",
        "details_spans": "['drama|', 'R|', '1 hr 53 mins|', '2011|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "A Most Violent Year",
        "airing_time": "11:07 PM EDT",
        "details_spans": "['drama|', 'R|', '2 hrs 6 mins|', '2015|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Little Woods",
        "airing_time": "1:12 AM EDT",
        "details_spans": "['suspense|', 'NR|', '1 hr 46 mins|', '2019|']"
    },
    {
        "date": "March 18",
        "name": "Source Code",
        "airing_time": "2:56 AM EDT",
        "details_spans": "['suspense|', 'NR|', '1 hr 35 mins|', '2011|']"
    },
    {
        "date": "March 18",
        "name": "Spaceballs",
        "airing_time": "4:30 AM EDT",
        "details_spans": "['fantasy & sci-fi|', 'PG|', '1 hr 37 mins|', '1987|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "The Fluffy Movie",
        "airing_time": "6:07 AM EDT",
        "details_spans": "['documentary|', 'NR|', '1 hr 44 mins|', '2014|']"
    },
    {
        "date": "March 18",
        "name": "Leap of Faith",
        "airing_time": "7:49 AM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 50 mins|', '1992|']"
    },
    {
        "date": "March 18",
        "name": "The Last Word",
        "airing_time": "9:37 AM EDT",
        "details_spans": "['comedy|', 'NR|', '1 hr 50 mins|', '2017|']"
    },
    {
        "date": "March 18",
        "name": "The Answer Man",
        "airing_time": "11:25 AM EDT",
        "details_spans": "['romantic comedy|', 'R|', '1 hr 38 mins|', '2009|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "My Boss\u2019s Daughter",
        "airing_time": "1:03 PM EDT",
        "details_spans": "['comedy|', 'PG-13|', '1 hr 27 mins|', '2003|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "A Night at the Roxbury",
        "airing_time": "2:30 PM EDT",
        "details_spans": "['offbeat|', 'PG-13|', '1 hr 22 mins|', '1998|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Trail of the Pink Panther",
        "airing_time": "3:52 PM EDT",
        "details_spans": "['big screen crime|', 'PG|', '1 hr 37 mins|', '1982|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Whiteout",
        "airing_time": "5:29 PM EDT",
        "details_spans": "['action|', '1 hr 41 mins|', '2009|']"
    },
    {
        "date": "March 18",
        "name": "Victor Frankenstein",
        "airing_time": "7:10 PM EDT",
        "details_spans": "['horror|', '1 hr 50 mins|', '2015|']"
    },
    {
        "date": "March 18",
        "name": "Evan Almighty",
        "airing_time": "9:00 PM EDT",
        "details_spans": "['kids & family|', 'PG|', '1 hr 36 mins|', '2007|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Dumb and Dumber To",
        "airing_time": "10:36 PM EDT",
        "details_spans": "['dark comedy|', 'PG-13|', '1 hr 50 mins|', '2014|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Hot Summer Nights",
        "airing_time": "12:26 AM EDT",
        "details_spans": "['comedy|', '1 hr 49 mins|', '2018|']"
    },
    {
        "date": "March 18",
        "name": "Missing in Action",
        "airing_time": "11:45 PM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 43 mins|', '1984|']"
    },
    {
        "date": "March 18",
        "name": "Teen Wolf",
        "airing_time": "1:27 AM EDT",
        "details_spans": "['comedy|', 'PG|', '1 hr 33 mins|', '1985|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Paparazzi",
        "airing_time": "3:00 AM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 26 mins|', '2004|']"
    },
    {
        "date": "March 18",
        "name": "American Heist",
        "airing_time": "4:25 AM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 37 mins|', '2015|']"
    },
    {
        "date": "March 18",
        "name": "Missing in Action 2 - The Beginning",
        "airing_time": "6:01 AM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 36 mins|', '1985|']"
    },
    {
        "date": "March 18",
        "name": "Predators",
        "airing_time": "7:37 AM EDT",
        "details_spans": "['space & beyond|', 'R|', '1 hr 47 mins|', '2010|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Children of the Corn II: The Final Sacrifice",
        "airing_time": "9:24 AM EDT",
        "details_spans": "['horror|', 'R|', '1 hr 36 mins|', '1993|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Brokedown Palace",
        "airing_time": "10:58 AM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 43 mins|', '1999|']"
    },
    {
        "date": "March 18",
        "name": "The Rage: Carrie 2",
        "airing_time": "12:39 PM EDT",
        "details_spans": "['occult|', 'R|', '1 hr 45 mins|', '1999|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "The Edge",
        "airing_time": "2:24 PM EDT",
        "details_spans": "['suspense|', '1 hr 58 mins|', '1997|']"
    },
    {
        "date": "March 18",
        "name": "Major League",
        "airing_time": "4:22 PM EDT",
        "details_spans": "['classic comedy|', 'R|', '1 hr 47 mins|', '1989|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Pens & Pencils",
        "airing_time": "6:09 PM EDT",
        "details_spans": "['mystery & suspense|', 'TV-14|', '16 mins|', '2023|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "XXX: State of the Union",
        "airing_time": "6:25 PM EDT",
        "details_spans": "['high octane thrillers|', 'PG-13|', '1 hr 41 mins|', '2005|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Leatherheads",
        "airing_time": "8:06 PM EDT",
        "details_spans": "['romantic comedy|', 'PG-13|', '1 hr 54 mins|', '2016|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Attack the Block",
        "airing_time": "10:00 PM EDT",
        "details_spans": "['action|', '1 hr 29 mins|', '2011|']"
    },
    {
        "date": "March 18",
        "name": "Don't Let Go",
        "airing_time": "11:29 PM EDT",
        "details_spans": "['horror|', '1 hr 44 mins|', '2019|']"
    },
    {
        "date": "March 18",
        "name": "Extortion",
        "airing_time": "1:13 AM EDT",
        "details_spans": "['action|', '1 hr 50 mins|', '2017|']"
    },
    {
        "date": "March 18",
        "name": "Blown Away",
        "airing_time": "11:38 PM EDT",
        "details_spans": "['drama|', 'R|', '2 hrs 2 mins|', '1994|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "The Fury",
        "airing_time": "1:39 AM EDT",
        "details_spans": "['horror|', 'R|', '1 hr 59 mins|', '1978|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Chain Reaction",
        "airing_time": "3:38 AM EDT",
        "details_spans": "['high octane thrillers|', 'PG-13|', '1 hr 47 mins|', '1996|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Internal Affairs",
        "airing_time": "5:25 AM EDT",
        "details_spans": "['drama|', 'NR|', '1 hr 55 mins|', '1990|']"
    },
    {
        "date": "March 18",
        "name": "The Gift",
        "airing_time": "7:20 AM EDT",
        "details_spans": "['drama|', 'TV-14|', '16 mins|', '2022|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Charlie Wilson's War",
        "airing_time": "7:35 AM EDT",
        "details_spans": "['comedy|', 'R|', '1 hr 43 mins|', '2007|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "F/x",
        "airing_time": "9:17 AM EDT",
        "details_spans": "['action|', 'R|', '1 hr 50 mins|', '1986|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Fx2: The Deadly Art of Illusion",
        "airing_time": "11:06 AM EDT",
        "details_spans": "['mystery & suspense|', 'PG-13|', '1 hr 49 mins|', '1991|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "The Bourne Ultimatum",
        "airing_time": "12:55 PM EDT",
        "details_spans": "['action|', 'PG-13|', '1 hr 56 mins|', '2007|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "The Frozen Ground",
        "airing_time": "2:51 PM EDT",
        "details_spans": "['drama|', '1 hr 46 mins|', '2013|']"
    },
    {
        "date": "March 18",
        "name": "The Captive",
        "airing_time": "4:37 PM EDT",
        "details_spans": "['drama|', '16+|', '1 hr 53 mins|', '2014|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "The Forbidden Kingdom",
        "airing_time": "6:30 PM EDT",
        "details_spans": "['action|', '1 hr 45 mins|', '2008|']"
    },
    {
        "date": "March 18",
        "name": "Wonderland",
        "airing_time": "8:15 PM EDT",
        "details_spans": "['drama|', '1 hr 45 mins|', '2003|']"
    },
    {
        "date": "March 18",
        "name": "Antitrust",
        "airing_time": "10:00 PM EDT",
        "details_spans": "['adventure|', 'PG-13|', '1 hr 49 mins|', '2001|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Green Room",
        "airing_time": "11:49 PM EDT",
        "details_spans": "['horror|', '1 hr 36 mins|', '2016|']"
    },
    {
        "date": "March 18",
        "name": "Climax",
        "airing_time": "1:25 AM EDT",
        "details_spans": "['horror|', '1 hr 37 mins|', '2019|']"
    },
    {
        "date": "March 18",
        "name": "Posse: The Revenge of Jessie Lee",
        "airing_time": "10:46 PM EDT",
        "details_spans": "['action|', 'NR|', '1 hr 52 mins|', '1993|']"
    },
    {
        "date": "March 18",
        "name": "The Adderall Diaries",
        "airing_time": "12:38 AM EDT",
        "details_spans": "['drama|', 'R|', '1 hr 28 mins|', '2016|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "The Company You Keep",
        "airing_time": "2:05 AM EDT",
        "details_spans": "['suspense|', 'NR|', '2 hrs 3 mins|', '2013|']"
    },
    {
        "date": "March 18",
        "name": "Leatherheads",
        "airing_time": "4:07 AM EDT",
        "details_spans": "['romantic comedy|', 'PG-13|', '1 hr 55 mins|', '2016|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Hustle & Flow",
        "airing_time": "6:01 AM EDT",
        "details_spans": "['crime|', 'R|', '1 hr 59 mins|', '2005|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Poltergeist II: The Other Side",
        "airing_time": "7:58 AM EDT",
        "details_spans": "['horror|', 'NR|', '1 hr 33 mins|', '1986|']"
    },
    {
        "date": "March 18",
        "name": "Paranoia",
        "airing_time": "9:29 AM EDT",
        "details_spans": "['drama|', 'PG-13|', '1 hr 48 mins|', '2013|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Leatherheads",
        "airing_time": "11:16 AM EDT",
        "details_spans": "['romantic comedy|', 'PG-13|', '1 hr 54 mins|', '2016|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Straight Outta Compton",
        "airing_time": "1:10 PM EDT",
        "details_spans": "['biographical dramas|', 'R|', '2 hrs 27 mins|', '2015|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Philomena",
        "airing_time": "3:37 PM EDT",
        "details_spans": "['comedy|', 'PG-13|', '1 hr 38 mins|', '2013|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Whip It",
        "airing_time": "5:15 PM EDT",
        "details_spans": "['comedy|', '1 hr 51 mins|', '2009|']"
    },
    {
        "date": "March 18",
        "name": "Another Country",
        "airing_time": "7:06 PM EDT",
        "details_spans": "['family relationships|', 'TV-MA|', '12 mins|', '2023|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "I Love You, Beth Cooper",
        "airing_time": "7:18 PM EDT",
        "details_spans": "['comedy|', '1 hr 42 mins|', '2009|']"
    },
    {
        "date": "March 18",
        "name": "I Now Pronounce You Chuck & Larry",
        "airing_time": "9:00 PM EDT",
        "details_spans": "['comedy|', 'PG-13|', '1 hr 58 mins|', '2007|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "Elizabethtown",
        "airing_time": "10:58 PM EDT",
        "details_spans": "['comedy|', '2 hrs 6 mins|', '2005|']"
    },
    {
        "date": "March 18",
        "name": "Carrie",
        "airing_time": "1:04 AM EDT",
        "details_spans": "['horror|', 'R|', '1 hr 40 mins|', '2013|', 'HD|']"
    },
    {
        "date": "March 18",
        "name": "The Spy Who Loved Me",
        "airing_time": "11:54 PM EDT",
        "details_spans": "['high octane thrillers|', 'PG|', '2 hrs 7 mins|', '1977|', 'HD|']"
    },
    {
        "date": "M
SYMBOL INDEX (152 symbols across 23 files)

FILE: app/functions/base.py
  class Function (line 4) | class Function:
    class Schema (line 8) | class Schema(BaseModel):
    method get_schema (line 12) | def get_schema(cls) -> Dict:
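  The Function base class pairs a name and description with a nested pydantic Schema and a get_schema classmethod; concrete tools (see duckduck.py, indexed next) subclass both and implement run(cls, **kwargs). A sketch of a custom tool in that style — the field, import path, and run body are illustrative assumptions, not code from the repo:

  from pydantic import Field
  from functions.base import Function  # import path assumed (modules resolve relative to app/)

  class WeatherFunction(Function):
      name = "get_weather"
      description = "Look up the current weather for a city."

      class Schema(Function.Schema):
          # Hypothetical parameter, mirroring SearchFunction's pattern.
          city: str = Field(..., description="City to look up")

      @classmethod
      def run(cls, **kwargs):
          # Illustrative stub; a real tool would call a weather API here.
          return {"city": kwargs["city"], "temperature_c": 21}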

FILE: app/functions/duckduck.py
  class SearchFunction (line 15) | class SearchFunction(Function):
    class Schema (line 19) | class Schema(Function.Schema):
    method run (line 24) | def run(cls, **kwargs):
  class NewsFunction (line 31) | class NewsFunction(Function):
    class Schema (line 35) | class Schema(Function.Schema):
    method run (line 40) | def run(cls, **kwargs):

FILE: app/libs/base_handler.py
  class Handler (line 6) | class Handler(ABC):
    method set_next (line 11) | def set_next(self, handler: "Handler") -> "Handler":
    method handle (line 16) | async def handle(self, context: Context):
  class DefaultCompletionHandler (line 26) | class DefaultCompletionHandler(Handler):
    method handle (line 27) | async def handle(self, context: Context):
  class FallbackHandler (line 40) | class FallbackHandler(Handler):
    method handle (line 41) | async def handle(self, context: Context):
  class ExceptionHandler (line 53) | class ExceptionHandler(Handler):
    method handle (line 54) | async def handle(self, handler: Handler, context: Context, exception: ...
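  base_handler.py is a chain-of-responsibility: set_next links handlers and returns the linked handler so wiring can be chained, handle either resolves the request or delegates to the next handler, and FallbackHandler / ExceptionHandler terminate the chain when nothing else applies. A minimal sketch of the pattern these signatures suggest (the concrete handler logic is illustrative):

  from abc import ABC, abstractmethod

  class Handler(ABC):
      def __init__(self):
          self._next = None

      def set_next(self, handler: "Handler") -> "Handler":
          # Returning the argument allows fluent wiring: a.set_next(b).set_next(c)
          self._next = handler
          return handler

      @abstractmethod
      async def handle(self, context):
          # Subclasses call super().handle(context) to delegate down the chain.
          if self._next:
              return await self._next.handle(context)

  class ToolCallHandler(Handler):
      async def handle(self, context):
          # Illustrative: act only on tool-call requests, else pass along.
          if getattr(context, "is_tool_call", False):
              return {"handled_by": "ToolCallHandler"}
          return await super().handle(context)

  class FallbackHandler(Handler):
      async def handle(self, context):
          return {"handled_by": "FallbackHandler"}

  root = ToolCallHandler()
  root.set_next(FallbackHandler())  # proxy pipelines are composed in this way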

FILE: app/libs/chains copy.py
  class Context (line 20) | class Context:
    method __init__ (line 21) | def __init__(self, request: Request, provider: str, body: Dict[str, An...
    method last_message (line 64) | def last_message(self):
    method is_tool_call (line 68) | def is_tool_call(self):
    method is_tool_response (line 76) | def is_tool_response(self):
    method is_normal_chat (line 80) | def is_normal_chat(self):
  class Handler (line 84) | class Handler(ABC):
    method set_next (line 89) | def set_next(self, handler: "Handler") -> "Handler":
    method handle (line 94) | async def handle(self, context: Context):
  class ProviderSelectionHandler (line 104) | class ProviderSelectionHandler(Handler):
    method provider_exists (line 106) | def provider_exists(provider: str) -> bool:
    method handle (line 116) | async def handle(self, context: Context):
  class ImageMessageHandler (line 144) | class ImageMessageHandler(Handler):
    method handle (line 145) | async def handle(self, context: Context):
  class ImageLLavaMessageHandler (line 181) | class ImageLLavaMessageHandler(Handler):
    method handle (line 182) | async def handle(self, context: Context):
  class ToolExtractionHandler (line 207) | class ToolExtractionHandler(Handler):
    method handle (line 208) | async def handle(self, context: Context):
    method process_tool_calls (line 310) | async def process_tool_calls(self, context, new_messages):
  class ToolResponseHandler (line 372) | class ToolResponseHandler(Handler):
    method handle (line 373) | async def handle(self, context: Context):
  class DefaultCompletionHandler (line 405) | class DefaultCompletionHandler(Handler):
    method handle (line 406) | async def handle(self, context: Context):
  class FallbackHandler (line 419) | class FallbackHandler(Handler):
    method handle (line 420) | async def handle(self, context: Context):
  class ExceptionHandler (line 432) | class ExceptionHandler(Handler):
    method handle (line 433) | async def handle(self, context: Context, exception: Exception):

FILE: app/libs/chains.py
  class Context (line 20) | class Context:
    method __init__ (line 21) | def __init__(self, request: Request, provider: str, body: Dict[str, An...
    method last_message (line 64) | def last_message(self):
    method is_tool_call (line 68) | def is_tool_call(self):
    method is_tool_response (line 76) | def is_tool_response(self):
    method is_normal_chat (line 80) | def is_normal_chat(self):
  class Handler (line 84) | class Handler(ABC):
    method set_next (line 89) | def set_next(self, handler: "Handler") -> "Handler":
    method handle (line 94) | async def handle(self, context: Context):
  class ProviderSelectionHandler (line 104) | class ProviderSelectionHandler(Handler):
    method provider_exists (line 106) | def provider_exists(provider: str) -> bool:
    method handle (line 116) | async def handle(self, context: Context):
  class ImageMessageHandler (line 144) | class ImageMessageHandler(Handler):
    method handle (line 145) | async def handle(self, context: Context):
  class ImageLLavaMessageHandler (line 181) | class ImageLLavaMessageHandler(Handler):
    method handle (line 182) | async def handle(self, context: Context):
  class ToolExtractionHandler (line 207) | class ToolExtractionHandler(Handler):
    method handle (line 208) | async def handle(self, context: Context):
    method process_tool_calls (line 310) | async def process_tool_calls(self, context, new_messages):
  class ToolResponseHandler (line 372) | class ToolResponseHandler(Handler):
    method handle (line 373) | async def handle(self, context: Context):
  class DefaultCompletionHandler (line 405) | class DefaultCompletionHandler(Handler):
    method handle (line 406) | async def handle(self, context: Context):
  class FallbackHandler (line 419) | class FallbackHandler(Handler):
    method handle (line 420) | async def handle(self, context: Context):
  class ExceptionHandler (line 432) | class ExceptionHandler(Handler):
    method handle (line 433) | async def handle(self, context: Context, exception: Exception):

FILE: app/libs/context.py
  class Context (line 9) | class Context:
    method __init__ (line 10) | def __init__(self, request: Request, provider: str, body: Dict[str, An...
    method last_message (line 53) | def last_message(self):
    method is_tool_call (line 57) | def is_tool_call(self):
    method is_tool_response (line 65) | def is_tool_response(self):
    method is_normal_chat (line 69) | def is_normal_chat(self):

FILE: app/libs/provider_handler.py
  class ProviderSelectionHandler (line 7) | class ProviderSelectionHandler(Handler):
    method provider_exists (line 9) | def provider_exists(provider: str) -> bool:
    method handle (line 19) | async def handle(self, context: Context):

FILE: app/libs/tools_handler.py
  class ImageLLavaMessageHandler (line 18) | class ImageLLavaMessageHandler(Handler):
    method handle (line 19) | async def handle(self, context: Context):
  class ToolExtractionHandler (line 44) | class ToolExtractionHandler(Handler):
    method handle (line 45) | async def handle(self, context: Context):
    method _prepare_conversation_history (line 78) | def _prepare_conversation_history(self, messages):
    method _prepare_tool_details (line 85) | def _prepare_tool_details(self, context):
    method _prepare_model_messages (line 110) | def _prepare_model_messages(self, messages, available_tools, suffix, l...
    method _handle_no_tool_calls (line 122) | async def _handle_no_tool_calls(self, context, tool_calls_result):
    method _process_builtin_tools (line 132) | def _process_builtin_tools(self, context, tool_calls, tool_calls_resul...
    method handle1 (line 153) | async def handle1(self, context: Context):
    method process_tool_calls (line 247) | async def process_tool_calls(self, context, new_messages):
  class ToolResponseHandler (line 348) | class ToolResponseHandler(Handler):
    method handle (line 349) | async def handle(self, context: Context):
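  tools_handler.py carries the core flow: ToolExtractionHandler prompts the model to pick tools and runs any built-in ones, and ToolResponseHandler folds the results back into the conversation. The messages are shaped like the OpenAI tool-calling format; a representative payload pair (the id, tool name, and contents are assumed for illustration):

  # Assistant turn proposing a tool call, in the OpenAI-style shape the proxy emulates:
  assistant_message = {
      "role": "assistant",
      "content": None,
      "tool_calls": [{
          "id": "call_abc123",             # illustrative id
          "type": "function",
          "function": {
              "name": "duckduckgo_search",  # name assumed for the example
              "arguments": '{"query": "llm function calling"}',
          },
      }],
  }

  # Follow-up turn returning the tool result to the model:
  tool_message = {
      "role": "tool",
      "tool_call_id": "call_abc123",
      "content": '[{"title": "...", "href": "..."}]',
  }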

FILE: app/libs/vision_handler.py
  class ImageMessageHandler (line 7) | class ImageMessageHandler(Handler):
    method handle (line 8) | async def handle(self, context: Context):
  class ImageLLavaMessageHandler (line 44) | class ImageLLavaMessageHandler(Handler):
    method handle (line 45) | async def handle(self, context: Context):

FILE: app/main.py
  function log_requests (line 36) | async def log_requests(request: Request, call_next):
  function index (line 54) | async def index(request: Request):
  function version (line 60) | async def version():

FILE: app/prompts.py
  function get_forced_tool_suffix (line 153) | def get_forced_tool_suffix(tool_name: str) -> str:
  function get_func_result_guide (line 157) | def get_func_result_guide(function_call_result: str) -> str:
  function get_image_desc_guide (line 161) | def get_image_desc_guide(ref_index: int, description: str) -> str:
  function get_suffix (line 654) | def get_suffix():

FILE: app/providers.py
  class BaseProvider (line 7) | class BaseProvider:
    method __init__ (line 8) | def __init__(self, api_key: str, base_url = None):
    method route (line 13) | def route(self, model: str, messages: list, **kwargs):
    method route_async (line 16) | async def route_async(self, model: str, messages: list, **kwargs):
    method clean_params (line 19) | def clean_params(self, params):
  class OpenaiProvider (line 23) | class OpenaiProvider(BaseProvider):
    method __init__ (line 24) | def __init__(self, api_key: str, base_url = None):
    method route (line 31) | def route(self, model: str, messages: list, **kwargs):
    method clean_params (line 39) | def clean_params(self, params):
  class GroqProvider (line 43) | class GroqProvider(BaseProvider):
    method __init__ (line 44) | def __init__(self, api_key: str, base_url = None):
    method route (line 51) | def route(self, model: str, messages: list, **kwargs):
    method clean_params (line 59) | def clean_params(self, params):
  class OllamaProvider (line 63) | class OllamaProvider(BaseProvider):
    method __init__ (line 64) | def __init__(self, api_key: str, base_url = None):
    method route (line 70) | def route(self, model: str, messages: list, **kwargs):
    method clean_params (line 87) | def clean_params(self, params):
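  providers.py is a thin adapter layer: BaseProvider stores the API key and optional base URL, route / route_async dispatch a chat completion to the backend (Groq, OpenAI, or Ollama), and clean_params drops arguments the backend doesn't accept. Usage in the style the signatures suggest — the model id and import path here are illustrative:

  import os
  from providers import GroqProvider  # import path assumed (relative to app/)

  provider = GroqProvider(api_key=os.environ["GROQ_API_KEY"])
  response = provider.route(
      model="mixtral-8x7b-32768",  # illustrative model id
      messages=[{"role": "user", "content": "Say hello."}],
      temperature=0.2,             # unsupported kwargs are stripped by clean_params
  )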

FILE: app/reasoning/base.py
  class ReasoningBase (line 4) | class ReasoningBase(ABC):
    method run (line 9) | def run(self, context) -> Dict:

FILE: app/reasoning/rerank.py
  function get_rerank_prompt (line 13) | def get_rerank_prompt(query, responses, top_k):
  class RerankReasoning (line 53) | class RerankReasoning(ReasoningBase):
    method __init__ (line 57) | def __init__(self, generator_model: str, reranker_model: str, n: int =...
    method run (line 65) | def run(self, context):
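  rerank.py implements a best-of-n reasoning step: RerankReasoning samples n candidates from generator_model, builds a ranking prompt with get_rerank_prompt(query, responses, top_k), and lets reranker_model pick the winners. The shape of that loop, sketched with stand-in callables:

  def best_of_n(generate, pick_indices, query: str, n: int = 4, top_k: int = 1):
      # generate(query) -> str and pick_indices(prompt) -> list[int] stand in
      # for calls to the generator and reranker models.
      candidates = [generate(query) for _ in range(n)]
      prompt = (
          f"Query: {query}\n"
          f"Rank the {n} responses below and return the best {top_k} indices:\n"
          + "\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
      )
      return [candidates[i] for i in pick_indices(prompt)[:top_k]]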

FILE: app/routes/examples.py
  function read_examples (line 10) | async def read_examples(file_path: str):

FILE: app/routes/proxy.py
  function get_openai_v1 (line 31) | async def get_openai_v1(
  function post_groq_chat_completions (line 38) | async def post_groq_chat_completions(
  function post_chat_completions (line 47) | async def post_chat_completions(

FILE: app/utils.py
  function create_logger (line 9) | def create_logger(logger_name: str, log_path: str = ".logs/access.log", ...
  function get_tool_call_response (line 33) | def get_tool_call_response(tool_calls_result, unresolved_tol_calls, reso...
  function describe (line 58) | def describe(prompt: str, image_url_or_base64 : str, **kwargs) -> str:

FILE: cookbook/ai_assistant_custome_tools.py
  class CinemaSerachDB (line 25) | class CinemaSerachDB(Toolkit):
    method __init__ (line 26) | def __init__(
    method get_available_slots (line 36) | def get_available_slots(self, movie_slot_query: str ) -> str:
  class CinemaTools (line 50) | class CinemaTools(Toolkit):
    method __init__ (line 51) | def __init__(
    method book_cinema_ticket (line 59) | def book_cinema_ticket(self, movie_name: str, date: Optional[str] = No...
    method _generate_ticket_number (line 110) | def _generate_ticket_number(self) -> str:
  function cinemax_assistant (line 133) | def cinemax_assistant(new: bool = False, user: str = "user"):

FILE: cookbook/function_call_force_schema.py
  function duckduckgo_search (line 17) | def duckduckgo_search(query, max_results=None):
  function duckduckgo_news (line 24) | def duckduckgo_news(query, max_results=None):

FILE: cookbook/function_call_force_tool_choice.py
  function duckduckgo_search (line 14) | def duckduckgo_search(query, max_results=None):
  function duckduckgo_news (line 21) | def duckduckgo_news(query, max_results=None):

FILE: cookbook/function_call_with_schema.py
  function duckduckgo_search (line 14) | def duckduckgo_search(query, max_results=None):
  function duckduckgo_news (line 21) | def duckduckgo_news(query, max_results=None):

FILE: examples/example_2.py
  function duckduckgo_search (line 12) | def duckduckgo_search(query, max_results=None):
  function duckduckgo_news (line 19) | def duckduckgo_news(query, max_results=None):

FILE: examples/example_4.py
  function duckduckgo_search (line 10) | def duckduckgo_search(query, max_results=None):
Condensed preview — 50 files, each showing path, character count, and a content snippet (the full structured content runs to 457K chars).
[
  {
    "path": ".github/workflows/main.yml",
    "chars": 2111,
    "preview": "name: Auto PR and Merge on Push by Specific User\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  auto-pr-and-merge:\n   "
  },
  {
    "path": ".gitignore",
    "chars": 3127,
    "preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
  },
  {
    "path": "LICENSE",
    "chars": 9136,
    "preview": "Apache License\nVersion 2.0, January 2004\nhttp://www.apache.org/licenses/\n\nTERMS AND CONDITIONS FOR USE, REPRODUCTION, AN"
  },
  {
    "path": "README.md",
    "chars": 5403,
    "preview": "# GroqCall.ai - Lightning-Fast LLM Function Calls\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badg"
  },
  {
    "path": "README_old.md",
    "chars": 13789,
    "preview": "# GroqCall.ai (I changed the name from FunckyCall to GroqCall)\n[![Open In Colab](https://colab.research.google.com/asset"
  },
  {
    "path": "app/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "app/config.py",
    "chars": 65,
    "preview": "# To be developed\nEVALUATION_CYCLES_COUNT=1\nPARSE_ERROR_TRIES = 5"
  },
  {
    "path": "app/functions/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "app/functions/base.py",
    "chars": 382,
    "preview": "from pydantic import BaseModel\nfrom typing import Dict\n\nclass Function:\n    name: str\n    description: str\n\n    class Sc"
  },
  {
    "path": "app/functions/duckduck.py",
    "chars": 1862,
    "preview": "from pydantic import BaseModel, Field\nfrom typing import Optional, Dict\nimport requests\nimport json\nfrom duckduckgo_sear"
  },
  {
    "path": "app/libs/__init__.py",
    "chars": 536,
    "preview": "from .context import Context\nfrom .base_handler import Handler, DefaultCompletionHandler, ExceptionHandler, FallbackHand"
  },
  {
    "path": "app/libs/base_handler.py",
    "chars": 2511,
    "preview": "from abc import ABC, abstractmethod\nfrom .context import Context\nfrom fastapi.responses import JSONResponse\nimport trace"
  },
  {
    "path": "app/libs/chains copy.py",
    "chars": 17687,
    "preview": "from abc import ABC, abstractmethod\nfrom typing import Any, Dict\nfrom importlib import import_module\nimport json\nimport "
  },
  {
    "path": "app/libs/chains.py",
    "chars": 17687,
    "preview": "from abc import ABC, abstractmethod\nfrom typing import Any, Dict\nfrom importlib import import_module\nimport json\nimport "
  },
  {
    "path": "app/libs/context.py",
    "chars": 2641,
    "preview": "from typing import Any, Dict\nfrom fastapi import Request\nfrom providers import BaseProvider\nfrom prompts import *\nimport"
  },
  {
    "path": "app/libs/provider_handler.py",
    "chars": 1783,
    "preview": "from importlib import import_module\nfrom fastapi.responses import JSONResponse\nfrom prompts import *\nfrom .base_handler "
  },
  {
    "path": "app/libs/tools_handler.py",
    "chars": 16456,
    "preview": "import concurrent.futures\nimport uuid\nimport json\nimport math\nfrom fastapi.responses import JSONResponse\nfrom prompts im"
  },
  {
    "path": "app/libs/vision_handler.py",
    "chars": 3019,
    "preview": "from prompts import *\nfrom utils import describe\nfrom .context import Context\nfrom .base_handler import Handler\n\n\nclass "
  },
  {
    "path": "app/main.py",
    "chars": 1900,
    "preview": "from fastapi import FastAPI\nfrom fastapi.responses import HTMLResponse\nfrom fastapi.templating import Jinja2Templates\nfr"
  },
  {
    "path": "app/models.py",
    "chars": 17,
    "preview": "# To be developed"
  },
  {
    "path": "app/prompts.py",
    "chars": 33923,
    "preview": "SYSTEM_MESSAGE_v0 = \"\"\"You are a functiona-call proxy for an advanced LLM. Your jobe is to identify the required tools f"
  },
  {
    "path": "app/providers.py",
    "chars": 2852,
    "preview": "from groq import Groq\nfrom openai import OpenAI\nfrom litellm import completion as sync_call_llm\nimport litellm\n\n\nclass B"
  },
  {
    "path": "app/reasoning/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "app/reasoning/base.py",
    "chars": 193,
    "preview": "from abc import ABC, abstractmethod\nfrom typing import Dict\n\nclass ReasoningBase(ABC):\n    name: str\n    description: st"
  },
  {
    "path": "app/reasoning/rerank.py",
    "chars": 3602,
    "preview": "from pydantic import BaseModel, Field\nfrom typing import Optional, Dict\nfrom .base import ReasoningBase\nfrom pydantic im"
  },
  {
    "path": "app/routes/__init__.py",
    "chars": 139,
    "preview": "from .proxy import router as proxy_router\nfrom .examples import router as examples_router\n\n__all__ = [ \"proxy_router\", \""
  },
  {
    "path": "app/routes/examples.py",
    "chars": 767,
    "preview": "\nfrom fastapi.responses import FileResponse\nfrom fastapi import APIRouter\nimport os\n\nrouter = APIRouter()\n\n# Add endpoin"
  },
  {
    "path": "app/routes/proxy.py",
    "chars": 3328,
    "preview": "from fastapi import APIRouter, Response, Request, Path, Query\nfrom fastapi.responses import JSONResponse\n# from libs.cha"
  },
  {
    "path": "app/utils.py",
    "chars": 3443,
    "preview": "import logging\nimport os\nimport replicate\nimport base64\nfrom io import BytesIO\n\n\n# To be developed\ndef create_logger(log"
  },
  {
    "path": "cookbook/ai_assistant_custome_tools.py",
    "chars": 8257,
    "preview": "\nimport os, json\nfrom typing import Optional, List\nfrom phi.llm.openai.like import OpenAILike\nfrom phi.assistant import "
  },
  {
    "path": "cookbook/cinemax.json",
    "chars": 168370,
    "preview": "[\n    {\n        \"date\": \"March 17\",\n        \"name\": \"Barbershop 2: Back in Business\",\n        \"airing_time\": \"10:37 AM E"
  },
  {
    "path": "cookbook/function_call_force_schema.py",
    "chars": 4856,
    "preview": "\nfrom duckduckgo_search import DDGS\nimport requests, os\nimport json\n\napi_key=os.environ[\"GROQ_API_KEY\"]\nheader = {\n    \""
  },
  {
    "path": "cookbook/function_call_force_tool_choice.py",
    "chars": 4229,
    "preview": "from duckduckgo_search import DDGS\nimport requests, os\nimport json\n\napi_key=os.environ[\"GROQ_API_KEY\"]\nheader = {\n    \"A"
  },
  {
    "path": "cookbook/function_call_ollama.py",
    "chars": 638,
    "preview": "\nfrom phi.llm.openai.like import OpenAILike\nfrom phi.assistant import Assistant\nfrom phi.tools.duckduckgo import DuckDuc"
  },
  {
    "path": "cookbook/function_call_phidata.py",
    "chars": 1123,
    "preview": "\nfrom phi.llm.openai.like import OpenAILike\nfrom phi.assistant import Assistant\nfrom phi.tools.duckduckgo import DuckDuc"
  },
  {
    "path": "cookbook/function_call_vision.py",
    "chars": 1380,
    "preview": "import requests, os\n\napi_key = os.environ[\"GROQ_API_KEY\"]\nheader = {\"Authorization\": f\"Bearer {api_key}\", \"Content-Type\""
  },
  {
    "path": "cookbook/function_call_with_schema.py",
    "chars": 4670,
    "preview": "\nfrom duckduckgo_search import DDGS\nimport requests, os\nimport json\n\napi_key=os.environ[\"GROQ_API_KEY\"]\nheader = {\n    \""
  },
  {
    "path": "cookbook/function_call_without_schema.py",
    "chars": 1202,
    "preview": "import requests\nimport json\nimport os\n\napi_key=os.environ[\"GROQ_API_KEY\"],\nheader = {\n    \"Authorization\": f\"Bearer {api"
  },
  {
    "path": "cookbook/functiona_call_groq_langchain.py",
    "chars": 1864,
    "preview": "# pip install --upgrade --quiet langchain-groq tavily-python langchain langchainhub langchain-openai\n\nfrom langchain_cor"
  },
  {
    "path": "cookbook/resources.py",
    "chars": 352,
    "preview": "from phi.docker.app.postgres import PgVectorDb\nfrom phi.docker.resources import DockerResources\n\n# -*- PgVector2 running"
  },
  {
    "path": "examples/example_1.py",
    "chars": 1491,
    "preview": "from phi.llm.openai.like import OpenAILike\nfrom phi.assistant import Assistant\nfrom phi.tools.duckduckgo import DuckDuck"
  },
  {
    "path": "examples/example_2.py",
    "chars": 4589,
    "preview": "from duckduckgo_search import DDGS\nimport requests, os\napi_key=os.environ[\"GROQ_API_KEY\"]\nimport json\nheader = {\n    \"Au"
  },
  {
    "path": "examples/example_3.py",
    "chars": 1318,
    "preview": "from duckduckgo_search import DDGS\nimport requests, os\napi_key = os.environ[\"GROQ_API_KEY\"]\nheader = {\n    \"Authorizatio"
  },
  {
    "path": "examples/example_4.py",
    "chars": 3629,
    "preview": "from duckduckgo_search import DDGS\nimport requests, os, json\napi_key = os.environ[\"GROQ_API_KEY\"]\nheader = {\n    \"Author"
  },
  {
    "path": "frontend/assets/README.md",
    "chars": 5403,
    "preview": "# GroqCall.ai - Lightning-Fast LLM Function Calls\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badg"
  },
  {
    "path": "frontend/assets/markdown.css",
    "chars": 30706,
    "preview": "@media (prefers-color-scheme: dark) {\n    .markdown-body,\n    [data-theme=\"dark\"] {\n      /*dark*/\n      color-scheme: d"
  },
  {
    "path": "frontend/assets/style.css",
    "chars": 94,
    "preview": "@import \"tailwindcss/base\";\n@import \"tailwindcss/components\";\n@import \"tailwindcss/utilities\";"
  },
  {
    "path": "frontend/pages/index.html",
    "chars": 4194,
    "preview": "<!DOCTYPE html>\n<html lang=\"en\">\n    <head>\n        <link rel=\"preconnect\" href=\"https://fonts.googleapis.com\" />\n      "
  },
  {
    "path": "frontend/pages/index_old.html",
    "chars": 25575,
    "preview": "<!DOCTYPE html>\n<html lang=\"en\">\n    <head>\n        <link rel=\"preconnect\" href=\"https://fonts.googleapis.com\" />\n      "
  },
  {
    "path": "requirements.txt",
    "chars": 1691,
    "preview": "aiohttp==3.9.3\naiosignal==1.3.1\nannotated-types==0.6.0\nanyio==4.3.0\nappnope==0.1.4\nasttokens==2.4.1\nasync-timeout==4.0.3"
  }
]

About this extraction

This page contains the full source code of the unclecode/groqcall GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 50 files (414.1 KB, roughly 117.5k tokens) and includes a symbol index of 152 extracted functions, classes, methods, constants, and types. Use it with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
