Full Code of adoptai/zapi for AI

Repository: adoptai/zapi
Branch: dev
Commit: 40fb34a773e8
Files: 38
Total size: 178.6 KB

Directory structure:
gitextract_rtr6tfjp/

├── .devenv
├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug-report.yml
│   │   ├── config.yml
│   │   └── feature-request.yml
│   ├── pull_request_template.md
│   └── workflows/
│       └── ruff-check.yml
├── .gitignore
├── .pre-commit-config.yaml
├── CONTRIBUTING.md
├── LICENSE
├── MANIFEST.in
├── README.md
├── demo.py
├── docs/
│   └── introduction.md
├── examples/
│   ├── async_usage.py
│   ├── basic_usage.py
│   ├── langchain/
│   │   ├── README.md
│   │   ├── __init__.py
│   │   └── demo.py
│   ├── llm_keys_usage.py
│   └── simple_usage.py
├── pyproject.toml
├── requirements.txt
├── scripts/
│   ├── README.md
│   └── pre-commit.sh
├── setup.py
└── zapi/
    ├── __init__.py
    ├── auth.py
    ├── cli.py
    ├── constants.py
    ├── core.py
    ├── encryption.py
    ├── exceptions.py
    ├── har_processing.py
    ├── integrations/
    │   └── langchain/
    │       └── tool.py
    ├── providers.py
    ├── session.py
    └── utils.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .devenv
================================================
LLM_API_KEY=
LLM_PROVIDER=
LLM_MODEL_NAME=
ADOPT_CLIENT_ID=
ADOPT_SECRET_KEY=
YOUR_API_URL=

================================================
FILE: .github/ISSUE_TEMPLATE/bug-report.yml
================================================
name: "🐞 Bug Report"
description: "Report a bug or unexpected behavior in ZAPI"
title: "[Bug]: <Short description>"
labels: ["bug", "needs-triage"]
body:
  - type: markdown
    attributes:
      value: |
        ## 🐞 Bug Report
        Thanks for taking the time to report a bug! Please provide as much detail as possible to help us investigate and fix it quickly.

  - type: input
    id: zapi_version
    attributes:
      label: "ZAPI Version"
      description: "Version of ZAPI you're using (check with `pip show zapi`)"
      placeholder: "0.1.0"
    validations:
      required: true

  - type: input
    id: python_version
    attributes:
      label: "Python Version"
      description: "Python version and operating system"
      placeholder: "Python 3.11 on macOS 14.2 or Python 3.9 on Ubuntu 22.04"
    validations:
      required: true

  - type: dropdown
    id: component
    attributes:
      label: "Component"
      description: "Which part of ZAPI is affected?"
      options:
        - Browser Session / Playwright
        - HAR Processing / Analysis
        - LLM Key Management / BYOK
        - LangChain Integration
        - Authentication / OAuth
        - File Upload
        - Other
      default: 0
    validations:
      required: true

  - type: dropdown
    id: environment
    attributes:
      label: "Environment"
      description: "Where did this issue occur?"
      options:
        - Local Development
        - CI/CD Pipeline
        - Docker Container
        - Cloud Deployment
        - Other
      default: 0
    validations:
      required: true

  - type: textarea
    id: description
    attributes:
      label: "Describe the Bug"
      description: "What happened? What did you expect to happen instead?"
      placeholder: |
        When calling `z.launch_browser(url="https://example.com")`, the browser crashes immediately.
        Expected: Browser should launch and navigate to the URL successfully.
    validations:
      required: true

  - type: textarea
    id: reproduction_steps
    attributes:
      label: "Steps to Reproduce"
      description: "Please include exact steps or code to reproduce the issue"
      placeholder: |
        1. Initialize ZAPI with valid credentials
        2. Call `z.launch_browser(url="https://example.com")`
        3. Browser crashes with error
    validations:
      required: true

  - type: textarea
    id: code_snippet
    attributes:
      label: "Minimal Reproducible Example"
      description: "Paste code to reproduce (remove sensitive data like API keys)"
      placeholder: |
        ```python
        from zapi import ZAPI
        
        z = ZAPI()
        session = z.launch_browser(url="https://example.com")
        session.dump_logs("session.har")
        session.close()
        ```
      render: python

  - type: textarea
    id: error_logs
    attributes:
      label: "Error Output / Stack Trace"
      description: "Paste the full error output or traceback"
      render: shell
      placeholder: |
        Traceback (most recent call last):
          File "demo.py", line 10, in <module>
            session = z.launch_browser(url="https://example.com")
          File "zapi/core.py", line 367, in launch_browser
            raise ZAPIError(f"Failed to launch browser session: {error_message}")
        zapi.core.ZAPIError: Failed to launch browser session: ...
    validations:
      required: true

  - type: textarea
    id: evidence
    attributes:
      label: "Evidence / Demo"
      description: "Provide screenshots, video recording, or terminal output showing the issue"
      placeholder: |
        - Screenshot: [Attach image]
        - Video: [Link to Loom/YouTube]
        - Terminal output: [Paste relevant logs]
        - HAR file snippet: [If applicable]

  - type: checkboxes
    id: reproducibility
    attributes:
      label: "Reproducibility"
      description: "How consistently does the bug occur?"
      options:
        - label: "Always reproducible"
        - label: "Intermittent / Sometimes"
        - label: "Happened once, can't reproduce"

  - type: textarea
    id: environment_details
    attributes:
      label: "Environment Details"
      description: "Additional environment information (optional)"
      placeholder: |
        - Playwright version: 1.40.0
        - Browser: Chromium 120.0.6099.109
        - LLM Provider: anthropic
        - Headless mode: True/False

  - type: textarea
    id: additional_context
    attributes:
      label: "Additional Context or Screenshots"
      description: "Add logs, screenshots, HAR files, or related issues if available"

  - type: checkboxes
    id: checklist
    attributes:
      label: "Pre-submission Checklist"
      options:
        - label: "I have searched existing issues to avoid duplicates"
          required: true
        - label: "I have removed sensitive data (API keys, tokens) from code snippets"
          required: true
        - label: "I have tested with the latest version of ZAPI"
          required: false



================================================
FILE: .github/ISSUE_TEMPLATE/config.yml
================================================
blank_issues_enabled: false
contact_links:
  - name: 📚 Documentation
    url: https://github.com/adoptai/zapi/blob/main/README.md
    about: Read the full documentation and usage guides
  - name: 💬 GitHub Discussions
    url: https://github.com/adoptai/zapi/discussions
    about: Ask questions and discuss ideas with the community
  - name: 🌐 Adopt AI Website
    url: https://www.adopt.ai
    about: Visit the Adopt AI website for more information
  - name: 🐦 Follow us on X (Twitter)
    url: https://twitter.com/getadoptai
    about: Stay updated with the latest news and announcements



================================================
FILE: .github/ISSUE_TEMPLATE/feature-request.yml
================================================
name: "🚀 Feature Request"
description: "Suggest a new feature or improvement for ZAPI"
title: "[Feature]: <Short description>"
labels: ["feature-request", "enhancement"]
body:
  - type: markdown
    attributes:
      value: |
        ## 🚀 Feature Request
        Have an idea that can make ZAPI better? Please describe it below as clearly as possible.  
        The more context you give, the easier it is for us to prioritize and implement!

  - type: dropdown
    id: area
    attributes:
      label: "Area of Improvement"
      description: "Which part of ZAPI does this request relate to?"
      options:
        - Browser Session / Playwright Integration
        - HAR Processing / Analysis
        - LLM Provider Support
        - LangChain Integration
        - Authentication / Security
        - API Discovery Features
        - Documentation
        - Developer Experience
        - Other
      default: 0
    validations:
      required: true

  - type: input
    id: feature_title
    attributes:
      label: "Feature Name"
      description: "Short descriptive name for the feature"
      placeholder: "Add support for Gemini LLM provider"
    validations:
      required: true

  - type: textarea
    id: feature_description
    attributes:
      label: "Describe the Feature"
      description: "What would you like to see added or improved?"
      placeholder: |
        I'd like ZAPI to support Google's Gemini API as an LLM provider for API discovery, 
        similar to how it currently supports Anthropic, OpenAI, Google, and Groq.
    validations:
      required: true

  - type: textarea
    id: use_case
    attributes:
      label: "Use Case / Motivation"
      description: "Explain why this feature is valuable. What problem does it solve?"
      placeholder: |
        - My team uses Gemini for all LLM tasks and wants consistency
        - Gemini offers better pricing for our use case
        - We need multi-modal capabilities for API documentation
    validations:
      required: true

  - type: textarea
    id: proposed_solution
    attributes:
      label: "Proposed Solution or API Design (Optional)"
      description: "How would you like this to work? Feel free to propose code examples."
      placeholder: |
        Example usage:
        ```python
        from zapi import ZAPI
        
        z = ZAPI(
            llm_provider="gemini",
            llm_model_name="gemini-1.5-pro",
            llm_api_key="your-gemini-key"
        )
        
        session = z.launch_browser(url="https://example.com")
        # ... rest of workflow
        ```

  - type: dropdown
    id: priority
    attributes:
      label: "Priority (from your perspective)"
      description: "How important is this feature to you?"
      options:
        - Critical - Blocking my workflow
        - High - Would significantly improve my experience
        - Medium - Nice to have
        - Low - Just an idea
      default: 2

  - type: checkboxes
    id: impact_scope
    attributes:
      label: "Who does this impact?"
      options:
        - label: "Python developers using ZAPI"
        - label: "LangChain users"
        - label: "API discovery workflows"
        - label: "HAR processing pipelines"
        - label: "Security / BYOK users"

  - type: textarea
    id: alternatives
    attributes:
      label: "Alternatives Considered"
      description: "Have you considered any workarounds or alternative approaches?"
      placeholder: |
        - Currently using OpenAI but prefer Gemini
        - Manual HAR processing with custom scripts

  - type: textarea
    id: related_issues
    attributes:
      label: "Related Issues / References"
      description: "Link any related GitHub issues, docs, or external resources"
      placeholder: "#42, https://ai.google.dev/gemini-api/docs"

  - type: checkboxes
    id: willingness
    attributes:
      label: "Would you like to contribute to this feature?"
      options:
        - label: "Yes, I can help implement it"
        - label: "Maybe, I can help test or review"
        - label: "No, just sharing the idea"

  - type: textarea
    id: additional_context
    attributes:
      label: "Additional Context"
      description: "Any extra information, mockups, code samples, or screenshots"

  - type: checkboxes
    id: checklist
    attributes:
      label: "Pre-submission Checklist"
      options:
        - label: "I have searched existing issues to avoid duplicates"
          required: true
        - label: "I have checked the documentation to ensure this isn't already supported"
          required: true



================================================
FILE: .github/pull_request_template.md
================================================
## Description

<!-- Provide a clear and concise description of what this PR does -->

## Type of Change

<!-- Check all that apply -->

- [ ] Bug fix (non-breaking change that fixes an issue)
- [ ] New feature (non-breaking change that adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update
- [ ] Code refactoring
- [ ] Performance improvement
- [ ] Test coverage improvement

## Related Issues

<!-- Link related issues using #issue_number -->

Fixes #
Relates to #

## Changes Made

<!-- List the main changes in bullet points -->

- 
- 
- 

## Testing

<!-- Describe how you tested your changes -->

- [ ] Tested with `demo.py`
- [ ] Tested with example scripts
- [ ] Tested error cases
- [ ] Tested with different Python versions
- [ ] Tested browser interactions (if applicable)
- [ ] Tested HAR processing (if applicable)
- [ ] Tested LangChain integration (if applicable)

### Test Environment

- Python version: 
- Operating System: 
- ZAPI version: 

## Evidence / Demo

<!-- Provide evidence that your changes work as expected -->

### Code Snippet / Reproduction

```python
# Paste code demonstrating the fix or feature

```

### Output / Screenshots

<!-- Add screenshots, terminal output, or video demos if applicable -->

```
# Paste relevant output here

```

## Documentation

- [ ] Updated README.md (if needed)
- [ ] Updated docstrings
- [ ] Updated CONTRIBUTING.md (if needed)
- [ ] Added/updated code examples

## Checklist

- [ ] My code follows the project's coding standards
- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] My changes generate no new warnings or errors
- [ ] I have removed any sensitive data (API keys, tokens) from the code
- [ ] I have tested that existing functionality still works
- [ ] I have read the [CONTRIBUTING.md](../CONTRIBUTING.md) guide

## Additional Context

<!-- Add any other context about the PR here -->



================================================
FILE: .github/workflows/ruff-check.yml
================================================
name: Ruff Linting

on:
  pull_request:
    branches:
      - main
      - dev
    paths:
      - '**.py'
      - 'pyproject.toml'
      - 'requirements.txt'
      - '.github/workflows/ruff-check.yml'
  push:
    branches:
      - main
      - dev
    paths:
      - '**.py'
      - 'pyproject.toml'
      - 'requirements.txt'
      - '.github/workflows/ruff-check.yml'

jobs:
  ruff-check:
    runs-on: ubuntu-latest
    
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install Ruff
        run: |
          pip install ruff

      - name: Run Ruff Linter
        run: |
          ruff check . --output-format=github

      - name: Run Ruff Formatter Check
        run: |
          ruff format --check .



================================================
FILE: .gitignore
================================================
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# Virtual environments
.venv/
venv/
env/
ENV/
env.bak/
venv.bak/

# Environment variables
.env

# API credentials
api-headers.json

# IDEs
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store

# Testing
.pytest_cache/
.coverage
htmlcov/
.tox/
.hypothesis/

# HAR files
*.har

# Poetry lock file
poetry.lock

# Playwright
playwright-report/
test-results/

# Temporary files
*.log
*.tmp
.temp/



================================================
FILE: .pre-commit-config.yaml
================================================
# Pre-commit hooks for ZAPI
# See https://pre-commit.com for more information

repos:
  # Ruff - Fast Python linter and formatter
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9
    hooks:
      # Run the linter
      - id: ruff
        args: [--fix]
        types_or: [python, pyi]
      # Run the formatter
      - id: ruff-format
        types_or: [python, pyi]

  # Additional useful hooks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      # Prevent committing large files
      - id: check-added-large-files
        args: ['--maxkb=1000']
      # Check for files that would conflict in case-insensitive filesystems
      - id: check-case-conflict
      # Check for merge conflicts
      - id: check-merge-conflict
      # Check YAML files
      - id: check-yaml
        exclude: ^\.github/workflows/
      # Check TOML files
      - id: check-toml
      # Check JSON files
      - id: check-json
      # Trim trailing whitespace
      - id: trailing-whitespace
        exclude: ^\.github/
      # Ensure files end with newline
      - id: end-of-file-fixer
        exclude: ^\.github/
      # Prevent committing to main/master
      - id: no-commit-to-branch
        args: ['--branch', 'main', '--branch', 'master']


================================================
FILE: CONTRIBUTING.md
================================================
# Contributing to ZAPI

Thank you for your interest in contributing to ZAPI! This document provides guidelines and instructions for contributing to the project.

## Table of Contents

- [Development Setup](#development-setup)
- [Project Structure](#project-structure)
- [Coding Standards](#coding-standards)
- [Documentation Requirements](#documentation-requirements)
- [Pull Request Process](#pull-request-process)
- [Adding New LLM Providers](#adding-new-llm-providers)
- [Testing Guidelines](#testing-guidelines)
- [Release Process](#release-process)

## Development Setup

### Prerequisites

- Python 3.9 or later
- pip (Python package manager)
- Git
- [Playwright](https://playwright.dev/python/) browser binaries

### Getting Started

1. Fork and clone the repository:

   ```bash
   git clone https://github.com/YOUR_USERNAME/zapi.git
   cd zapi
   ```

2. Create a virtual environment (recommended):

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Install Playwright browser binaries:

   ```bash
   playwright install
   ```

5. Set up your environment variables:

   ```bash
   cp .devenv .env
   # Edit .env with your credentials from app.adopt.ai
   ```

6. Install Ruff for linting and formatting:

   ```bash
   pip install ruff
   ```

7. Install pre-commit hooks (recommended):

   ```bash
   pip install pre-commit
   pre-commit install
   ```

   This will automatically run Ruff checks before every commit.

8. Test the installation:

   ```bash
   python demo.py
   ```

### Development Commands

```bash
# Run the demo script
python demo.py

# Run specific examples
python examples/basic_usage.py
python examples/langchain/demo.py

# Test HAR processing
python -c "from zapi import analyze_har_file; analyze_har_file('demo_session.har')"
```

### Code Quality Tools

ZAPI uses [Ruff](https://docs.astral.sh/ruff/) for fast linting and formatting. All PRs are automatically checked via GitHub Actions.

**Run linting checks:**

```bash
# Check for linting issues
ruff check .

# Auto-fix linting issues
ruff check . --fix
```

**Run formatting checks:**

```bash
# Check if code is formatted correctly
ruff format --check .

# Format code automatically
ruff format .
```

**Before submitting a PR:**

```bash
# Option 1: Run pre-commit hooks manually
pre-commit run --all-files

# Option 2: Run Ruff directly
ruff check .
ruff format --check .

# Option 3: Use the pre-commit script
./scripts/pre-commit.sh

# Or fix everything automatically
ruff check . --fix
ruff format .
```

**Configuration:**

Ruff settings are defined in `pyproject.toml`. Key settings:
- Line length: 120 characters
- Target: Python 3.9+
- Enabled rules: pycodestyle, pyflakes, isort, pep8-naming, pyupgrade, flake8-bugbear, and more
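
As one illustration, the `[tool.ruff]` table in `pyproject.toml` might look roughly like this. The rule selection shown is an assumption inferred from the list above, not the project's exact configuration; check `pyproject.toml` for the authoritative settings:

```toml
[tool.ruff]
line-length = 120
target-version = "py39"

[tool.ruff.lint]
# E/W: pycodestyle, F: pyflakes, I: isort, N: pep8-naming,
# UP: pyupgrade, B: flake8-bugbear
select = ["E", "W", "F", "I", "N", "UP", "B"]
```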

### Pre-commit Hooks

ZAPI uses [pre-commit](https://pre-commit.com/) to automatically run checks before commits:

**Setup (one-time):**
```bash
pip install pre-commit
pre-commit install
```

**What it does:**
- ✅ Runs Ruff linter with auto-fix
- ✅ Runs Ruff formatter
- ✅ Checks for large files (>1MB)
- ✅ Checks YAML, TOML, JSON syntax
- ✅ Trims trailing whitespace
- ✅ Prevents commits to main/master

**Manual run:**
```bash
# Run on all files
pre-commit run --all-files

# Run on staged files only
pre-commit run

# Use the standalone script
./scripts/pre-commit.sh
```

**Skip hooks (not recommended):**
```bash
git commit --no-verify
```

## Project Structure

```
zapi/
├── zapi/                      # Main package directory
│   ├── __init__.py           # Package exports
│   ├── core.py               # ZAPI class, OAuth, BYOK encryption
│   ├── session.py            # BrowserSession with Playwright
│   ├── auth.py               # Authentication handlers
│   ├── providers.py          # LLM provider validation
│   ├── encryption.py         # AES-256-GCM key encryption
│   ├── har_processing.py     # HAR analysis and filtering
│   ├── utils.py              # Helper utilities
│   ├── constants.py          # Configuration constants
│   ├── exceptions.py         # Custom exception classes
│   └── integrations/
│       └── langchain/
│           └── tool.py       # LangChain tool integration
├── examples/                  # Example scripts
│   ├── basic_usage.py
│   ├── async_usage.py
│   └── langchain/
│       ├── demo.py           # Interactive LangChain demo
│       └── README.md         # LangChain integration guide
├── docs/                      # Documentation
├── demo.py                    # End-to-end demo script
├── requirements.txt           # Python dependencies
├── pyproject.toml            # Package metadata
├── setup.py                  # Setup script
├── README.md                 # Main documentation
└── CONTRIBUTING.md           # This file
```

### Key Modules

| Module | Purpose |
|--------|---------|
| `zapi/core.py` | Main `ZAPI` class with credential loading, OAuth token exchange, BYOK encryption, HAR upload, and API documentation fetching |
| `zapi/session.py` | `BrowserSession` wrapper around Playwright with auth injection, HAR recording, navigation helpers, and error handling |
| `zapi/providers.py` | LLM provider validation for Anthropic, OpenAI, Google, and Groq with format-specific checks |
| `zapi/encryption.py` | `LLMKeyEncryption` class using AES-256-GCM for secure key storage |
| `zapi/har_processing.py` | `HarProcessor` for filtering static assets, analyzing API calls, and cost estimation |
| `zapi/integrations/langchain/tool.py` | `ZAPILangchainTool` for converting documented APIs into LangChain tools |

## Coding Standards

### Python Style Guide

1. Follow [PEP 8](https://pep8.org/) style guidelines
2. Use type hints for all function parameters and return values
3. Use docstrings for all public classes, methods, and functions
4. Keep functions focused and under 50 lines when possible
5. Use meaningful variable and function names
6. Prefer explicit over implicit
7. Use `pathlib.Path` for file operations
8. Use f-strings for string formatting

### File Headers

Every Python module should include a docstring at the top:

```python
"""Module description.

Detailed explanation of what this module does and how it fits
into the larger ZAPI architecture.
"""
```

### Function Documentation

Every public function must include a docstring with:

1. Brief description
2. Args section with type hints
3. Returns section
4. Raises section for exceptions
5. Example usage (for user-facing functions)

Example:

```python
def analyze_har_file(
    har_file_path: str,
    save_filtered: bool = False,
    filtered_output_path: Optional[str] = None
) -> Tuple[HarStats, str, Optional[str]]:
    """
    Analyze a HAR file and generate statistics.

    This function loads a HAR file, filters out static assets,
    and provides cost/time estimates for API discovery processing.

    Args:
        har_file_path: Path to the HAR file to analyze
        save_filtered: Whether to save filtered HAR with only API entries
        filtered_output_path: Custom path for filtered HAR (optional)

    Returns:
        Tuple of (statistics, formatted_report, filtered_file_path)

    Raises:
        HarProcessingError: If HAR file is invalid or cannot be processed
        FileNotFoundError: If HAR file does not exist

    Example:
        >>> stats, report, filtered = analyze_har_file("session.har", save_filtered=True)
        >>> print(f"API entries: {stats.valid_entries}")
        >>> print(f"Estimated cost: ${stats.estimated_cost_usd:.2f}")
    """
    # Implementation
```

### Error Handling

1. Use custom exception classes from `zapi/exceptions.py`
2. Provide meaningful error messages
3. Include context in error messages (e.g., file paths, URLs)
4. Document all exceptions in function docstrings
5. Use try-except blocks appropriately
6. Log errors when appropriate

Example:

```python
import requests

from .exceptions import ZAPIValidationError, ZAPINetworkError

def upload_har(self, har_file: str):
    """Upload HAR file to ZAPI service."""
    try:
        with open(har_file, 'rb') as f:
            # Upload logic
            pass
    except FileNotFoundError:
        raise ZAPIValidationError(f"HAR file not found: '{har_file}'")
    except requests.exceptions.ConnectionError:
        raise ZAPINetworkError(
            "Cannot connect to ZAPI service. "
            "Please check your internet connection."
        )
```

### Code Organization

1. Group imports in this order:
   - Standard library imports
   - Third-party imports
   - Local application imports
2. Use blank lines to separate logical sections
3. Keep related functionality together
4. Extract complex logic into helper functions
5. Use constants for magic numbers and strings

## Documentation Requirements

### Module Documentation

Each module should have:

1. Clear docstring explaining its purpose
2. Usage examples for public APIs
3. Type hints for all functions
4. Inline comments for complex logic

### README Updates

When adding new features:

1. Update the main README.md with usage examples
2. Add to the appropriate section (Quick Start, API Reference, etc.)
3. Include code examples that users can copy-paste
4. Update the Table of Contents if adding new sections

### Example Scripts

When creating example scripts:

1. Add them to the `examples/` directory
2. Include a header comment explaining what the example demonstrates
3. Make examples self-contained and runnable
4. Use clear variable names and comments
5. Handle errors gracefully with informative messages

## Pull Request Process

1. Create a feature branch from `dev`:

   ```bash
   git checkout dev
   git pull origin dev
   git checkout -b feature/your-feature-name
   ```

2. Make your changes following the coding standards

3. Test your changes thoroughly:
   - Run existing examples to ensure no regressions
   - Test error cases
   - Test with different Python versions if possible

4. Update documentation:
   - Add/update docstrings
   - Update README.md if needed
   - Add example usage if applicable

5. Commit your changes with clear messages:

   ```bash
   git add .
   git commit -m "Add feature: brief description"
   ```

6. Push to your fork and create a pull request:

   ```bash
   git push origin feature/your-feature-name
   ```

7. In your pull request description:
   - Explain what the change does
   - Link to any related issues
   - Include screenshots/examples if applicable
   - List any breaking changes

8. Wait for review and address feedback

### Pull Request Guidelines

- Keep PRs focused on a single feature or fix
- Write clear commit messages
- Include tests if applicable
- Update documentation
- **Ensure code passes Ruff checks** (`ruff check .` and `ruff format --check .`)
- Respond to review comments promptly

**Note:** All PRs are automatically checked by GitHub Actions for code quality using Ruff. Make sure to run the checks locally before submitting to avoid CI failures.

## Adding New LLM Providers

To add support for a new LLM provider:

1. Update `zapi/providers.py`:

   ```python
   class LLMProvider(Enum):
       # ... existing providers ...
       NEW_PROVIDER = "newprovider"
   ```

2. Add validation logic in `_validate_key_format()`:

   ```python
   elif provider == LLMProvider.NEW_PROVIDER.value:
       if not api_key.startswith("expected-prefix-"):
           raise LLMKeyException("NewProvider API keys must start with 'expected-prefix-'")
       if len(api_key) < 20:
           raise LLMKeyException("NewProvider API keys must be at least 20 characters long")
   ```

3. Update `get_supported_providers_info()`:

   ```python
   "newprovider": {
       "display_name": "NewProvider",
       "support_level": "main",
       "description": "Fully supported with complete validation"
   }
   ```

4. Update documentation:
   - Add provider to README.md supported providers list
   - Add example usage in Environment Setup section
   - Update `zapi/utils.py` if needed for environment variable mapping

5. Test the new provider:
   - Test key validation
   - Test encryption/decryption
   - Test with actual API calls if possible

## Testing Guidelines

### Manual Testing

1. Test with the demo script:
   ```bash
   python demo.py
   ```

2. Test specific features:
   ```bash
   # Test HAR analysis
   python -c "from zapi import analyze_har_file; print(analyze_har_file('demo_session.har'))"

   # Test LangChain integration
   python examples/langchain/demo.py
   ```

3. Test error cases:
   - Invalid credentials
   - Invalid URLs
   - Missing files
   - Network errors

### Testing Checklist

Before submitting a PR, verify:

- [ ] Code runs without errors
- [ ] All examples still work
- [ ] Error messages are clear and helpful
- [ ] Documentation is updated
- [ ] No sensitive data in code or commits
- [ ] **Code passes Ruff linting** (`ruff check .`)
- [ ] **Code is properly formatted** (`ruff format --check .`)
- [ ] New features have usage examples

## Release Process

ZAPI follows semantic versioning (MAJOR.MINOR.PATCH):

- **MAJOR**: Breaking changes
- **MINOR**: New features (backward compatible)
- **PATCH**: Bug fixes (backward compatible)

### Creating a Release

1. Update version in `pyproject.toml` and `setup.py`
2. Update `__version__` in `zapi/__init__.py`
3. Update CHANGELOG.md (if exists) with changes
4. Create a release commit:
   ```bash
   git commit -am "Release v0.2.0"
   ```
5. Create a tag:
   ```bash
   git tag v0.2.0
   git push origin v0.2.0
   ```
6. Create a GitHub release with release notes
7. Publish to PyPI (maintainers only):
   ```bash
   python -m build
   python -m twine upload dist/*
   ```

## Questions and Support

- **Issues**: [GitHub Issues](https://github.com/adoptai/zapi/issues)
- **Discussions**: [GitHub Discussions](https://github.com/adoptai/zapi/discussions)
- **Website**: [adopt.ai](https://www.adopt.ai)
- **Twitter**: [@getadoptai](https://twitter.com/getadoptai)
- **LinkedIn**: [Adopt AI](https://www.linkedin.com/company/getadoptai)

## Code of Conduct

- Be respectful and inclusive
- Provide constructive feedback
- Focus on what is best for the community
- Show empathy towards other contributors

## License

By contributing to ZAPI, you agree that your contributions will be licensed under the MIT License.

Copyright (c) 2025 AdoptAI

See [LICENSE](LICENSE) file for full license text.

---

Thank you for contributing to ZAPI! Your contributions help make API discovery and LLM integration easier for everyone. 🚀


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2025 AdoptAI

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: MANIFEST.in
================================================
# Include important files in the distribution
include README.md
include LICENSE
include requirements.txt
include CONTRIBUTING.md

# Include all Python files in the package
recursive-include zapi *.py

# Include examples
recursive-include examples *.py

# Exclude development and build artifacts
global-exclude __pycache__
global-exclude *.py[cod]
global-exclude *.so
global-exclude .DS_Store
global-exclude *.har


================================================
FILE: README.md
================================================
<h3 align="center">
  <a name="readme-top"></a>
  <img
    src="https://asset.adopt.ai/web/icons/github_banner.png">
</h3>
<div align="center">
<a href="https://GitHub.com/adoptai/zapi/graphs/contributors">
  <img src="https://img.shields.io/github/contributors/adoptai/zapi.svg" alt="GitHub Contributors">
</a>
<a href="https://www.adopt.ai">
  <img src="https://img.shields.io/badge/Visit-Adopt.AI-gr" alt="Visit Adopt AI">
</a>
</div>
<div>
  <p align="center">
    <a href="https://twitter.com/getadoptai">
      <img src="https://img.shields.io/badge/Follow%20on%20X-000000?style=for-the-badge&logo=x&logoColor=white" alt="Follow on X" />
    </a>
    <a href="https://www.linkedin.com/company/getadoptai">
      <img src="https://img.shields.io/badge/Follow%20on%20LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white" alt="Follow on LinkedIn" />
    </a>
  </p>
</div>

# ZAPI - Zero-Shot API Discovery

ZAPI by Adopt AI is an open-source Python library that automatically captures network traffic and API calls from web applications. Use it for API discovery, LLM training datasets, advanced API security analysis, and debugging complex browser workflows.

## Highlights
- Automated Playwright-powered browser sessions that inject auth tokens, capture traffic, export HAR logs, and upload them securely.
- Built-in HAR filtering that excludes static assets, surfaces API-only entries, and provides upfront cost/time estimates before processing.
- LangChain integration that converts documented APIs into ready-to-use tools, complete with type-safe schemas and optional custom headers.
- Bring Your Own Key (BYOK) support for **Anthropic**, **OpenAI**, **Google**, and **Groq**, with AES-256-GCM encryption for every credential.
- Comprehensive API reference, error handling helpers, and secure credential loading utilities so you can extend ZAPI safely.

## Table of Contents
- [Requirements & Installation](#requirements--installation)
- [Environment Setup](#environment-setup)
- [Project Structure](#project-structure)
- [Quick Start](#quick-start)
- [HAR Analysis & Cost Estimation](#har-analysis--cost-estimation)
- [LangChain Integration](#langchain-integration)
- [API Reference](#api-reference)
- [Security & BYOK](#security--byok)
- [Enhanced Discovery Workflow](#enhanced-discovery-workflow)
- [Troubleshooting & Tips](#troubleshooting--tips)
- [Contributing](#contributing)

## Requirements & Installation

ZAPI targets **Python 3.9+**, **Playwright 1.40.0+**, and **cryptography 41.0.0+**.

```bash
# Install dependencies
pip install -r requirements.txt

# Install browser binaries (REQUIRED)
playwright install
```

**Test the installation**

```bash
python demo.py
```

## Project Structure

| Path | Purpose |
|------|---------|
| `zapi/core.py` | Home of the `ZAPI` class. Handles credential loading (`load_zapi_credentials()`), OAuth token exchange, BYOK encryption via `LLMKeyEncryption`, LangChain key propagation, and helper methods like `upload_har()` and `get_documented_apis()`. |
| `zapi/session.py` | Contains the `BrowserSession` abstraction that wraps Playwright. Manages auth header injection, HAR recording, navigation helpers (`navigate`, `click`, `fill`, `wait_for`), and robust error handling plus synchronous wrappers. |
| `demo.py` | End-to-end workflow script wired to the modules above. Launches a browser, lets you interact manually, saves the HAR (`session.dump_logs`), runs `analyze_har_file(..., save_filtered=True)`, lets you pick the filtered HAR, and finally calls `ZAPI.upload_har()`. Tweak `DEMO_URL`, `OUTPUT_FILE`, and `HEADLESS_BROWSER` at the top before running. |
| `examples/langchain/` | LangChain integration docs and demo agent showing how `z.get_zapi_tools()` converts documented APIs into LangChain tools. |

Use this as a map when extending ZAPI or debugging the flow.

## Environment Setup

1. Sign up at [app.adopt.ai](https://app.adopt.ai) to obtain your `ADOPT_CLIENT_ID`, `ADOPT_SECRET_KEY`, and BYOK token credentials before running ZAPI.
2. Copy the example environment file and add your secrets:

```bash
cp .devenv .env
```

3. Populate the `.env` file with the required variables:
     ```env
     # Required environment variables
     LLM_API_KEY=your_llm_api_key_here
     LLM_PROVIDER=anthropic                    # anthropic, openai, google, groq
     LLM_MODEL_NAME=your_model_name_here      # Use the latest available model for your provider
     ADOPT_CLIENT_ID=your_client_id_here       # Get from app.adopt.ai
     ADOPT_SECRET_KEY=your_secret_key_here     # Get from app.adopt.ai
     YOUR_API_URL=your_api_url_here            # Custom API URL
     ```

Use `load_llm_credentials()` (provided in the library) to load secrets safely when building custom tooling.
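For intuition, a loader like this might do little more than read the variables above and fail fast when one is missing. This is an illustrative sketch, not the library's actual implementation:

```python
import os

# Illustrative only: the real load_llm_credentials() ships with ZAPI.
# Variable names mirror the .env template above.
def load_llm_credentials_sketch() -> tuple:
    """Read LLM settings from the environment, failing fast when one is missing."""
    required = ("LLM_PROVIDER", "LLM_API_KEY", "LLM_MODEL_NAME")
    missing = [name for name in required if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return tuple(os.environ[name] for name in required)
```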

## Quick Start

### Launch, capture, analyze, and upload

```python
from zapi import ZAPI, analyze_har_file

# Initialize ZAPI (automatically loads from .env file)
z = ZAPI()

# Launch browser and capture traffic
session = z.launch_browser(url="https://app.example.com/dashboard")

# Export network logs
session.dump_logs("session.har")

# Analyze HAR file before upload (optional but recommended)
stats, report, _ = analyze_har_file("session.har")
print(f"API entries: {stats.valid_entries}, Estimated cost: ${stats.estimated_cost_usd:.2f}")

# Upload for enhanced API discovery
if input("Upload? (y/n): ").lower() == 'y':
    z.upload_har("session.har")
    print("Upload completed!")

session.close()
```

> Prefer `python demo.py` for the full interactive experience. The script calls the same primitives shown above but adds guardrails: manual browser driving, HAR filtering, filtered/original upload prompts, and descriptive exception handling for every component (`ZAPI`, `BrowserSession`, HAR processing, networking, etc.).

### LLM key management

```python
from zapi import ZAPI

# Initialize ZAPI (loads configuration from .env)
z = ZAPI()

# Check configuration
print(f"Provider: {z.get_llm_provider()}")        # 'anthropic'
print(f"Model: {z.get_llm_model_name()}")         # Your configured model name
print(f"Has key: {z.has_llm_key()}")              # True

# Update LLM configuration after initialization
z.set_llm_key("openai", "sk-your-openai-key", "gpt-4")

# Access encrypted key (for debugging)
encrypted_key = z.get_encrypted_llm_key()
decrypted_key = z.get_decrypted_llm_key()  # Use carefully
```

### Error handling example

```python
from zapi import ZAPI, ZAPIAuthenticationError, ZAPINetworkError, ZAPIValidationError

try:
    z = ZAPI(
        client_id="invalid",
        secret="invalid",
        llm_provider="anthropic",
        llm_model_name="your-model-name",  # Use the latest available model for your provider
        llm_api_key="invalid-key"
    )
except ZAPIAuthenticationError as e:
    print(f"Authentication failed: {e}")
except ZAPIValidationError as e:
    print(f"Input validation error: {e}")
except ZAPINetworkError as e:
    print(f"Network error: {e}")
```

## HAR Analysis & Cost Estimation

ZAPI ships with a HAR analyzer that filters out static assets, surfaces API-only calls, and estimates processing cost/time before you upload.

```python
from zapi import analyze_har_file, HarProcessingError

try:
    stats, report, filtered_file = analyze_har_file(
        "session.har",
        save_filtered=True,                 # Save filtered version with only API entries
        filtered_output_path="api_only.har" # Optional custom path
    )

    print(f"Total entries: {stats.total_entries:,}")
    print(f"API-relevant entries: {stats.valid_entries:,}")
    print(f"Unique domains: {stats.unique_domains:,}")
    print(f"Estimated cost: ${stats.estimated_cost_usd:.2f}")
    print(f"Estimated time: {stats.estimated_time_minutes:.1f} minutes")

    print("\nSkipped entries by reason:")
    for reason, count in stats.skipped_by_reason.items():
        if count > 0:
            print(f"  {reason.replace('_', ' ').title()}: {count:,}")

    print("\n" + report)

except HarProcessingError as e:
    print(f"HAR analysis failed: {e}")
```

## LangChain Integration

ZAPI converts documented APIs into LangChain-compatible tools, so your agents can reason over real endpoints immediately.

```python
from langchain.agents import create_agent
from zapi import ZAPI, interactive_chat

z = ZAPI()
agent = create_agent(
    z.get_llm_model_name(),
    z.get_zapi_tools(),  # One-liner to fetch and build all tools
    system_prompt="You are a helpful assistant with access to APIs."
)

interactive_chat(agent)
```

Run the interactive demo any time:

```bash
python examples/langchain/demo.py
```

**Tool anatomy**

- `z.get_zapi_tools()` returns standard LangChain `Tool` objects (name, description, args schema) built from your documented APIs.
- Tools automatically display which authentication headers were loaded (values stay hidden for security) so you always know what context the agent has.
- Execution is routed through ZAPI, letting the agent call your APIs with consistent authentication, logging, and error handling.

**Optional API headers**

Create `api-headers.json` in the repository root when you need to pass custom auth to all generated tools:

```json
{
  "headers": {
    "Authorization": "Bearer your-api-token-here",
    "X-API-Key": "your-api-key-here",
    "X-Client-ID": "your-client-id-here"
  }
}
```

Short variants:

**Bearer token**
```json
{
  "headers": {
    "Authorization": "Bearer sk_live_abc123..."
  }
}
```

**API key**
```json
{
  "headers": {
    "X-API-Key": "your_api_key_here",
    "X-Client-ID": "your_client_id"
  }
}
```

**Custom headers**
```json
{
  "headers": {
    "X-Custom-Auth": "custom_value",
    "X-Organization": "org_123",
    "X-Tenant": "tenant_456"
  }
}
```

ZAPI will load the file automatically, hide secret values in logs, and apply the headers to every LangChain tool call. See the dedicated [LangChain Integration Guide](examples/langchain/README.md) for a deeper walkthrough, troubleshooting tips, and additional examples.

## API Reference

### ZAPI class

`ZAPI(client_id, secret, llm_provider, llm_model_name, llm_api_key)`

- `client_id` / `secret`: OAuth credentials from Adopt AI.
- `llm_provider`: `"groq"`, `"anthropic"`, `"openai"`, or `"google"`.
- `llm_model_name`: Any model identifier your provider supports; check your provider's documentation for the latest model names.
- `llm_api_key`: Provider-specific API key (encrypted immediately per organization context).

Key methods:

- `launch_browser(url, headless=True, **playwright_options)`: Returns a `BrowserSession` that injects auth tokens into every request.
- `set_llm_key(provider, api_key, model_name)`: Update provider credentials on the fly; keys are encrypted instantly.
- `get_llm_provider()`, `get_llm_model_name()`, `has_llm_key()`: Inspect the active LLM configuration.
- `get_encrypted_llm_key()`, `get_decrypted_llm_key()`: Access credential blobs when you must debug (handle decrypted values carefully).
- `upload_har(filepath)`: Upload a HAR file with metadata for enhanced API discovery.
- `get_documented_apis(page=1, page_size=10)`: Fetch paginated API documentation from the Adopt AI platform.

### BrowserSession class

| Method | Description |
|--------|-------------|
| `navigate(url, wait_until="networkidle")` | Navigate to a URL. |
| `click(selector, **kwargs)` | Click an element with Playwright under the hood. |
| `fill(selector, value, **kwargs)` | Type into an input or textarea. |
| `wait_for(selector=None, timeout=None)` | Wait for a selector or a timeout. |
| `dump_logs(filepath)` | Export HAR traffic for later analysis. |
| `close()` | Close the browser and clean up resources. |
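Strung together, the methods above cover a full scripted capture. The URL and selectors below are placeholders, and `session` is assumed to come from `z.launch_browser(...)`:

```python
# Hypothetical flow exercising the BrowserSession methods above.
def capture_login_flow(session, email: str, password: str) -> None:
    """Drive a simple login, then export the captured traffic."""
    session.navigate("https://app.example.com/login", wait_until="networkidle")
    session.fill("#email", email)            # type into the email input
    session.fill("#password", password)      # type into the password input
    session.click("button[type=submit]")     # submit the form
    session.wait_for(selector=".dashboard")  # wait until the dashboard renders
    session.dump_logs("login-flow.har")      # export HAR for analysis
    session.close()                          # clean up browser resources
```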

## Security & BYOK

- ZAPI requires valid BYOK credentials to unlock enhanced discovery; every key is encrypted with **AES-256-GCM** as soon as it is provided.
- No plaintext keys are stored in memory or logs, and transmission to the Adopt AI discovery service is secured with per-organization isolation.
- Configure any supported provider by passing `(provider, model_name, api_key)` to `set_llm_key()` or by using the `.env` helpers.
- `load_llm_credentials()` ensures secrets are loaded from disk without exposing them in code.
- Providers currently supported: **Anthropic**, **OpenAI**, **Google**, **Groq**.
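For context, the snippet below shows the AES-256-GCM primitive (from the `cryptography` dependency) that authenticated key encryption is built on. This is not ZAPI's internal `LLMKeyEncryption` code; the key, nonce handling, and associated data here are placeholders:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustration only: the AES-256-GCM roundtrip underlying encrypted keys.
key = AESGCM.generate_key(bit_length=256)  # 32-byte key
nonce = os.urandom(12)                     # must be unique per encryption
aad = b"org-context"                       # binds ciphertext to a context
aes = AESGCM(key)

ciphertext = aes.encrypt(nonce, b"sk-ant-example-key", aad)
plaintext = aes.decrypt(nonce, ciphertext, aad)
assert plaintext == b"sk-ant-example-key"
```

Decryption fails with an exception if the ciphertext, nonce, or associated data is tampered with, which is what makes GCM authenticated rather than just confidential.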

## Enhanced Discovery Workflow

When you bring your own LLM API key, ZAPI unlocks deeper API insights:

**When to use BYOK**

- Building LLM training datasets from API interactions.
- Generating comprehensive API documentation.
- Performing advanced API security analysis.
- Understanding complex application workflows end to end.
- Creating intelligent API testing scenarios.
- Budgeting API discovery sessions with upfront estimates.

**Example enhanced workflow**

```python
from zapi import ZAPI, analyze_har_file

z = ZAPI()

session = z.launch_browser(url="https://app.example.com")
# ... navigate and interact ...
session.dump_logs("session.har")

stats, report, _ = analyze_har_file("session.har")
print(f"Found {stats.valid_entries} API entries")
print(f"Estimated cost: ${stats.estimated_cost_usd:.2f}")
print(f"Estimated time: {stats.estimated_time_minutes:.1f} minutes")

z.upload_har("session.har")
session.close()
```

## Troubleshooting & Tips

- If `HarProcessingError` appears, the HAR file is malformed or contains unsupported entries—rerun the capture or inspect the skipped reasons in the report.
- ZAPI proceeds without authentication headers when `api-headers.json` is missing; add it only when needed and validate the JSON beforehand.
- Tools will mention which headers were loaded, but the values stay hidden so you can safely confirm configuration without exposing secrets.
- Always rerun `playwright install` after upgrading browsers or moving to a new machine.
- Use `get_documented_apis()` to verify connectivity with the Adopt AI backend before launching long capture sessions.
- Keep `.env` out of version control and rotate your BYOK tokens regularly through [app.adopt.ai](https://app.adopt.ai).

## Contributing

We welcome contributions from the community! Whether you're fixing bugs, adding features, improving documentation, or adding support for new LLM providers, your help is appreciated.

**Get Started:**
- Read our [Contributing Guide](CONTRIBUTING.md) for development setup, coding standards, and pull request guidelines
- Check out [open issues](https://github.com/adoptai/zapi/issues) for tasks to work on
- Join discussions on [GitHub Discussions](https://github.com/adoptai/zapi/discussions)

**Quick Links:**
- [Development Setup](CONTRIBUTING.md#development-setup)
- [Project Structure](CONTRIBUTING.md#project-structure)
- [Adding New LLM Providers](CONTRIBUTING.md#adding-new-llm-providers)
- [Pull Request Process](CONTRIBUTING.md#pull-request-process)

By contributing to ZAPI, you agree that your contributions will be licensed under the MIT License.


================================================
FILE: demo.py
================================================
#!/usr/bin/env python
"""ZAPI Demo Script showing capture, analysis, and upload."""

from pathlib import Path
from typing import Optional

from zapi import (
    ZAPI,
    BrowserInitializationError,
    BrowserNavigationError,
    BrowserSessionError,
    HarProcessingError,
    ZAPIAuthenticationError,
    ZAPIError,
    ZAPINetworkError,
    ZAPIValidationError,
    analyze_har_file,
)

# ---------------------------------------------------------------------------
# Quick configuration – edit these defaults before running the script.
# ---------------------------------------------------------------------------
DEMO_URL = "<INSERT_URL_HERE>"
OUTPUT_FILE = Path("demo_session.har")
HEADLESS_BROWSER = False


def record_session(zapi_client: ZAPI, url: str, output_path: Path) -> None:
    """Record a HAR file by letting the user drive the browser."""
    print(f"🌐 Launching browser and navigating to: {url}")
    session = zapi_client.launch_browser(url=url, headless=HEADLESS_BROWSER)
    try:
        print("✅ Browser launched successfully!")
        input("📋 Use the browser freely, then press ENTER to save the HAR...")

        print("💾 Saving session logs...")
        session.dump_logs(str(output_path))
        print(f"✅ Session saved to: {output_path}")
    finally:
        session.close()
        print("🧹 Browser session closed.")


def analyze_har_file_with_filter(source_path: Path) -> Optional[Path]:
    """Analyze the HAR and produce a filtered file for API-only calls."""
    print("\n🔍 Analyzing HAR file...")
    try:
        stats, report, filtered_path = analyze_har_file(str(source_path), save_filtered=True)
    except HarProcessingError as exc:
        print(f"⚠️ HAR analysis failed: {exc}")
        print("   Continuing with the original HAR.")
        return None

    print("\n📊 HAR Analysis Results:")
    print(f"   ✅ API-relevant entries: {stats.valid_entries:,}")
    print(f"   💰 Estimated cost: ${stats.estimated_cost_usd:.2f}")
    print(f"   ⏱️  Estimated processing time: {round(stats.estimated_time_minutes)} minutes")
    if filtered_path:
        print(f"   🧹 Filtered HAR saved to: {filtered_path}")
    return Path(filtered_path).resolve() if filtered_path else None


def pick_upload_file(original_path: Path, filtered_path: Optional[Path]) -> Path:
    """Interactively choose whether to upload the original or filtered HAR."""
    if filtered_path:
        print("\nYou now have two files available:")
        print(f"  1. Original HAR : {original_path}")
        print(f"  2. Filtered HAR : {filtered_path}")
        choice = input("Upload filtered HAR? (Y/n): ").strip().lower()
        if choice in ("", "y", "yes"):
            print("📤 Using filtered HAR for upload.")
            return filtered_path
        print("📤 Using original HAR for upload.")
        return original_path

    print("\nFiltered HAR not available, defaulting to the original file.")
    return original_path


def main() -> int:
    print("🚀 Starting ZAPI demo...")
    url = DEMO_URL
    output_path = OUTPUT_FILE.expanduser().resolve()

    try:
        z = ZAPI()
        record_session(z, url, output_path)

        filtered_path = analyze_har_file_with_filter(output_path)
        upload_path = pick_upload_file(output_path, filtered_path)

        confirm = input("\n💡 Ready to upload. Press ENTER to continue or 'n' to cancel: ").strip().lower()
        if confirm in {"n", "no"}:
            print("⏹️ Upload cancelled by user.")
            return 0

        print("\n☁️ Uploading HAR file...")
        z.upload_har(str(upload_path))
        print("✅ HAR file uploaded successfully!")
        print("🎉 Demo completed successfully!")

    except ZAPIValidationError as e:
        print("❌ Configuration Validation Error:")
        print(f"   {str(e)}")
        print("💡 Please check your input values:")
        print(f"   - URL: '{url}' (should be like 'https://example.com')")
        print(f"   - Output file: '{output_path}' (should end with '.har')")
        print("   Make sure to replace placeholder values with actual ones.")
        return 1

    except ZAPIAuthenticationError as e:
        print("❌ Authentication Error:")
        print(f"   {str(e)}")
        print("💡 Please check your credentials:")
        print("   - Make sure your account is active and has proper permissions")
        return 1

    except ZAPINetworkError as e:
        print("❌ Network Error:")
        print(f"   {str(e)}")
        print("💡 This might be due to:")
        print("   - Internet connectivity issues")
        print("   - ZAPI service being temporarily unavailable")
        print("   - Firewall or proxy blocking the connection")
        print("   - DNS resolution problems")
        return 1

    except BrowserNavigationError as e:
        print("❌ Browser Navigation Error:")
        print(f"   {str(e)}")
        print("💡 Common solutions:")
        print(f"   - Check URL format: '{url}'")
        print("   - Ensure the website is accessible")
        print("   - Try a different URL for testing")
        print("   - Check your internet connection")
        return 1

    except BrowserInitializationError as e:
        print("❌ Browser Initialization Error:")
        print(f"   {str(e)}")
        print("💡 This might be due to:")
        print("   - Missing browser dependencies (try: playwright install)")
        print("   - System permissions issues")
        print("   - Insufficient system resources")
        return 1

    except BrowserSessionError as e:
        print("❌ Browser Session Error:")
        print(f"   {str(e)}")
        print("💡 Try the following:")
        print("   - Restart the script")
        print("   - Check if the browser window is responsive")
        print("   - Ensure sufficient disk space for HAR files")
        return 1

    except HarProcessingError as e:
        print("❌ HAR Processing Error:")
        print(f"   {str(e)}")
        print("💡 This error occurred during HAR file analysis:")
        print("   - Check if the HAR file was generated correctly")
        print("   - Ensure the file is not corrupted or empty")
        print("   - Try generating a new session")
        return 1

    except ZAPIError as e:
        print("❌ ZAPI Error:")
        print(f"   {str(e)}")
        print("💡 This is a general ZAPI error. Please check your configuration.")
        return 1

    except Exception as e:
        print("❌ Unexpected Error:")
        print(f"   {str(e)}")
        print("💡 This is an unexpected error. Please:")
        print("   - Check all your input values")
        print("   - Try running the script again")
        print("   - Contact support if the issue persists")
        return 1

    return 0


if __name__ == "__main__":
    exit(main())


================================================
FILE: docs/introduction.md
================================================
# Introducing ZAPI - Zero-Config API Intelligence

**3 min read**

_Automatically discover, capture, and document APIs from any web application_

We're excited to introduce **ZAPI** - an open-source Python library that automatically captures network traffic and API calls from web applications. Perfect for API discovery, creating LLM training datasets, and understanding how web applications communicate with their backends.

ZAPI makes it easy to:

* **Capture network traffic** from any web application automatically
* **Export HAR files** compatible with Chrome DevTools and other analysis tools
* **Upload and document APIs** to the adopt.ai platform
* **Interact with web pages** using simple Python commands
* **Run headless or visible** browser sessions for debugging
* **Retrieve documented APIs** with pagination support

## Installation

Install ZAPI and its dependencies:

```bash
pip install -r requirements.txt

# Install browser binaries (REQUIRED)
playwright install
```

**Requirements:** Python 3.9+, Playwright 1.40.0+

## Quick Start

### 1. Get Your API Credentials

ZAPI uses OAuth authentication with the adopt.ai platform and supports LLM integration. You'll need:
- A `client_id`
- A `secret` key
- An LLM `provider` (anthropic, openai, google, or groq)
- An LLM `api_key` for your chosen provider
- An LLM `model_name` (use the latest available model for your provider - check your provider's documentation for current model names)

**Getting your client_id and secret:**
Sign up at [app.adopt.ai](https://app.adopt.ai) to get your OAuth credentials.

Add these to your environment or use them directly in your code.

### 2. Your First API Capture

Start ZAPI with just a few lines of code:

```python
from zapi import ZAPI

# Initialize with client credentials and LLM configuration
z = ZAPI(
    client_id="YOUR_CLIENT_ID",
    secret="YOUR_SECRET",
    llm_provider="anthropic",
    llm_api_key="sk-ant-YOUR_API_KEY",
    llm_model_name="your-model-name"  # Use the latest available model for your provider
)

# Launch browser and capture traffic
session = z.launch_browser(url="https://app.example.com/dashboard")

# Export network logs
session.dump_logs("session.har")
session.close()
```

The library will:
1. Authenticate with the adopt.ai OAuth API
2. Encrypt your LLM API key for secure tool ingestion
3. Launch a browser with automatic token injection
4. Capture all network traffic during your session
5. Export everything to standard HAR format with encrypted LLM metadata

### 3. Test Your Installation

You can also load credentials from a `.env` file:

```bash
# Create .env file with your credentials
echo "LLM_PROVIDER=anthropic" >> .env
echo "LLM_API_KEY=sk-ant-your-key-here" >> .env
echo "LLM_MODEL_NAME=your-model-name" >> .env  # Use the latest available model for your provider
```

Run the demo script to verify everything works:

```bash
python demo.py
```

## LLM Integration & Security

### Supported LLM Providers

ZAPI supports four LLM providers with full validation:

- **Anthropic**
- **OpenAI**
- **Google**
- **Groq**

### Secure Key Encryption

All LLM API keys are encrypted before being used for tool ingestion:

```python
# Keys are automatically encrypted when ZAPI is initialized
z = ZAPI(
    client_id="YOUR_CLIENT_ID",
    secret="YOUR_SECRET",
    llm_provider="anthropic",
    llm_api_key="sk-ant-your-key",  # Encrypted automatically
    llm_model_name="your-model-name"  # Use the latest available model for your provider
)

# Check if LLM key is configured
if z.has_llm_key():
    print(f"Using provider: {z.get_llm_provider()}")
    print(f"Using model: {z.get_llm_model_name()}")
```

## Core Features & Examples

### Uploading to adopt.ai

Once you've captured traffic, upload it to the adopt.ai platform for automatic API documentation:

```python
z = ZAPI(
    client_id="YOUR_CLIENT_ID",
    secret="YOUR_SECRET",
    llm_provider="anthropic",
    llm_api_key="sk-ant-YOUR_API_KEY",
    llm_model_name="your-model-name"  # Use the latest available model for your provider
)

# Capture traffic
session = z.launch_browser(url="https://app.example.com")
session.dump_logs("session.har")
session.close()

# Upload for documentation (includes encrypted LLM metadata)
z.upload_har("session.har")
```

The adopt.ai platform will:
- Parse all API calls from your HAR file
- Generate documentation automatically
- Use your encrypted LLM key for enhanced processing
- Make APIs available for LLM agents and tools

### HAR Analysis & Cost Estimation

Before uploading, analyze your HAR files to understand what will be processed and estimate costs:

```python
from zapi import analyze_har_file, HarProcessingError

try:
    # Analyze HAR file with detailed statistics
    stats, report, filtered_file = analyze_har_file(
        "session.har",
        save_filtered=True,           # Save filtered version with only API entries
        filtered_output_path="api_only.har"  # Optional custom path
    )

    # Access detailed statistics
    print(f"Total entries: {stats.total_entries:,}")
    print(f"API-relevant entries: {stats.valid_entries:,}")
    print(f"Unique domains: {stats.unique_domains:,}")
    print(f"Estimated cost: ${stats.estimated_cost_usd:.2f}")
    print(f"Estimated time: {stats.estimated_time_minutes:.1f} minutes")

    # Show which entries were filtered out and why
    print("\nSkipped entries by reason:")
    for reason, count in stats.skipped_by_reason.items():
        if count > 0:
            print(f"  {reason.replace('_', ' ').title()}: {count:,}")

    # Print full formatted report
    print("\n" + report)

except HarProcessingError as e:
    print(f"HAR analysis failed: {e}")
```

**HAR Processing Features:**
- **Smart Filtering**: Automatically excludes static assets (JS, CSS, images, fonts)
- **Cost Estimation**: Provides processing cost estimates
- **Time Estimation**: Calculates expected processing time
- **Domain Analysis**: Lists all unique domains found in the session
- **Skip Reasons**: Detailed breakdown of why entries were filtered out
- **Filtered Export**: Option to save a clean HAR file with only API-relevant entries
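The smart-filtering idea can be sketched in a few lines. The real implementation in `zapi/har_processing.py` is more thorough (MIME types, skip-reason tracking, cost estimates); this sketch keys off file extensions only:

```python
# Minimal sketch of the static-asset filtering analyze_har_file performs.
STATIC_EXTENSIONS = (".js", ".css", ".png", ".jpg", ".svg", ".woff", ".woff2", ".ico")

def filter_api_entries(har: dict) -> list:
    """Return HAR entries whose request URL does not look like a static asset."""
    entries = har.get("log", {}).get("entries", [])
    return [
        entry for entry in entries
        if not entry["request"]["url"].split("?")[0].lower().endswith(STATIC_EXTENSIONS)
    ]

har = {"log": {"entries": [
    {"request": {"url": "https://app.example.com/api/v1/users"}},
    {"request": {"url": "https://cdn.example.com/bundle.js?v=3"}},
]}}
print(len(filter_api_entries(har)))  # 1 (the bundle.js entry is dropped)
```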

### Retrieving Documented APIs

After uploading, retrieve your documented APIs programmatically:

```python
z = ZAPI(
    client_id="YOUR_CLIENT_ID",
    secret="YOUR_SECRET",
    llm_provider="groq",
    llm_api_key="gsk_YOUR_GROQ_KEY",
    llm_model_name="mixtral-8x7b-32768"
)

# Get first page of documented APIs
api_list = z.get_documented_apis(page=1, page_size=10)

# Paginate through all APIs
for page in range(1, api_list['total_pages'] + 1):
    apis = z.get_documented_apis(page=page, page_size=10)
    for api in apis['items']:
        print(f"{api['title']}: {api['path']}")
```

### Visible Browser Mode for Debugging

When developing or debugging, run with a visible browser:

```python
# See the browser in action
session = z.launch_browser(
    url="https://app.example.com",
    headless=False  # Makes browser visible
)

# Great for debugging selectors and interactions
input("Press ENTER when done navigating...")
session.dump_logs("debug_session.har")
session.close()
```

## Advanced Usage

### Custom Playwright Options

Pass any Playwright browser launch options:

```python
session = z.launch_browser(
    url="https://app.example.com",
    headless=True,
    wait_until="networkidle",  # Wait for network to be idle
    slow_mo=50,  # Slow down operations by 50ms
    timeout=30000  # 30 second timeout
)
```

## Best Practices

### 1. Use Descriptive HAR Filenames

```python
# Good - descriptive names
session.dump_logs("checkout-flow-2024-11-05.har")
session.dump_logs("user-authentication-session.har")

# Less helpful
session.dump_logs("session1.har")
session.dump_logs("test.har")
```

### 2. Organize HAR Files by Feature

```
captures/
├── authentication/
│   ├── login-flow.har
│   └── oauth-callback.har
├── checkout/
│   ├── cart-operations.har
│   └── payment-processing.har
└── admin/
    └── user-management.har
```

### 3. Always Close Sessions

Use context managers or explicit `close()` calls to clean up resources:

```python
# Option 1: Context manager (preferred)
with z.launch_browser(url="...") as session:
    # Your code here
    pass

# Option 2: Explicit close
session = z.launch_browser(url="...")
try:
    # Your code here
    pass
finally:
    session.close()
```

### 4. Complete Workflow with Analysis

Here's a complete workflow that includes HAR analysis and cost estimation:

```python
from zapi import ZAPI, load_llm_credentials, analyze_har_file

# Load credentials securely
llm_provider, llm_api_key, llm_model_name = load_llm_credentials()

# Initialize ZAPI
z = ZAPI(
    client_id="YOUR_CLIENT_ID",
    secret="YOUR_SECRET",
    llm_provider=llm_provider,
    llm_api_key=llm_api_key,
    llm_model_name=llm_model_name
)

# Capture session
session = z.launch_browser(url="https://app.example.com")
# ... navigate and interact ...
session.dump_logs("session.har")
session.close()

# Analyze before upload with cost estimation
stats, report, _ = analyze_har_file("session.har")
print(f"Found {stats.valid_entries} API entries")
print(f"Estimated cost: ${stats.estimated_cost_usd:.2f}")
print(f"Estimated time: {stats.estimated_time_minutes:.1f} minutes")

# Upload with confirmation
if input("Upload? (y/n): ").lower() == 'y':
    z.upload_har("session.har")
    print("Upload completed!")
```

## API Reference

### ZAPI Class

**`ZAPI(client_id, secret, llm_provider, llm_model_name, llm_api_key)`**
- `client_id` (str): OAuth client ID for authentication
- `secret` (str): OAuth secret key
- `llm_provider` (str): LLM provider name ("anthropic", "openai", "google", "groq")
- `llm_model_name` (str): LLM model name. Use the latest available model for your provider (check your provider's documentation for current model names)
- `llm_api_key` (str): LLM API key for the specified provider
- Parameters may be omitted when the corresponding environment variables are set (see `.devenv`), as in the bare `ZAPI()` calls used by the CLI and LangChain examples
- Raises `ZAPIValidationError` if credentials are empty or LLM key format is invalid
- Raises `ZAPIAuthenticationError` if authentication fails
- Raises `ZAPINetworkError` if network requests fail

**`launch_browser(url, headless=True, wait_until="load", **playwright_options)`**
- Returns: `BrowserSession` instance
- `url` (str): Initial URL to navigate to
- `headless` (bool): Run browser in headless mode
- `wait_until` (str): When navigation is complete ("load", "domcontentloaded", "networkidle")

**`upload_har(har_file)`**
- Uploads HAR file to adopt.ai for API documentation
- `har_file` (str): Path to HAR file
- Includes encrypted LLM metadata if LLM key is configured
- Returns: JSON response from API

**`set_llm_key(provider, api_key, model_name=None)`**
- Update LLM configuration after initialization
- `provider` (str): LLM provider name
- `api_key` (str): API key for the provider
- `model_name` (str, optional): Model name to use

**`has_llm_key()`**
- Returns: True if LLM key is configured, False otherwise

**`get_llm_provider()`**
- Returns: Configured LLM provider name or None

**`get_llm_model_name()`**
- Returns: Configured LLM model name or None

**`get_documented_apis(page=1, page_size=10)`**
- Retrieves documented APIs with pagination
- `page` (int): Page number (default: 1)
- `page_size` (int): Items per page (default: 10)
- Returns: JSON with `items`, `total`, `page`, `page_size`, `total_pages`

### HAR Analysis Functions

**`analyze_har_file(har_file_path, save_filtered=False, filtered_output_path=None)`**
- Comprehensive HAR file analysis with statistics and filtering
- `har_file_path` (str): Path to the HAR file to analyze
- `save_filtered` (bool): Whether to save a filtered HAR file with only API entries
- `filtered_output_path` (str): Optional path for filtered HAR file (auto-generated if None)
- Returns: `(HarStats, formatted_report, filtered_file_path)` tuple
- Automatically excludes static assets and non-API content
- Provides cost and time estimates for processing

**`load_llm_credentials()`**
- Load LLM credentials securely from environment variables or configuration
- Returns: `(provider, api_key, model_name)` tuple
- Supports .env files and fallback configuration

**`HarProcessor(har_file_path)`**
- Low-level HAR processing class for custom analysis
- Methods: `load_and_process()`, `save_filtered_har()`, `get_summary_report()`
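The exact filtering rules live inside `HarProcessor`, but the core idea — dropping entries whose responses are static assets — can be sketched in a few self-contained lines (the mime-type prefixes here are illustrative, not ZAPI's actual rule set):

```python
STATIC_PREFIXES = ("image/", "font/", "text/css", "text/javascript", "application/javascript")

def filter_api_entries(har: dict) -> dict:
    """Return a copy of a parsed HAR dict keeping only API-looking entries."""
    def is_api(entry: dict) -> bool:
        mime = entry.get("response", {}).get("content", {}).get("mimeType", "")
        return not mime.startswith(STATIC_PREFIXES)  # str.startswith accepts a tuple
    kept = [e for e in har["log"]["entries"] if is_api(e)]
    return {"log": {**har["log"], "entries": kept}}
```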

### HarStats Object

```python
@dataclass
class HarStats:
    total_entries: int              # Total entries in HAR file
    valid_entries: int              # API-relevant entries after filtering
    skipped_entries: int            # Entries filtered out
    unique_domains: int             # Number of unique domains
    estimated_cost_usd: float       # Estimated processing cost
    estimated_time_minutes: float   # Estimated processing time
    skipped_by_reason: Dict[str, int]  # Breakdown by skip reason
    domains: List[str]              # List of all domains found
```
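Because `HarStats` is a plain dataclass, its fields read directly. A runnable illustration (the dataclass is restated here only so the snippet is self-contained, and the numbers are made up — in practice the object comes from `analyze_har_file`):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class HarStats:  # restated from the reference above for a self-contained example
    total_entries: int
    valid_entries: int
    skipped_entries: int
    unique_domains: int
    estimated_cost_usd: float
    estimated_time_minutes: float
    skipped_by_reason: Dict[str, int] = field(default_factory=dict)
    domains: List[str] = field(default_factory=list)

stats = HarStats(120, 34, 86, 2, 0.41, 2.5,
                 {"static-asset": 70, "analytics": 16},
                 ["api.example.com", "cdn.example.com"])
print(f"kept {stats.valid_entries}/{stats.total_entries} entries from {stats.unique_domains} domains")
for reason, count in sorted(stats.skipped_by_reason.items(), key=lambda kv: -kv[1]):
    print(f"  skipped ({reason}): {count}")
```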

### BrowserSession Class

| Method | Description |
|--------|-------------|
| `navigate(url, wait_until="networkidle")` | Navigate to URL |
| `click(selector, **kwargs)` | Click element by CSS selector |
| `fill(selector, value, **kwargs)` | Fill form field |
| `wait_for(selector=None, timeout=None)` | Wait for selector or timeout |
| `dump_logs(filepath)` | Export HAR file |
| `close()` | Close browser and cleanup |

## How ZAPI Works

ZAPI's workflow is simple but powerful:

1. **Authentication**: Calls the adopt.ai OAuth API to obtain an access token
2. **LLM Key Encryption**: Encrypts your LLM API key for secure tool ingestion
3. **Token Injection**: Automatically injects the Bearer token in all request headers
4. **Traffic Capture**: Records complete network activity during browser interactions
5. **Smart Analysis**: Filters HAR files to exclude static assets and estimate costs
6. **Export**: Saves everything to standard HAR format compatible with Chrome DevTools
7. **Documentation**: Uploads to adopt.ai with secured LLM metadata for enhanced API processing

## Use Cases

- **API Discovery**: Reverse-engineer undocumented APIs from web applications
- **LLM Training Data**: Create datasets of API calls for training language models
- **Testing & QA**: Capture network traffic for debugging and analysis
- **Documentation**: Automatically generate API documentation from real usage
- **Integration Development**: Understand third-party APIs without documentation
- **Security Research**: Analyze application behavior and API communication patterns

## Get Started Today

Install ZAPI and start discovering APIs:

```bash
pip install -r requirements.txt
playwright install

# Set up your .env file with credentials
echo "LLM_PROVIDER=anthropic" >> .env
echo "LLM_API_KEY=sk-ant-your-key" >> .env
echo "LLM_MODEL_NAME=your-model-name" >> .env  # Use the latest available model for your provider

python demo.py
```

Join the community and contribute:

* **GitHub**: https://github.com/adoptai/zapi
* **adopt.ai Platform**: https://app.adopt.ai
* **License**: MIT


================================================
FILE: examples/async_usage.py
================================================
"""
Advanced async usage example for ZAPI.

This demonstrates how to use the async API directly for concurrent
operations or integration with async frameworks.
"""

import asyncio

from zapi.session import BrowserSession


async def main():
    print("Advanced async usage example\n")

    # Example 1: Using async methods directly
    print("Example 1: Direct async API usage")
    session = BrowserSession(auth_token="YOUR_TOKEN", headless=True)

    await session._initialize(initial_url="https://app.example.com")
    await session._wait_for_async(timeout=2000)
    await session._dump_logs_async("async_example1.har")
    await session._close_async()
    print("✓ HAR file saved to async_example1.har\n")

    # Example 2: Concurrent sessions (multiple browsers at once)
    print("Example 2: Running multiple sessions concurrently")

    async def capture_session(url, output_file):
        """Helper to capture a session."""
        session = BrowserSession(auth_token="YOUR_TOKEN", headless=True)
        await session._initialize(initial_url=url)
        await session._wait_for_async(timeout=1000)
        await session._dump_logs_async(output_file)
        await session._close_async()
        print(f"✓ Captured {url} -> {output_file}")

    # Run multiple sessions concurrently
    await asyncio.gather(
        capture_session("https://api.example.com/v1/users", "async_users.har"),
        capture_session("https://api.example.com/v1/products", "async_products.har"),
        capture_session("https://api.example.com/v1/orders", "async_orders.har"),
    )
    print("\n✓ All concurrent sessions completed\n")

    # Example 3: Async context manager
    print("Example 3: Using async context manager")
    session = BrowserSession(auth_token="YOUR_TOKEN", headless=True)
    await session._initialize(initial_url="https://app.example.com")

    async with session:
        await session._navigate_async("/dashboard")
        await session._wait_for_async(timeout=2000)
        await session._dump_logs_async("async_context.har")
    print("✓ HAR file saved to async_context.har (auto-cleanup)\n")

    print("All async examples completed!")


if __name__ == "__main__":
    asyncio.run(main())


================================================
FILE: examples/basic_usage.py
================================================
"""
Basic usage example for ZAPI.

This demonstrates the minimal API for launching a browser,
navigating to a URL, and capturing network logs in HAR format.
"""

from zapi import ZAPI


def main():
    # Example 1: Basic usage
    print("Example 1: Basic ZAPI usage")
    z = ZAPI(client_id="YOUR_CLIENT_ID", secret="YOUR_SECRET")
    session = z.launch_browser(url="https://app.example.com/dashboard")

    # The session is already on the dashboard page
    # You can interact with it if needed
    session.wait_for(timeout=2000)  # Wait 2 seconds

    # Dump network logs to HAR file
    session.dump_logs("example1_session.har")
    session.close()
    print("✓ HAR file saved to example1_session.har\n")

    # Example 2: Multi-page navigation with interactions
    print("Example 2: Multi-page navigation with interactions")
    z2 = ZAPI(client_id="YOUR_CLIENT_ID", secret="YOUR_SECRET")
    session2 = z2.launch_browser(url="https://app.example.com")

    # Navigate to different pages
    session2.navigate("/dashboard")
    session2.wait_for(timeout=1000)

    session2.navigate("/profile")
    session2.wait_for(timeout=1000)

    # Click on an element (example)
    # session2.click("#settings-button")

    # Fill a form (example)
    # session2.fill("#search-input", "test query")

    session2.dump_logs("example2_session.har")
    session2.close()
    print("✓ HAR file saved to example2_session.har\n")

    # Example 3: Using as context manager (auto-cleanup)
    print("Example 3: Using context manager for automatic cleanup")
    z3 = ZAPI(client_id="YOUR_CLIENT_ID", secret="YOUR_SECRET")
    session3 = z3.launch_browser(url="https://app.example.com")

    with session3:
        session3.navigate("/api-endpoint")
        session3.wait_for(timeout=2000)
        session3.dump_logs("example3_session.har")
    # Browser automatically closed when exiting context
    print("✓ HAR file saved to example3_session.har (auto-cleanup)\n")

    print("All examples completed! Check the generated .har files.")


if __name__ == "__main__":
    main()


================================================
FILE: examples/langchain/README.md
================================================
# ZAPI LangChain Integration

This example demonstrates how to use ZAPI with LangChain to automatically convert your documented APIs into LangChain tools.

## Quick Start

### 1. Basic Usage (Recommended)

```python
from langchain.agents import create_agent
from zapi import ZAPI, interactive_chat

# Initialize ZAPI and create agent
z = ZAPI()

# Get ZAPI tools automatically
agent = create_agent(
    z.get_llm_model_name(),
    z.get_zapi_tools(),  # Simple one-liner to get all tools
    system_prompt="You are a helpful assistant with access to APIs."
)

# Start interactive chat
interactive_chat(agent)
```

### 2. Run the Demo

```bash
python demo.py
```

## Optional: Custom API Authentication Headers

If your APIs require custom authentication headers, you can provide them via a JSON file.

### Create API Headers File

Create a file named `api-headers.json` in the `zapi/` root directory:

```json
{
  "headers": {
    "Authorization": "Bearer your-api-token-here",
    "X-API-Key": "your-api-key-here",
    "X-Client-ID": "your-client-id-here"
  }
}
```

### Header Examples

**Bearer Token Authentication:**
```json
{
  "headers": {
    "Authorization": "Bearer sk_live_abc123..."
  }
}
```

**API Key Authentication:**
```json
{
  "headers": {
    "X-API-Key": "your_api_key_here",
    "X-Client-ID": "your_client_id"
  }
}
```

**Custom Headers:**
```json
{
  "headers": {
    "X-Custom-Auth": "custom_value",
    "X-Organization": "org_123",
    "X-Tenant": "tenant_456"
  }
}
```

## Usage

```python
from zapi import ZAPI

z = ZAPI()
tools = z.get_zapi_tools()  # Automatically loads api-headers.json if it exists
```

That's it! The `get_zapi_tools()` method automatically:
- Fetches your documented APIs from ZAPI platform
- Loads authentication headers from `api-headers.json` (if present)
- Converts APIs into LangChain-compatible tools

## Creating an Agent

ZAPI works seamlessly with LangChain's agent framework. Here's the complete flow:

```python
from langchain.agents import create_agent
from zapi import ZAPI, interactive_chat

# 1. Initialize ZAPI
z = ZAPI()

# 2. Create agent with ZAPI tools
agent = create_agent(
    z.get_llm_model_name(),      # Gets the LLM model (use the latest available model for your provider)
    z.get_zapi_tools(),           # Gets all your documented APIs as tools
    system_prompt="You are a helpful assistant with access to APIs."
)

# 3. Start chatting!
interactive_chat(agent)
```

### What happens here?

- **`z.get_llm_model_name()`**: Returns the LLM model name configured in your ZAPI credentials
- **`z.get_zapi_tools()`**: Fetches and converts your APIs into LangChain tools
- **`create_agent()`**: Creates a LangChain agent that can use your APIs
- **`interactive_chat()`**: Starts an interactive terminal chat session with the agent

The agent will automatically:
- Understand when to call your APIs based on user queries
- Extract parameters from natural language
- Execute API calls through ZAPI
- Present results in a conversational format

## Security Notes

- **Never commit your actual API keys to version control**
- Add `api-headers.json` to your `.gitignore` file
- Use environment-specific headers files for different environments
- For security, the tool reports which headers were loaded without displaying their values

## What ZAPI Does

1. **Fetches Documented APIs**: Retrieves all APIs you've documented in ZAPI platform
2. **Converts to LangChain Tools**: Automatically creates LangChain tools with proper schemas
3. **Handles Authentication**: Applies custom headers (if provided) to all API requests
4. **Executes API Calls**: Routes tool calls through ZAPI backend for execution

## Features

- ✅ **Zero-config**: Works out of the box with `z.get_zapi_tools()`
- ✅ **Type-safe**: Automatically generates proper parameter schemas
- ✅ **Flexible auth**: Supports custom headers via JSON file
- ✅ **Error handling**: Gracefully handles API failures
- ✅ **Interactive chat**: Built-in `interactive_chat()` utility

## File Structure

```
zapi/
├── api-headers.json        # Optional: Your API headers (don't commit this!)
├── examples/
│   └── langchain/
│       ├── demo.py         # Demo script
│       └── README.md       # This file
└── ...
```

## Troubleshooting

- If no headers file is found, the tool will proceed without authentication headers
- Check the console output for confirmation that headers were loaded
- Ensure your JSON file is valid (use a JSON validator if needed)
- Make sure you have documented APIs in your ZAPI platform account
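For the JSON-validity check, a short Python helper can confirm the file's structure without ever printing header values (a sketch; `check_headers_file` is a hypothetical utility, not part of ZAPI):

```python
import json
from pathlib import Path

def check_headers_file(path: str = "api-headers.json") -> list:
    """Validate api-headers.json and return the header *names* only."""
    data = json.loads(Path(path).read_text())  # raises on invalid JSON
    headers = data.get("headers")
    if not isinstance(headers, dict):
        raise ValueError("expected a top-level 'headers' object")
    return sorted(headers)  # names only — values never reach logs

# print(check_headers_file())  # e.g. ['Authorization', 'X-API-Key']
```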


================================================
FILE: examples/langchain/__init__.py
================================================
"""
ZAPI Langchain Examples

This package contains comprehensive examples showing how to use ZAPI
with Langchain to create intelligent agents.

Examples:
- demo.py: Agent creation and usage demonstration
"""


================================================
FILE: examples/langchain/demo.py
================================================
from langchain.agents import create_agent
from zapi import ZAPI, interactive_chat


def demo_zapi_langchain():
    """ZAPI LangChain integration demo."""
    print("\n🚀 ZAPI LangChain - Demo Example")
    print("=" * 40)

    # Initialize ZAPI and create agent
    z = ZAPI()

    agent = create_agent(
        z.get_llm_model_name(), z.get_zapi_tools(), system_prompt="You are a helpful assistant with access to APIs."
    )

    # Start interactive chat
    interactive_chat(agent, debug_mode=False)


# Run the demo
demo_zapi_langchain()


================================================
FILE: examples/llm_keys_usage.py
================================================
"""
Example demonstrating LLM API key management with ZAPI.

This shows how to securely provide LLM API keys for the 4 main supported providers.
Keys will be encrypted and transmitted to the adopt.ai discovery service.

Supported providers: Anthropic, OpenAI, Google, Groq
"""

from zapi import ZAPI, LLMProvider


def main():
    # Example 1: Initialize ZAPI with single LLM key in constructor (Anthropic primary)
    print("Example 1: ZAPI with single LLM key in constructor (Anthropic primary)")

    # Single key approach - one provider per client instance
    z = ZAPI(
        client_id="YOUR_CLIENT_ID",
        secret="YOUR_SECRET",
        llm_provider="anthropic",  # Primary supported provider
        llm_api_key="sk-ant-your-anthropic-key-here",
    )

    print(f"Configured provider: {z.get_llm_provider()}")
    print(f"Has LLM key: {z.has_llm_key()}")

    # Launch browser and capture session
    session = z.launch_browser(url="https://app.example.com", headless=False)
    input("Navigate around the app, then press ENTER to continue...")

    # Export HAR with encrypted LLM key
    session.dump_logs("example_with_key.har")

    # Upload to adopt.ai with encrypted key
    z.upload_har("example_with_key.har")

    session.close()
    print("✓ Session completed with encrypted LLM key included\n")

    # Example 2: Set LLM key after initialization
    print("Example 2: Setting LLM key after initialization")

    z2 = ZAPI(client_id="YOUR_CLIENT_ID", secret="YOUR_SECRET")
    print(f"Initially has key: {z2.has_llm_key()}")

    # Add key later - showcasing one of the 4 main providers
    z2.set_llm_key("anthropic", "sk-ant-another-key-here")

    print(f"After setting key: {z2.has_llm_key()}")
    print(f"Configured provider: {z2.get_llm_provider()}")

    # Example 3: Multiple provider support (single provider per client)
    print("\nExample 3: Using different providers (create separate clients)")

    # OpenAI example
    z3a = ZAPI(client_id="YOUR_CLIENT_ID", secret="YOUR_SECRET")
    z3a.set_llm_key("openai", "sk-your-openai-key-here")
    print(f"OpenAI client provider: {z3a.get_llm_provider()}")

    # Groq example
    z3b = ZAPI(client_id="YOUR_CLIENT_ID", secret="YOUR_SECRET")
    z3b.set_llm_key("groq", "gsk_your-groq-key-here")
    print(f"Groq client provider: {z3b.get_llm_provider()}")

    # Google example
    z3c = ZAPI(client_id="YOUR_CLIENT_ID", secret="YOUR_SECRET")
    z3c.set_llm_key("google", "your-google-api-key-here")
    print(f"Google client provider: {z3c.get_llm_provider()}")

    # Example 4: Working without LLM keys (backward compatibility)
    print("\nExample 4: Working without LLM keys (backward compatibility)")

    z4 = ZAPI(client_id="YOUR_CLIENT_ID", secret="YOUR_SECRET")
    print(f"Has LLM key: {z4.has_llm_key()}")

    # This will work exactly as before - no encrypted keys sent
    session4 = z4.launch_browser(url="https://app.example.com")
    session4.wait_for(timeout=1000)
    session4.dump_logs("example_no_keys.har")
    z4.upload_har("example_no_keys.har")  # byok_enabled: false
    session4.close()
    print("✓ Session completed without LLM keys (legacy mode)")

    # Example 5: Show all 4 supported providers
    print("\nExample 5: All 4 main supported LLM providers")
    print(f"All supported providers: {list(LLMProvider.get_all_providers())}")

    from zapi.providers import get_supported_providers_info, is_primary_provider

    providers_info = get_supported_providers_info()
    for provider_name, info in providers_info.items():
        support_level = "🔥 PRIMARY" if is_primary_provider(provider_name) else "⭐ MAIN"
        print(f"- {info['display_name']}: {support_level} - {info['description']}")

    print("\n💡 ZAPI supports 4 main providers: Anthropic, OpenAI, Google, Groq")
    print("   Each client handles one provider's key for security and simplicity.")
    print("   All providers have complete validation and optimized integration.")

    # Example 6: Demonstrating API key format validation
    print("\nExample 6: API key format validation for each provider")

    key_examples = {
        "anthropic": "sk-ant-api03-example-key-here",
        "openai": "sk-your-openai-key-here",
        "groq": "gsk_your-groq-key-here",
        "google": "your-google-api-key-here",
    }

    for provider, example_key in key_examples.items():
        print(f"- {provider.title()}: {example_key[:15]}...")


if __name__ == "__main__":
    main()


================================================
FILE: examples/simple_usage.py
================================================
"""
Simplest possible ZAPI usage - exactly as shown in documentation.
"""

from zapi import ZAPI


def main():
    # Create ZAPI instance with your client credentials
    z = ZAPI(client_id="YOUR_CLIENT_ID", secret="YOUR_SECRET")

    # Launch browser and navigate to URL
    session = z.launch_browser(url="https://app.example.com/dashboard")

    # Dump network logs to HAR file
    session.dump_logs("session.har")

    # Clean up
    session.close()

    print("✓ Network logs saved to session.har")


if __name__ == "__main__":
    main()


================================================
FILE: pyproject.toml
================================================
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "zapi"
version = "0.1.0"
description = "Zero-Config API Intelligence - automatically discover, understand, and prepare APIs for LLM and agent workflows"
readme = "README.md"
requires-python = ">=3.9"
license = {text = "MIT"}
authors = [
    {name = "ZAPI Contributors"}
]
keywords = ["api", "llm", "automation", "browser", "network", "har"]
classifiers = [
    "Development Status :: 3 - Alpha",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
]
dependencies = [
    "playwright>=1.40.0",
    "cryptography>=41.0.0",
    "httpx>=0.25.0",
    "pydantic>=2.0.0",
    "python-dotenv>=1.0.0",
    "langchain>=1.0.0",
    "langchain-anthropic>=1.0.0",
    "langchain-openai>=1.0.0",
    "click>=8.0.0",
]

[project.urls]
Homepage = "https://github.com/adoptai/zapi"
Repository = "https://github.com/adoptai/zapi"

[project.scripts]
zapi = "zapi.cli:cli"

[tool.setuptools.packages.find]
where = ["."]
include = ["zapi*"]

[tool.ruff]
# Set the maximum line length
line-length = 120

# Target Python 3.9+
target-version = "py39"

# Exclude common directories
exclude = [
    ".git",
    ".github",
    ".venv",
    "venv",
    "__pycache__",
    "*.egg-info",
    "build",
    "dist",
    "docs",
    ".pytest_cache",
    ".ruff_cache",
]

[tool.ruff.lint]
# Enable specific rule sets
select = [
    "E",   # pycodestyle errors
    "W",   # pycodestyle warnings
    "F",   # pyflakes
    "I",   # isort
    "N",   # pep8-naming
    "UP",  # pyupgrade
    "B",   # flake8-bugbear
    "C4",  # flake8-comprehensions
    "SIM", # flake8-simplify
]

# Ignore specific rules
ignore = [
    "E501",  # Line too long (handled by formatter)
    "B008",  # Do not perform function calls in argument defaults
    "B905",  # zip() without an explicit strict= parameter
    "B904",  # Within except clause, raise with from err - too strict for this codebase
    "SIM105",  # Use contextlib.suppress() instead of try-except-pass - we prefer explicit try-except for clarity
]

# Allow autofix for all enabled rules
fixable = ["ALL"]
unfixable = []

[tool.ruff.format]
# Use double quotes for strings
quote-style = "double"

# Indent with spaces
indent-style = "space"

# Use Unix-style line endings
line-ending = "auto"

[tool.ruff.lint.isort]
known-first-party = ["zapi"]


================================================
FILE: requirements.txt
================================================
playwright>=1.40.0
requests>=2.31.0
cryptography>=41.0.0
httpx>=0.25.0
pydantic>=2.0.0
python-dotenv>=1.0.0
langchain>=1.0.0
langchain-anthropic>=1.0.0
langchain-openai>=1.0.0
click>=8.0.0

# Development dependencies
ruff>=0.6.0
pre-commit>=3.0.0



================================================
FILE: scripts/README.md
================================================
# ZAPI Scripts

Utility scripts for ZAPI development and maintenance.

## Pre-commit Script

**File:** `pre-commit.sh`

Runs Ruff linting and formatting checks before allowing a commit.

### Usage

```bash
# Make it executable (one-time)
chmod +x scripts/pre-commit.sh

# Run manually
./scripts/pre-commit.sh
```

### What it checks

- ✅ Ruff linting (with auto-fix suggestions)
- ✅ Ruff formatting (with format suggestions)
- ❌ Exits with error if checks fail

### Alternative: Use pre-commit hooks

For automatic checks on every commit:

```bash
pip install pre-commit
pre-commit install
```

This uses `.pre-commit-config.yaml` and runs automatically on `git commit`.


================================================
FILE: scripts/pre-commit.sh
================================================
#!/bin/bash
# Pre-commit script for ZAPI
# This script runs Ruff linting and formatting checks before allowing a commit

set -e  # Exit on error

echo "🔍 Running pre-commit checks..."
echo ""

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Check if ruff is installed
if ! command -v ruff &> /dev/null; then
    echo -e "${RED}❌ Ruff is not installed!${NC}"
    echo "Install it with: pip install ruff"
    exit 1
fi

# Run Ruff linter
echo "📝 Running Ruff linter..."
if ruff check .; then
    echo -e "${GREEN}✅ Linting passed!${NC}"
else
    echo -e "${RED}❌ Linting failed!${NC}"
    echo ""
    echo "Run 'ruff check . --fix' to auto-fix issues"
    exit 1
fi

echo ""

# Run Ruff formatter check
echo "🎨 Checking code formatting..."
if ruff format --check .; then
    echo -e "${GREEN}✅ Formatting check passed!${NC}"
else
    echo -e "${RED}❌ Code is not formatted correctly!${NC}"
    echo ""
    echo "Run 'ruff format .' to format your code"
    exit 1
fi

echo ""
echo -e "${GREEN}✨ All pre-commit checks passed! Ready to commit.${NC}"


================================================
FILE: setup.py
================================================
"""
Setup script for ZAPI - maintained for backwards compatibility.
Prefer using pyproject.toml for modern Python packaging.
"""

from setuptools import find_packages, setup

with open("README.md", encoding="utf-8") as fh:
    long_description = fh.read()

setup(
    name="zapi",
    version="0.1.0",
    author="ZAPI Contributors",
    description="Zero-Config API Intelligence - automatically discover, understand, and prepare APIs for LLM and agent workflows",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/adoptai/zapi",
    packages=find_packages(),
    classifiers=[
        "Development Status :: 3 - Alpha",
        "Intended Audience :: Developers",
        "License :: OSI Approved :: MIT License",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
        "Programming Language :: Python :: 3.12",
    ],
    python_requires=">=3.9",
    install_requires=[
        "playwright>=1.40.0",
        "cryptography>=41.0.0",
        "httpx>=0.25.0",
        "pydantic>=2.0.0",
        "python-dotenv>=1.0.0",
        "langchain>=1.0.0",
        "langchain-anthropic>=1.0.0",
        "langchain-openai>=1.0.0",
        "click>=8.0.0",
    ],
    keywords="api llm automation browser network har",
)


================================================
FILE: zapi/__init__.py
================================================
"""
ZAPI - Zero-Config API Intelligence

An open-source library that automatically discovers, understands,
and prepares APIs for LLM and agent workflows.
"""

from .auth import AuthMode
from .constants import BASE_URL
from .core import ZAPI
from .encryption import LLMKeyEncryption
from .exceptions import ZAPIAuthenticationError, ZAPIError, ZAPINetworkError, ZAPIValidationError
from .har_processing import (
    HarProcessingError,
    HarProcessor,
    HarStats,
    analyze_har_file,
)
from .providers import LLMProvider
from .session import BrowserInitializationError, BrowserNavigationError, BrowserSession, BrowserSessionError
from .utils import (
    interactive_chat,
    load_llm_credentials,
)

__version__ = "0.1.0"
__all__ = [
    "ZAPI",
    "BrowserSession",
    "AuthMode",
    "LLMProvider",
    "LLMKeyEncryption",
    "load_llm_credentials",
    # HAR processing
    "HarProcessor",
    "HarStats",
    "analyze_har_file",
    "interactive_chat",
    # Exception classes
    "ZAPIError",
    "ZAPIAuthenticationError",
    "ZAPIValidationError",
    "ZAPINetworkError",
    "BrowserSessionError",
    "BrowserNavigationError",
    "BrowserInitializationError",
    "HarProcessingError",
    "BASE_URL",
]


================================================
FILE: zapi/auth.py
================================================
"""Authentication handlers for different auth modes."""

from typing import Literal

from playwright.async_api import BrowserContext, Page

from .exceptions import AuthError

AuthMode = Literal["localStorage", "cookie", "header"]


async def apply_localstorage_auth(page: Page, token: str, key: str = "authToken") -> None:
    """
    Inject authentication token into localStorage.

    Args:
        page: Playwright page instance
        token: Authentication token
        key: localStorage key name (default: "authToken")
    """
    # Pass key/token as an argument so quotes or special characters cannot break the script
    await page.evaluate("([key, value]) => localStorage.setItem(key, value)", [key, token])


async def apply_cookie_auth(page: Page, token: str, name: str = "authToken", domain: str = None) -> None:
    """
    Set authentication token as a cookie.

    Args:
        page: Playwright page instance
        token: Authentication token
        name: Cookie name (default: "authToken")
        domain: Cookie domain (optional)
    """
    cookie = {
        "name": name,
        "value": token,
    }
    if domain:
        cookie["domain"] = domain
        cookie["path"] = "/"
    else:
        # Playwright requires either a url or a domain/path pair; default to the current page
        cookie["url"] = page.url

    await page.context.add_cookies([cookie])


async def apply_header_auth(context: BrowserContext, token: str) -> None:
    """
    Add Authorization header to all requests.

    Args:
        context: Playwright browser context
        token: Authentication token (will be added as "Bearer <token>")
    """
    await context.set_extra_http_headers({"Authorization": f"Bearer {token}"})


def get_auth_handler(auth_mode: AuthMode):
    """
    Factory function to get the appropriate auth handler.

    Args:
        auth_mode: Authentication mode ("localStorage", "cookie", or "header")

    Returns:
        Corresponding auth handler function

    Raises:
        AuthError: If auth_mode is not recognized
    """
    handlers = {
        "localStorage": apply_localstorage_auth,
        "cookie": apply_cookie_auth,
        "header": apply_header_auth,
    }

    if auth_mode not in handlers:
        raise AuthError(f"Invalid auth_mode: {auth_mode}. Must be one of: {', '.join(handlers.keys())}")

    return handlers[auth_mode]

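The factory above is a plain dict dispatch. A minimal stdlib-only sketch of the same pattern, with hypothetical synchronous stubs standing in for the Playwright coroutines so it runs standalone:

```python
# Sketch of the get_auth_handler dispatch pattern from zapi/auth.py.
# The handler stubs are hypothetical stand-ins for the async Playwright helpers.

class AuthError(Exception):
    """Local mirror of zapi.exceptions.AuthError for this sketch."""

def apply_localstorage_auth(token: str):
    return ("localStorage", token)

def apply_cookie_auth(token: str):
    return ("cookie", token)

def apply_header_auth(token: str):
    return ("header", token)

HANDLERS = {
    "localStorage": apply_localstorage_auth,
    "cookie": apply_cookie_auth,
    "header": apply_header_auth,
}

def get_auth_handler(auth_mode: str):
    """Return the handler for auth_mode, or raise AuthError for unknown modes."""
    if auth_mode not in HANDLERS:
        raise AuthError(
            f"Invalid auth_mode: {auth_mode}. Must be one of: {', '.join(HANDLERS)}"
        )
    return HANDLERS[auth_mode]
```

Keeping the mapping in a dict means adding a new auth mode is a one-line change, and the error message stays in sync with the supported modes automatically.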

================================================
FILE: zapi/cli.py
================================================
"""Command-line interface for ZAPI."""

import time
from pathlib import Path

import click

from .core import ZAPI
from .har_processing import analyze_har_file


@click.group()
def cli():
    """ZAPI command-line tool."""
    pass


@cli.command()
@click.argument("url")
@click.option("--output", default="session.har", help="Output HAR file path.")
@click.option("--headless/--no-headless", default=False, help="Run browser in headless mode.")
def capture(url, output, headless):
    """Capture a browser session to a HAR file."""
    zapi_client = ZAPI()
    output_path = Path(output)

    click.echo(f"🌐 Launching browser to capture: {url}")
    session = zapi_client.launch_browser(url=url, headless=headless)

    try:
        if not headless:
            click.echo("📋 Use the browser freely, then press ENTER to save the HAR...")
            input()
        else:
            click.echo("Running in headless mode. The script will automatically close the session.")
            # In a real-world headless scenario, you might add some automated actions here.
            # For now, we'll just wait for a moment.
            time.sleep(10)  # Wait 10 seconds

        click.echo("💾 Saving session logs...")
        session.dump_logs(str(output_path))
        click.echo(f"✅ Session saved to: {output_path}")
    finally:
        session.close()
        click.echo("🧹 Browser session closed.")


@cli.command()
@click.argument("har_file", type=click.Path(exists=True))
def analyze(har_file):
    """Analyze a HAR file."""
    click.echo(f"🔍 Analyzing HAR file: {har_file}")
    stats, report, filtered_path = analyze_har_file(har_file, save_filtered=True)

    click.echo("\n📊 HAR Analysis Results:")
    click.echo(f"   ✅ API-relevant entries: {stats.valid_entries:,}")
    click.echo(f"   💰 Estimated cost: ${stats.estimated_cost_usd:.2f}")
    click.echo(f"   ⏱️  Estimated processing time: {round(stats.estimated_time_minutes)} minutes")
    if filtered_path:
        click.echo(f"   🧹 Filtered HAR saved to: {filtered_path}")


@cli.command()
@click.argument("har_file", type=click.Path(exists=True))
def upload(har_file):
    """Upload a HAR file to ZAPI."""
    zapi_client = ZAPI()
    click.echo(f"☁️ Uploading HAR file: {har_file}")
    zapi_client.upload_har(har_file)
    click.echo("✅ HAR file uploaded successfully!")


if __name__ == "__main__":
    cli()

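The `analyze` command's cost and time figures come from HarProcessor's per-entry constants (`COST_PER_ENTRY = 0.02` USD, `TIME_PER_ENTRY_MINUTES = 24 / 60`). A sketch of the arithmetic, with a hypothetical helper name:

```python
# Per-entry constants from zapi/har_processing.py
COST_PER_ENTRY = 0.02          # USD per API-relevant HAR entry
TIME_PER_ENTRY_MINUTES = 24 / 60  # 24 seconds per entry

def estimate_processing(valid_entries: int) -> tuple[float, float]:
    """Return (estimated_cost_usd, estimated_time_minutes) for a HAR file."""
    cost = round(valid_entries * COST_PER_ENTRY, 2)
    minutes = valid_entries * TIME_PER_ENTRY_MINUTES
    return cost, minutes
```

For a session with 100 API-relevant entries this yields roughly $2.00 and 40 minutes, matching the figures `analyze` prints.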

================================================
FILE: zapi/constants.py
================================================
BASE_URL = "https://connect.adopt.ai"


================================================
FILE: zapi/core.py
================================================
"""Core ZAPI class implementation."""

import asyncio
import json
from typing import Callable, Optional

import httpx
import requests

from .constants import BASE_URL
from .encryption import LLMKeyEncryption
from .exceptions import (
    AuthError,
    LLMKeyError,
    NetworkError,
    ZAPIError,
    ZAPINetworkError,
    ZAPIValidationError,
)
from .providers import validate_llm_keys
from .session import BrowserSession
from .utils import load_zapi_credentials, set_llm_api_key_env


class ZAPI:
    """
    Zero-Config API Intelligence main class.

    This class provides a simple interface to launch browser sessions,
    capture network traffic, and export HAR files for API discovery.
    """

    def __init__(
        self,
        client_id: Optional[str] = None,
        secret: Optional[str] = None,
        llm_provider: Optional[str] = None,
        llm_model_name: Optional[str] = None,
        llm_api_key: Optional[str] = None,
    ):
        """
        Initialize ZAPI instance.

        Args:
            client_id: Client ID for authentication. If None, loads from ADOPT_CLIENT_ID env var.
            secret: Secret key for authentication. If None, loads from ADOPT_SECRET_KEY env var.
            llm_provider: LLM provider name (e.g., "anthropic"). If None, loads from LLM_PROVIDER env var.
            llm_model_name: LLM model name (e.g., "claude-3-5-sonnet-20241022"). If None, loads from LLM_MODEL_NAME env var.
            llm_api_key: LLM API key for the specified provider. If None, loads from LLM_API_KEY env var.

        Raises:
            ValueError: If client_id or secret is empty, or LLM key format is invalid
            RuntimeError: If token fetch fails
        """
        # Auto-load credentials from environment if not provided
        if client_id is None or secret is None or llm_provider is None or llm_model_name is None or llm_api_key is None:
            env_client_id, env_secret, env_llm_provider, env_llm_model_name, env_llm_api_key = load_zapi_credentials()

            # Use provided values or fallback to environment values
            client_id = client_id or env_client_id
            secret = secret or env_secret
            llm_provider = llm_provider or env_llm_provider
            llm_model_name = llm_model_name or env_llm_model_name
            llm_api_key = llm_api_key or env_llm_api_key

        if not client_id or not client_id.strip():
            raise ZAPIValidationError("client_id cannot be empty")
        if not secret or not secret.strip():
            raise ZAPIValidationError("secret cannot be empty")

        self.client_id = client_id
        self.secret = secret

        # Fetch auth token and extract org_id
        self.auth_token, self.org_id, self.email = self._fetch_auth_token()

        # Initialize encryption handler
        self._key_encryptor = LLMKeyEncryption(self.org_id)

        # Validate and encrypt LLM key if provided
        self._encrypted_llm_key: Optional[str] = None
        self._llm_provider: Optional[str] = llm_provider
        self._llm_model_name: Optional[str] = llm_model_name
        self.set_llm_key(llm_provider, llm_api_key, llm_model_name)

        # Automatically set LLM API key in environment for LangChain compatibility
        if self._llm_provider and self._encrypted_llm_key:
            try:
                set_llm_api_key_env(self._llm_provider, self.get_decrypted_llm_key())
            except Exception:
                # Silently fail if LangChain integration is not available
                pass

    def _fetch_auth_token(self) -> tuple[str, str, str]:
        """
        Fetch authentication token from adopt.ai API and extract org_id and email.

        Returns:
            Tuple of (authentication_token, org_id, email)

        Raises:
            RuntimeError: If token fetch fails or org_id extraction fails
        """
        url = f"{BASE_URL}/v1/auth/token"
        payload = {"clientId": self.client_id, "secret": self.secret}
        headers = {"accept": "application/json", "Content-Type": "application/json"}

        try:
            # Explicit timeout so the Timeout handler below can actually fire.
            response = requests.post(url, json=payload, headers=headers, timeout=30)
            response.raise_for_status()
            data = response.json()

            # Extract token from response
            if "token" in data:
                token = data["token"]
            elif "access_token" in data:
                token = data["access_token"]
            else:
                raise RuntimeError(f"Unexpected response format: {data}")

            # Validate token and extract org_id via backend API
            try:
                loop = asyncio.get_event_loop()
            except RuntimeError:
                loop = asyncio.new_event_loop()
                asyncio.set_event_loop(loop)

            org_id, email = loop.run_until_complete(self._validate_token_and_extract_org_id(token))

            return token, org_id, email

        except requests.exceptions.Timeout:
            raise NetworkError("Authentication request timed out. Please check your internet connection.")
        except requests.exceptions.ConnectionError:
            raise NetworkError(
                "Cannot connect to adopt.ai authentication service. Please check your internet connection."
            )
        except requests.exceptions.HTTPError as e:
            if e.response.status_code == 401:
                error_message = (
                    "Authentication Error: Invalid credentials\n\n"
                    "Your ADOPT_CLIENT_ID or ADOPT_SECRET_KEY appears to be incorrect.\n\n"
                    "Please check:\n"
                    "1. Your .env file has the correct credentials\n"
                    "2. Get valid credentials from https://app.adopt.ai\n"
                    "3. Ensure no extra spaces in your .env file\n\n"
                    "Need help? See: https://docs.zapi.ai/authentication"
                )
                raise AuthError(error_message)
            elif e.response.status_code == 403:
                raise AuthError("Access forbidden. Please check your account permissions.")
            else:
                raise AuthError(f"Authentication failed: HTTP {e.response.status_code}")
        except requests.exceptions.RequestException as e:
            raise NetworkError(f"Failed to fetch authentication token: {e}")

    async def _validate_token_and_extract_org_id(self, token: str) -> tuple[str, str]:
        """
        Validate JWT token via backend API and extract org_id and email.

        Args:
            token: JWT token string

        Returns:
            Tuple of (org_id, user_email) extracted from the validated token

        Raises:
            RuntimeError: If token validation fails or org_id extraction fails
        """
        # Use adopt.ai backend API for token validation
        async with httpx.AsyncClient() as client:
            try:
                response = await client.post(
                    f"{BASE_URL}/v1/auth/validate-token",
                    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
                )
                response.raise_for_status()

                validation_result = response.json()

                # API returns org_id and user_email directly on success
                org_id = validation_result.get("org_id")
                email = validation_result.get("user_email", "")
                if not org_id or not isinstance(org_id, str):
                    raise RuntimeError("Invalid org_id in validation response")

                print(f"Org ID: {org_id}")
                print(f"Email: {email}")

                return org_id, email

            except httpx.HTTPStatusError as e:
                if e.response.status_code == 401:
                    raise AuthError("Token validation failed: Invalid or expired token")
                elif e.response.status_code == 403:
                    raise AuthError("Token validation failed: Access forbidden")
                else:
                    raise NetworkError(f"Backend token validation failed: HTTP {e.response.status_code}")
            except httpx.ConnectTimeout:
                raise NetworkError("Token validation timed out. Please check your internet connection.")
            except httpx.RequestError as e:
                raise NetworkError(f"Token validation request failed: {e}")
            except Exception as e:
                raise ZAPIError(f"Token validation error: {e}")

    def set_llm_key(self, provider: str, api_key: str, model_name: str) -> None:
        """
        Set LLM API key for a specific provider.

        Args:
            provider: Provider name (e.g., "anthropic")
            api_key: API key for the specified provider
            model_name: Model name to use with the provider (e.g., "claude-3-5-sonnet-20241022")

        Raises:
            ValueError: If provider or api_key format is invalid
            RuntimeError: If encryption fails
        """
        if not provider or not api_key:
            self._encrypted_llm_key = None
            self._llm_provider = None
            self._llm_model_name = None
            return

        # Validate key format for the provider
        try:
            validated_keys = validate_llm_keys({provider: api_key})
            validated_provider = list(validated_keys.keys())[0]
            validated_key = list(validated_keys.values())[0]
        except LLMKeyError as e:
            raise LLMKeyError(f"LLM key validation failed: {e}")

        # Encrypt only the API key using org_id (provider stored separately)
        try:
            self._encrypted_llm_key = self._key_encryptor.encrypt_key(validated_key)
            self._llm_provider = validated_provider
            self._llm_model_name = model_name
        except Exception as e:
            raise ZAPIError(f"Failed to encrypt LLM key: {e}")

    def get_llm_provider(self) -> Optional[str]:
        """
        Get the configured LLM provider.

        Returns:
            Provider name if configured, None otherwise
        """
        return self._llm_provider

    def get_llm_model_name(self) -> Optional[str]:
        """
        Get the configured LLM model name.

        Returns:
            Model name if configured, None otherwise
        """
        return self._llm_model_name

    def get_encrypted_llm_key(self) -> Optional[str]:
        """
        Get the encrypted LLM API key.

        Returns:
            Encrypted API key if configured, None otherwise
        """
        return self._encrypted_llm_key

    def get_decrypted_llm_key(self) -> Optional[str]:
        """
        Get the decrypted LLM API key.

        Returns:
            Decrypted API key if configured, None otherwise
        """
        try:
            if not self._encrypted_llm_key:
                return None
            return self._key_encryptor.decrypt_key(self._encrypted_llm_key)
        except Exception as e:
            print(f"Failed to decrypt LLM key: {e}")
            return None

    def has_llm_key(self) -> bool:
        """
        Check if LLM key is configured.

        Returns:
            True if LLM key is set, False otherwise
        """
        return bool(self._encrypted_llm_key)

    def get_zapi_tools(self) -> list[Callable]:
        """
        Get LangChain tools from ZAPI (created on-demand).

        Returns:
            List of LangChain tool functions
        """
        try:
            from .integrations.langchain.tool import ZAPILangchainTool

            tool_creator = ZAPILangchainTool(self)
            return tool_creator.create_tools()
        except ImportError:
            raise ImportError("LangChain integration not available. Install langchain to use this feature.")

    def launch_browser(
        self, url: str, headless: bool = True, wait_until: str = "load", **playwright_options
    ) -> BrowserSession:
        """
        Launch a browser session with network logging.

        Args:
            url: Initial URL to navigate to
            headless: Whether to run browser in headless mode (default: True)
            wait_until: When to consider navigation complete (default: "load")
                       Options: "load", "domcontentloaded", "networkidle"
            **playwright_options: Additional Playwright browser launch options.
                                 Use `args=["--disable-web-security"]` to disable
                                 web security (for testing only).

        Returns:
            BrowserSession instance ready for navigation and interaction

        Raises:
            ZAPIValidationError: If URL format is invalid
            ZAPIError: If browser launch fails

        Example:
            >>> z = ZAPI(client_id="YOUR_CLIENT_ID", secret="YOUR_SECRET")
            >>> session = z.launch_browser(url="https://app.example.com")
            >>> session.dump_logs("session.har")
            >>> session.close()

            # Disable web security (for testing only):
            >>> session = z.launch_browser(
            ...     url="https://app.example.com",
            ...     args=["--disable-web-security"]
            ... )
        """
        session = BrowserSession(auth_token=self.auth_token, headless=headless, **playwright_options)

        # Initialize the session synchronously with enhanced error handling
        try:
            loop = asyncio.get_event_loop()
        except RuntimeError:
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)

        try:
            loop.run_until_complete(session._initialize(initial_url=url, wait_until=wait_until))
        except Exception as e:
            # Close session if initialization failed
            try:
                session.close()
            except Exception:
                # Ignore cleanup errors, focus on the original error
                pass

            error_message = str(e)

            # Provide specific error messages for common browser issues
            if "Cannot navigate to invalid URL" in error_message:
                raise ZAPIValidationError(
                    f"Browser cannot navigate to URL: '{url}'. Please check the URL format and ensure it's accessible."
                )
            elif "net::ERR_NAME_NOT_RESOLVED" in error_message:
                raise NetworkError(
                    f"Domain name could not be resolved: '{url}'. "
                    "Please check the URL spelling and your internet connection."
                )
            elif "net::ERR_CONNECTION_REFUSED" in error_message:
                raise NetworkError(
                    f"Connection refused to: '{url}'. The server may be down or the URL may be incorrect."
                )
            elif "Timeout" in error_message:
                raise NetworkError(
                    f"Timeout while loading: '{url}'. "
                    "The website took too long to respond. Please try again or use a different URL."
                )
            else:
                raise ZAPIError(f"Failed to launch browser session: {error_message}")

        return session

    def upload_har(self, har_file: str):
        """
        Upload a HAR file to the ZAPI API with optional encrypted LLM keys.

        Args:
            har_file: Path to the HAR file to upload

        Returns:
            Response JSON from the API

        Raises:
            ZAPIValidationError: If file validation fails
            ZAPINetworkError: If upload fails due to network issues
            ZAPIAuthenticationError: If authentication fails
        """
        url = f"{BASE_URL}/v1/api-discovery/upload-file"

        headers = {"Authorization": f"Bearer {self.auth_token}"}

        # Prepare metadata if LLM key is configured
        metadata = {}
        if self.has_llm_key():
            metadata = {
                "byok_encrypted_llm_key": self._encrypted_llm_key,
                "byok_llm_provider": self._llm_provider,  # Provider sent in plaintext
                "byok_llm_model": self._llm_model_name,
                "byok_enabled": True,
                "is_trial_user": True,
            }

            if self.email:
                metadata["user_email"] = self.email
        else:
            metadata = {
                "byok_enabled": False,
                "is_trial_user": True,
            }

            if self.email:
                metadata["user_email"] = self.email

        # Prepare multipart form data with enhanced error handling
        try:
            with open(har_file, "rb") as f:
                files = {"file": (har_file, f, "application/json")}

                # Add metadata as form data
                data = {"metadata": json.dumps(metadata)}

                response = requests.post(url, headers=headers, files=files, data=data, timeout=60)
                # Raise inside this try block so the HTTPError handlers below
                # can map status codes (401, 413, 400) to specific exceptions.
                response.raise_for_status()

        except FileNotFoundError:
            raise ZAPIValidationError(f"HAR file not found: '{har_file}'")
        except PermissionError:
            raise ZAPIValidationError(f"Permission denied reading HAR file: '{har_file}'")
        except requests.exceptions.Timeout:
            raise NetworkError("Upload request timed out. Please try again.")
        except requests.exceptions.ConnectionError:
            raise NetworkError("Cannot connect to ZAPI upload service. Please check your internet connection.")
        except requests.exceptions.HTTPError as e:
            if e.response.status_code == 401:
                raise AuthError("Upload failed: Invalid or expired authentication token")
            elif e.response.status_code == 413:
                raise ZAPIValidationError("HAR file is too large. Please try with a smaller session.")
            elif e.response.status_code == 400:
                raise ZAPIValidationError("Invalid HAR file format. Please ensure the file was generated correctly.")
            else:
                raise NetworkError(f"Upload failed: HTTP {e.response.status_code}")
        except requests.exceptions.RequestException as e:
            raise NetworkError(f"Upload request failed: {e}")

        try:
            response.raise_for_status()
            print("File uploaded successfully")
            if self.has_llm_key():
                print(f"Included encrypted key for provider: {self.get_llm_provider()}")
            return response.json()
        except requests.exceptions.HTTPError:
            # This should be caught above, but just in case
            raise ZAPINetworkError(f"Upload failed with status code: {response.status_code}")
        except json.JSONDecodeError:
            raise ZAPIError("Invalid response format from upload service")

    def get_documented_apis(self, page: int = 1, page_size: int = 10):
        """
        Fetch the list of documented APIs with pagination support.

        Args:
            page: Page number to fetch (default: 1)
            page_size: Number of items per page (default: 10)

        Returns:
            Response JSON containing the list of documented APIs

        Raises:
            requests.exceptions.RequestException: If the request fails
        """
        url = f"{BASE_URL}/v1/tools/apis"
        headers = {"Authorization": f"Bearer {self.auth_token}"}
        params = {"page": page, "page_size": page_size}

        response = requests.get(url, headers=headers, params=params, timeout=30)
        response.raise_for_status()
        return response.json()

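`launch_browser` translates raw Playwright error strings into typed exceptions. A stdlib-only sketch of that mapping, mirroring the branches above (the zapi exception classes are re-declared locally so the snippet runs standalone):

```python
# Mirrors zapi/exceptions.py for this standalone sketch.
class ZAPIError(Exception): ...
class ZAPIValidationError(ZAPIError): ...
class ZAPINetworkError(ZAPIError): ...

def classify_browser_error(url: str, error_message: str) -> ZAPIError:
    """Map a raw Playwright error string to a typed ZAPI exception instance."""
    if "Cannot navigate to invalid URL" in error_message:
        return ZAPIValidationError(f"Browser cannot navigate to URL: '{url}'.")
    if "net::ERR_NAME_NOT_RESOLVED" in error_message:
        return ZAPINetworkError(f"Domain name could not be resolved: '{url}'.")
    if "net::ERR_CONNECTION_REFUSED" in error_message:
        return ZAPINetworkError(f"Connection refused to: '{url}'.")
    if "Timeout" in error_message:
        return ZAPINetworkError(f"Timeout while loading: '{url}'.")
    # Fall back to the generic base error for anything unrecognized.
    return ZAPIError(f"Failed to launch browser session: {error_message}")
```

Substring matching on Chromium's `net::ERR_*` codes is brittle but pragmatic: Playwright surfaces these codes verbatim in its exception messages, so the common failure modes (bad URL, DNS failure, refused connection, timeout) can be given actionable messages while everything else falls through to the base error.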

================================================
FILE: zapi/encryption.py
================================================
"""Secure encryption/decryption utilities for LLM API keys."""

import base64
import secrets

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


class LLMKeyEncryption:
    """Handles encryption/decryption of LLM API keys using org_id as context."""

    # Constants for encryption
    KEY_LENGTH = 32  # 256 bits for AES-256
    NONCE_LENGTH = 12  # 96 bits for GCM
    SALT_LENGTH = 16  # 128 bits
    TAG_LENGTH = 16  # 128 bits for GCM tag
    ITERATIONS = 100000  # PBKDF2 iterations

    def __init__(self, org_id: str):
        """
        Initialize encryption handler with organization ID.

        Args:
            org_id: Organization ID used as encryption context

        Raises:
            ValueError: If org_id is empty or invalid
        """
        if not org_id or not org_id.strip():
            raise ValueError("org_id cannot be empty")

        self.org_id = org_id.strip()

    def _derive_key(self, salt: bytes) -> bytes:
        """
        Derive encryption key from org_id using PBKDF2.

        Args:
            salt: Random salt for key derivation

        Returns:
            Derived encryption key
        """
        kdf = PBKDF2HMAC(
            algorithm=hashes.SHA256(),
            length=self.KEY_LENGTH,
            salt=salt,
            iterations=self.ITERATIONS,
            backend=default_backend(),
        )
        return kdf.derive(self.org_id.encode("utf-8"))

    def encrypt_key(self, api_key: str) -> str:
        """
        Encrypt a single LLM API key using org_id as context.

        Args:
            api_key: API key to encrypt

        Returns:
            Base64-encoded encrypted data with embedded salt and nonce

        Raises:
            ValueError: If encryption fails
        """
        if not api_key or not api_key.strip():
            raise ValueError("api_key cannot be empty")

        try:
            # Generate random salt and nonce
            salt = secrets.token_bytes(self.SALT_LENGTH)
            nonce = secrets.token_bytes(self.NONCE_LENGTH)

            # Derive encryption key
            key = self._derive_key(salt)

            # Only encrypt the API key itself (no provider needed)
            plaintext = api_key.strip().encode("utf-8")

            # Encrypt using AES-256-GCM
            cipher = Cipher(algorithms.AES(key), modes.GCM(nonce), backend=default_backend())
            encryptor = cipher.encryptor()
            ciphertext = encryptor.update(plaintext) + encryptor.finalize()

            # Package: salt + nonce + ciphertext + tag
            encrypted_data = salt + nonce + ciphertext + encryptor.tag

            # Return base64-encoded result
            return base64.b64encode(encrypted_data).decode("ascii")

        except Exception as e:
            raise ValueError(f"Failed to encrypt LLM key: {e}")
        finally:
            # Best-effort cleanup: rebind the local names so the sensitive
            # values are no longer reachable from this frame. Python cannot
            # securely wipe immutable bytes objects from memory.
            if "key" in locals():
                key = b"\x00" * len(key)
            if "plaintext" in locals():
                plaintext = b"\x00" * len(plaintext)

    def decrypt_key(self, encrypted_data: str) -> str:
        """
        Decrypt a single LLM API key from encrypted data.

        Args:
            encrypted_data: Base64-encoded encrypted data

        Returns:
            Decrypted API key string

        Raises:
            ValueError: If decryption fails or data is corrupted
        """
        if not encrypted_data or not encrypted_data.strip():
            raise ValueError("encrypted_data cannot be empty")

        key = None
        plaintext = None

        try:
            # Decode base64 data
            try:
                data = base64.b64decode(encrypted_data.encode("ascii"))
            except Exception as e:
                raise ValueError(f"Invalid base64 encoding: {e}")

            # Validate minimum length
            min_length = self.SALT_LENGTH + self.NONCE_LENGTH + self.TAG_LENGTH + 1
            if len(data) < min_length:
                raise ValueError("Encrypted data is too short")

            # Extract components
            salt = data[: self.SALT_LENGTH]
            nonce = data[self.SALT_LENGTH : self.SALT_LENGTH + self.NONCE_LENGTH]
            tag_start = len(data) - self.TAG_LENGTH
            ciphertext = data[self.SALT_LENGTH + self.NONCE_LENGTH : tag_start]
            tag = data[tag_start:]

            # Derive decryption key
            key = self._derive_key(salt)

            # Decrypt using AES-256-GCM
            cipher = Cipher(algorithms.AES(key), modes.GCM(nonce, tag), backend=default_backend())
            decryptor = cipher.decryptor()
            plaintext = decryptor.update(ciphertext) + decryptor.finalize()

            # Return decrypted API key directly
            return plaintext.decode("utf-8")

        except Exception as e:
            if "Invalid base64" in str(e):
                raise
            raise ValueError(f"Failed to decrypt LLM key: {e}")
        finally:
            # Best-effort cleanup: rebind the local names so the sensitive
            # values are no longer reachable from this frame. Python cannot
            # securely wipe immutable bytes objects from memory.
            if key is not None:
                key = b"\x00" * len(key)
            if plaintext is not None:
                plaintext = b"\x00" * len(plaintext)


def encrypt_llm_key(org_id: str, api_key: str) -> str:
    """
    Convenience function to encrypt a single LLM key.

    Args:
        org_id: Organization ID for encryption context
        api_key: API key to encrypt

    Returns:
        Base64-encoded encrypted data
    """
    encryptor = LLMKeyEncryption(org_id)
    return encryptor.encrypt_key(api_key)


def decrypt_llm_key(org_id: str, encrypted_data: str) -> str:
    """
    Convenience function to decrypt a single LLM key.

    Args:
        org_id: Organization ID for decryption context
        encrypted_data: Base64-encoded encrypted data

    Returns:
        Decrypted API key string
    """
    decryptor = LLMKeyEncryption(org_id)
    return decryptor.decrypt_key(encrypted_data)


def secure_compare_key(provider1: str, key1: str, provider2: str, key2: str) -> bool:
    """
    Securely compare two provider-key pairs without timing attacks.

    Args:
        provider1: First provider name
        key1: First API key
        provider2: Second provider name
        key2: Second API key

    Returns:
        True if both provider and key match, False otherwise
    """
    # Use secrets.compare_digest for timing-safe comparison
    provider_match = secrets.compare_digest(provider1, provider2)
    key_match = secrets.compare_digest(key1, key2)

    return provider_match and key_match

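The encrypted payload layout (`salt + nonce + ciphertext + tag`) and the slicing in `decrypt_key` can be exercised without the `cryptography` dependency. A stdlib-only sketch of just the packing and unpacking, using the same length constants:

```python
# Byte-layout sketch of the payload format used by LLMKeyEncryption.
# Only the packing/slicing is shown; the actual AES-256-GCM step is omitted.
SALT_LENGTH = 16   # 128-bit PBKDF2 salt
NONCE_LENGTH = 12  # 96-bit GCM nonce
TAG_LENGTH = 16    # 128-bit GCM authentication tag

def pack(salt: bytes, nonce: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    """Concatenate components in the on-the-wire order: salt + nonce + ct + tag."""
    return salt + nonce + ciphertext + tag

def unpack(data: bytes) -> tuple[bytes, bytes, bytes, bytes]:
    """Recover (salt, nonce, ciphertext, tag); mirrors decrypt_key's slicing."""
    if len(data) < SALT_LENGTH + NONCE_LENGTH + TAG_LENGTH + 1:
        raise ValueError("Encrypted data is too short")
    salt = data[:SALT_LENGTH]
    nonce = data[SALT_LENGTH : SALT_LENGTH + NONCE_LENGTH]
    tag_start = len(data) - TAG_LENGTH
    ciphertext = data[SALT_LENGTH + NONCE_LENGTH : tag_start]
    tag = data[tag_start:]
    return salt, nonce, ciphertext, tag
```

Because salt and nonce are fixed-length prefixes and the tag is a fixed-length suffix, the ciphertext can be variable-length without any embedded length fields, which keeps the base64 blob compact.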

================================================
FILE: zapi/exceptions.py
================================================
"""Custom exception classes for ZAPI."""


class ZAPIError(Exception):
    """Base exception class for ZAPI errors."""

    pass


class ZAPIAuthenticationError(ZAPIError):
    """Authentication-related errors."""

    pass


class ZAPIValidationError(ZAPIError):
    """Input validation errors."""

    pass


class ZAPINetworkError(ZAPIError):
    """Network-related errors."""

    pass


# Internal aliases for consistency
AuthError = ZAPIAuthenticationError
NetworkError = ZAPINetworkError
LLMKeyError = ZAPIValidationError

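Since the aliases bind the same class objects, call sites can raise one name and catch the other, and catching the `ZAPIError` base covers everything. A minimal mirror of the hierarchy showing this:

```python
# Local mirror of zapi/exceptions.py demonstrating the alias behavior.
class ZAPIError(Exception):
    """Base exception class for ZAPI errors."""

class ZAPIAuthenticationError(ZAPIError):
    """Authentication-related errors."""

# Alias: the exact same class object under a shorter internal name.
AuthError = ZAPIAuthenticationError

def fail_auth():
    raise AuthError("bad credentials")
```

This keeps the public exception names descriptive (`ZAPIAuthenticationError`) while internal modules use the shorter spelling, with no risk of the two diverging.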

================================================
FILE: zapi/har_processing.py
================================================
"""HAR file processing and analysis module."""

import json
import os
import re
from dataclasses import dataclass
from typing import Any, Optional
from urllib.parse import urlparse


@dataclass
class HarStats:
    """Statistics for a HAR file."""

    total_entries: int
    valid_entries: int
    skipped_entries: int
    unique_domains: int
    estimated_cost_usd: float
    estimated_time_minutes: float
    skipped_by_reason: dict[str, int]
    domains: list[str]


class HarProcessingError(Exception):
    """Base exception for HAR processing errors."""

    pass


class HarProcessor:
    """
    Class to preprocess and analyze HAR files.

    Provides functionality to load HAR files, extract entries, and generate
    statistics including cost and time estimates for processing.
    """

    # Cost per entry in USD
    COST_PER_ENTRY = 0.02

    # Time per entry in minutes (24 seconds = 0.4 minutes)
    TIME_PER_ENTRY_MINUTES = 24 / 60

    # Filter patterns for static assets and non-API content
    DENY_EXTENSIONS = re.compile(
        r"\.(js|css|png|jpe?g|gif|svg|webp|ico|bmp|avif|mp4|webm|mp3|wav|woff2?|ttf|otf|map|jpf)(\?.*)?$",
        re.IGNORECASE,
    )

    # MIME types to exclude
    DENY_MIMETYPES = {
        "text/css",
        "text/javascript",
        "application/javascript",
        "application/x-javascript",
        "image/jpeg",
        "image/png",
        "image/gif",
        "image/webp",
        "image/svg+xml",
        "image/x-icon",
        "font/woff",
        "font/woff2",
        "font/ttf",
        "font/otf",
        "audio/mpeg",
        "audio/wav",
        "video/mp4",
        "video/webm",
        "application/pdf",
        "application/font-woff",
    }

    def __init__(self, har_file_path: str):
        """
        Initialize HAR processor with a file path.

        Args:
            har_file_path: Path to the HAR file to process

        Raises:
            HarProcessingError: If file doesn't exist or is not readable
        """
        self.har_file_path = har_file_path
        self.har_data = None
        self.entries = []
        self.skipped_entries_by_reason: dict[str, list[dict]] = {
            "invalid_entry_format": [],
            "non_http_scheme": [],
            "missing_url": [],
            "parsing_error": [],
            "denied_extension": [],
            "denied_mime_type": [],
        }
        self.skipped_counters: dict[str, int] = {
            "invalid_entry_format": 0,
            "non_http_scheme": 0,
            "missing_url": 0,
            "parsing_error": 0,
            "denied_extension": 0,
            "denied_mime_type": 0,
        }
        self.skipped_entries = 0
        self.domains_found = set()

        # Validate file exists and is readable
        if not os.path.exists(har_file_path):
            raise HarProcessingError(f"HAR file not found: {har_file_path}")

        if not os.access(har_file_path, os.R_OK):
            raise HarProcessingError(f"HAR file is not readable: {har_file_path}")

    def load_and_process(self) -> HarStats:
        """
        Load HAR file and process all entries to generate statistics.

        Returns:
            HarStats object containing comprehensive statistics

        Raises:
            HarProcessingError: If file processing fails
        """
        try:
            # Load HAR file content
            with open(self.har_file_path, encoding="utf-8", errors="replace") as f:
                har_file_content = f.read()

            # Parse JSON
            try:
                self.har_data = json.loads(har_file_content)
            except json.JSONDecodeError as e:
                error_message = (
                    "HAR File Error: Invalid JSON format.\n\n"
                    f"The file '{self.har_file_path}' could not be parsed as valid JSON.\n"
                    f"Error details: {e}\n\n"
                    "Please check for:\n"
                    "1. File corruption during download or transfer.\n"
                    "2. Incomplete file content.\n"
                    "3. Manual edits that broke the JSON structure."
                )
                raise HarProcessingError(error_message) from e

            # Validate HAR structure
            if (
                not isinstance(self.har_data, dict)
                or "log" not in self.har_data
                or "entries" not in self.har_data["log"]
            ):
                error_message = (
                    "HAR File Error: Invalid HAR structure.\n\n"
                    f"The file '{self.har_file_path}' does not follow the expected HAR format.\n"
                    "It must contain a `log` object with an `entries` array.\n\n"
                    "Please ensure the file was generated by a compatible tool."
                )
                raise HarProcessingError(error_message)

            entries = self.har_data["log"]["entries"]
            if not isinstance(entries, list):
                raise HarProcessingError("HAR entries must be a list")

            # Process each entry
            valid_entries = 0
            for entry in entries:
                if self._process_entry(entry):
                    valid_entries += 1

            # Generate statistics
            total_entries = len(entries)

            return HarStats(
                total_entries=total_entries,
                valid_entries=valid_entries,
                skipped_entries=self.skipped_entries,
                unique_domains=len(self.domains_found),
                estimated_cost_usd=valid_entries * self.COST_PER_ENTRY,
                estimated_time_minutes=valid_entries * self.TIME_PER_ENTRY_MINUTES,
                skipped_by_reason=dict(self.skipped_counters),
                domains=sorted(self.domains_found),
            )

        except HarProcessingError:
            # Re-raise our own structured errors without re-wrapping them
            raise
        except FileNotFoundError:
            raise HarProcessingError(f"HAR file not found: {self.har_file_path}")
        except PermissionError:
            raise HarProcessingError(f"Permission denied reading HAR file: {self.har_file_path}")
        except Exception as e:
            raise HarProcessingError(f"Error processing HAR file: {e}") from e

    def _process_entry(self, entry: dict[str, Any]) -> bool:
        """
        Process a single HAR entry and extract relevant information.

        Args:
            entry: HAR entry dictionary

        Returns:
            True if entry is valid and processed, False if skipped
        """
        try:
            # Basic validation - check for required fields
            if "request" not in entry or "response" not in entry:
                self.skipped_entries_by_reason["invalid_entry_format"].append(entry)
                self.skipped_counters["invalid_entry_format"] += 1
                self.skipped_entries += 1
                return False

            # Extract URL
            url = self._extract_url_from_entry(entry)
            if not url:
                self.skipped_entries_by_reason["missing_url"].append(entry)
                self.skipped_counters["missing_url"] += 1
                self.skipped_entries += 1
                return False

            # Validate HTTP/HTTPS scheme
            if not url.lower().startswith(("http://", "https://")):
                self.skipped_entries_by_reason["non_http_scheme"].append(entry)
                self.skipped_counters["non_http_scheme"] += 1
                self.skipped_entries += 1
                return False

            # Filter by file extensions - exclude static assets
            try:
                parsed_url = urlparse(url)
                path = parsed_url.path
                if self.DENY_EXTENSIONS.search(path):
                    self.skipped_entries_by_reason["denied_extension"].append(entry)
                    self.skipped_counters["denied_extension"] += 1
                    self.skipped_entries += 1
                    return False
            except Exception:
                # URL parsing failed, but we'll continue processing
                pass

            # Filter by response MIME types
            response_content = self._extract_response_content(entry)
            mime_type = response_content.get("mimeType", "").split(";")[0]
            if mime_type in self.DENY_MIMETYPES:
                self.skipped_entries_by_reason["denied_mime_type"].append(entry)
                self.skipped_counters["denied_mime_type"] += 1
                self.skipped_entries += 1
                return False

            # Extract domain information
            try:
                parsed_url = urlparse(url)
                domain = parsed_url.netloc
                if domain:
                    self.domains_found.add(domain)
            except Exception:
                # URL parsing failed, but we'll still count it as valid
                pass

            # Store processed entry
            self.entries.append(entry)
            return True

        except Exception:
            self.skipped_entries_by_reason["parsing_error"].append(entry)
            self.skipped_counters["parsing_error"] += 1
            self.skipped_entries += 1
            return False

    def _extract_url_from_entry(self, entry: dict[str, Any]) -> str:
        """Extract URL from an entry efficiently, returning empty string if not found."""
        try:
            return entry.get("request", {}).get("url", "")
        except (KeyError, AttributeError):
            return ""

    def _extract_response_content(self, entry: dict[str, Any]) -> dict[str, Any]:
        """Extract response content from an entry efficiently, returning empty dict if not found."""
        try:
            return entry.get("response", {}).get("content", {})
        except (KeyError, AttributeError):
            return {}

    def save_filtered_har(self, output_path: str) -> str:
        """
        Save a new HAR file containing only the valid API-relevant entries.

        Args:
            output_path: Path where to save the filtered HAR file

        Returns:
            Path to the saved filtered HAR file

        Raises:
            HarProcessingError: If saving fails or no data has been processed
        """
        if self.har_data is None:
            raise HarProcessingError("No HAR data loaded. Call load_and_process() first.")

        if not self.entries:
            raise HarProcessingError("No valid entries found to save.")

        try:
            # Create a copy of the original HAR structure
            filtered_har = {
                "log": {
                    "version": self.har_data["log"].get("version", "1.2"),
                    "creator": self.har_data["log"].get("creator", {"name": "ZAPI HarProcessor", "version": "1.0.0"}),
                    "browser": self.har_data["log"].get("browser", {}),
                    "pages": self.har_data["log"].get("pages", []),
                    "entries": self.entries,  # Only include the filtered valid entries
                }
            }

            # Add metadata about filtering
            if "creator" not in filtered_har["log"]:
                filtered_har["log"]["creator"] = {}

            filtered_har["log"]["creator"]["name"] = "ZAPI HarProcessor (Filtered)"
            filtered_har["log"]["creator"]["comment"] = (
                f"Filtered HAR file - {len(self.entries)} API entries from {len(self.har_data['log']['entries'])} total entries"
            )

            # Save to file
            with open(output_path, "w", encoding="utf-8") as f:
                json.dump(filtered_har, f, indent=2, ensure_ascii=False)

            return output_path

        except OSError as e:
            raise HarProcessingError(f"Failed to save filtered HAR file: {e}")
        except Exception as e:
            raise HarProcessingError(f"Error creating filtered HAR file: {e}")

    def get_summary_report(self, stats: HarStats) -> str:
        """
        Generate a formatted summary report of the HAR analysis.

        Args:
            stats: HarStats object from load_and_process()

        Returns:
            Formatted string report
        """
        report_lines = [
            "📊 HAR File Analysis Summary",
            "=" * 50,
            f"📁 File: {os.path.basename(self.har_file_path)}",
            f"📋 Total Entries: {stats.total_entries:,}",
            f"✅ Valid Entries: {stats.valid_entries:,}",
            f"⚠️  Skipped Entries: {stats.skipped_entries:,}",
            f"🌐 Unique Domains: {stats.unique_domains:,}",
            "",
            "💰 Cost Analysis (API entries only):",
            f"   • Rate: ${self.COST_PER_ENTRY:.3f} per API entry",
            f"   • Estimated Cost: ${stats.estimated_cost_usd:.2f}",
            "",
            "⏱️  Time Estimate (API entries only):",
            f"   • Rate: {self.TIME_PER_ENTRY_MINUTES:.2f} minutes per API entry",
            f"   • Estimated Time: {stats.estimated_time_minutes:.1f} minutes "
            f"({stats.estimated_time_minutes / 60:.1f} hours)",
        ]

        # Add skipped entry breakdown if there are any
        if stats.skipped_entries > 0:
            report_lines.extend(["", "⚠️  Skipped Entry Breakdown:"])
            for reason, count in stats.skipped_by_reason.items():
                if count > 0:
                    reason_display = reason.replace("_", " ").title()
                    report_lines.append(f"   • {reason_display}: {count:,}")

        # Add top domains if there are any
        if stats.domains:
            report_lines.extend(["", "🌐 Top Domains Found:"])
            # Show first 10 domains
            for domain in stats.domains[:10]:
                report_lines.append(f"   • {domain}")

            if len(stats.domains) > 10:
                report_lines.append(f"   • ... and {len(stats.domains) - 10} more")

        return "\n".join(report_lines)


def analyze_har_file(
    har_file_path: str, save_filtered: bool = False, filtered_output_path: Optional[str] = None
) -> tuple[HarStats, str, Optional[str]]:
    """
    Convenience function to analyze a HAR file and optionally save filtered version.

    Args:
        har_file_path: Path to the HAR file
        save_filtered: Whether to save a filtered HAR file with only API entries
        filtered_output_path: Path for filtered HAR file (auto-generated if None)

    Returns:
        Tuple of (HarStats, formatted_report_string, filtered_file_path_or_none)

    Raises:
        HarProcessingError: If processing fails
    """
    processor = HarProcessor(har_file_path)
    stats = processor.load_and_process()
    report = processor.get_summary_report(stats)

    filtered_file_path = None
    if save_filtered and stats.valid_entries > 0:
        if filtered_output_path is None:
            # Auto-generate filtered file name
            base_name = os.path.splitext(har_file_path)[0]
            filtered_output_path = f"{base_name}_filtered.har"

        filtered_file_path = processor.save_filtered_har(filtered_output_path)

    return stats, report, filtered_file_path
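
The filtering pipeline above (scheme check, then static-asset extension filter) can be exercised standalone. The sketch below is a simplified stand-in, not an import from `zapi`: `filter_api_entries` and the shortened `DENY_EXTENSIONS` pattern are hypothetical condensations of what `HarProcessor._process_entry` does per entry.

```python
import re
from urllib.parse import urlparse

# Simplified copy of HarProcessor's extension filter (assumption: a short
# stand-in for the full DENY_EXTENSIONS pattern, not imported from zapi).
DENY_EXTENSIONS = re.compile(r"\.(js|css|png|jpe?g|gif|svg)(\?.*)?$", re.IGNORECASE)


def filter_api_entries(har: dict) -> list[dict]:
    """Keep entries whose URL is http(s) and not a static asset."""
    kept = []
    for entry in har.get("log", {}).get("entries", []):
        url = entry.get("request", {}).get("url", "")
        if not url.lower().startswith(("http://", "https://")):
            continue  # skip chrome-extension://, data:, etc.
        if DENY_EXTENSIONS.search(urlparse(url).path):
            continue  # skip static assets by extension
        kept.append(entry)
    return kept


har = {"log": {"entries": [
    {"request": {"url": "https://api.example.com/v1/users"}, "response": {}},
    {"request": {"url": "https://cdn.example.com/app.js"}, "response": {}},
    {"request": {"url": "chrome-extension://abc/popup.html"}, "response": {}},
]}}
print(len(filter_api_entries(har)))  # 1: only the /v1/users entry survives
```

The order matters: scheme filtering runs before URL parsing, mirroring how `_process_entry` rejects non-HTTP entries before attempting `urlparse`.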


================================================
FILE: zapi/integrations/langchain/tool.py
================================================
"""
ZAPI Langchain Tool - Simple & Clean

Basic conversion of ZAPI documented APIs into Langchain tools.
"""

import os
from typing import Any, Callable, Optional

import requests
from langchain_core.tools import tool

from ...core import ZAPI
from ...utils import load_security_headers


class ZAPILangchainTool:
    """
    Simple tool provider to convert ZAPI APIs into Langchain tools.

    Supports loading security headers from a JSON file for API authentication.
    The headers file should contain a 'headers' object with key-value pairs
    that will be added to all API requests.

    Example headers file (api-headers.json):
    {
        "headers": {
            "Authorization": "Bearer your-token",
            "X-API-Key": "your-api-key",
            "X-Client-ID": "your-client-id"
        }
    }
    """

    def __init__(self, zapi_instance: ZAPI, headers_file: Optional[str] = None):
        self.zapi = zapi_instance
        self.security_headers = load_security_headers(headers_file)

    def create_tools(self) -> list[Callable]:
        """Create Langchain tools from documented APIs."""
        # Get APIs from ZAPI
        response = self.zapi.get_documented_apis(page_size=50)
        apis = response.get("items", [])

        # Create tools
        tools = []
        for api_data in apis:
            try:
                tool_func = self._create_tool(api_data)
                tools.append(tool_func)
            except Exception as e:
                print(f"Error creating tool: {e}")
                continue  # Skip failed tools

        return tools

    def _create_tool(self, api_data: dict[str, Any]) -> Callable:
        """Create a tool from API data."""
        api_id = api_data.get("id", "")
        api_name = api_data.get("title", f"api_{api_id}")
        description = api_data.get("description", f"{api_data.get('api_type', 'GET')} {api_data.get('path', '/')}")

        @tool(description=description)
        def api_tool(**kwargs) -> dict[str, Any]:
            """Dynamically created ZAPI tool for API calls."""
            return self._call_api(api_id, api_data, kwargs)

        # Set the tool name (clean it for use as function name)
        clean_name = api_name.lower().replace(" ", "_").replace("-", "_").replace("/", "_")
        # Remove any non-alphanumeric characters except underscores
        clean_name = "".join(c if c.isalnum() or c == "_" else "_" for c in clean_name)
        # Ensure it starts with a letter or underscore
        if clean_name and not (clean_name[0].isalpha() or clean_name[0] == "_"):
            clean_name = f"api_{clean_name}"

        api_tool.name = clean_name or f"api_{api_id}"

        return api_tool

    def _call_api(self, api_id: str, api_data: dict[str, Any], params: dict[str, Any]) -> dict[str, Any]:
        """Make the actual API call with comprehensive error handling."""
        import logging

        method = api_data.get("api_type", "GET")  # Use 'api_type' instead of 'method'
        path = api_data.get("path", "/")
        base_url = api_data.get("base_url", "") or os.getenv("YOUR_API_BASE_URL", "")

        # Validate base_url
        if not base_url:
            return {
                "error": True,
                "error_type": "configuration_error",
                "message": "No base URL configured for API call",
                "details": "Either set base_url in API configuration or YOUR_API_BASE_URL environment variable",
                "api_id": api_id,
                "path": path,
            }

        # Build URL
        url = f"{base_url.rstrip('/')}{path}"

        # Replace path parameters
        for key, value in params.items():
            url = url.replace(f"{{{key}}}", str(value))

        # Prepare request
        headers = {}
        data = None

        # Add security headers from loaded configuration
        headers.update(self.security_headers)

        # Set data for POST/PUT
        if method.upper() in ["POST", "PUT"]:
            data = {k: v for k, v in params.items() if f"{{{k}}}" not in api_data.get("path", "")}

        # Log request details
        logging.info(f"API Call - {method.upper()} {url}")
        if data:
            logging.debug(f"Request data: {data}")

        # Make request
        response = None
        try:
            response = requests.request(
                method=method, url=url, headers=headers, json=data if data else None, timeout=30
            )

            # Log response details
            logging.info(f"API Response - Status: {response.status_code}")

            # Handle successful responses (2xx)
            if 200 <= response.status_code < 300:
                try:
                    return response.json() if response.content else {"status": "success"}
                except ValueError as e:
                    # JSON parsing failed but status was successful
                    logging.warning(f"JSON parsing failed for successful response: {str(e)}")
                    return {
                        "status": "success",
                        "raw_response": response.text,
                        "content_type": response.headers.get("content-type", "unknown"),
                        "warning": f"Response not valid JSON: {str(e)}",
                    }

            # Handle client errors (4xx) and server errors (5xx)
            else:
                error_response = {
                    "error": True,
                    "status_code": response.status_code,
                    "status_text": response.reason,
                    "url": url,
                    "method": method.upper(),
                }

                # Try to get JSON error response
                try:
                    error_response["response"] = response.json()
                except ValueError:
                    # Not JSON, capture raw text
                    error_response["raw_response"] = response.text

                # Add response headers that might be useful
                useful_headers = ["content-type", "www-authenticate", "retry-after", "x-ratelimit-remaining"]
                response_headers = {k: v for k, v in response.headers.items() if k.lower() in useful_headers}
                if response_headers:
                    error_response["headers"] = response_headers

                logging.error(f"API Error - {response.status_code}: {error_response}")
                return error_response

        except requests.exceptions.Timeout as e:
            error_response = {
                "error": True,
                "error_type": "timeout",
                "message": "Request timed out after 30 seconds",
                "url": url,
                "method": method.upper(),
                "details": str(e),
            }
            logging.error(f"API Timeout: {error_response}")
            return error_response

        except requests.exceptions.ConnectionError as e:
            error_response = {
                "error": True,
                "error_type": "connection_error",
                "message": "Failed to connect to the API endpoint",
                "url": url,
                "method": method.upper(),
                "details": str(e),
            }
            logging.error(f"API Connection Error: {error_response}")
            return error_response

        except requests.exceptions.HTTPError as e:
            error_response = {
                "error": True,
                "error_type": "http_error",
                "message": "HTTP error occurred",
                "url": url,
                "method": method.upper(),
                "details": str(e),
            }
            if response is not None:  # a 4xx/5xx Response is falsy, so compare against None
                error_response["status_code"] = response.status_code
                error_response["status_text"] = response.reason
            logging.error(f"API HTTP Error: {error_response}")
            return error_response

        except requests.exceptions.RequestException as e:
            error_response = {
                "error": True,
                "error_type": "request_error",
                "message": "Request failed",
                "url": url,
                "method": method.upper(),
                "details": str(e),
            }
            logging.error(f"API Request Error: {error_response}")
            return error_response

        except Exception as e:
            error_response = {
                "error": True,
                "error_type": "unexpected_error",
                "message": "An unexpected error occurred",
                "url": url,
                "method": method.upper(),
                "details": str(e),
                "exception_type": type(e).__name__,
            }
            logging.error(f"API Unexpected Error: {error_response}")
            return error_response
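
Langchain tool names must look like identifiers, which is what the sanitization steps in `_create_tool` ensure. The function below is a hypothetical standalone copy of those steps for illustration, not part of the package:

```python
def clean_tool_name(api_name: str, api_id: str) -> str:
    """Mirror of the name-cleaning steps in _create_tool (assumption: standalone copy)."""
    # Lowercase and replace common separators with underscores
    name = api_name.lower().replace(" ", "_").replace("-", "_").replace("/", "_")
    # Drop any remaining non-alphanumeric characters
    name = "".join(c if c.isalnum() or c == "_" else "_" for c in name)
    # Identifiers may not start with a digit
    if name and not (name[0].isalpha() or name[0] == "_"):
        name = f"api_{name}"
    return name or f"api_{api_id}"


print(clean_tool_name("Get User / By-ID", "42"))  # get_user___by_id
print(clean_tool_name("123 List", "7"))           # api_123_list
```

Note the fallback: an empty or fully-stripped title degrades to `api_<id>`, so every tool still gets a usable name.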


================================================
FILE: zapi/providers.py
================================================
"""LLM Provider enums and validation utilities.

ZAPI supports a generic key-value approach for LLM API keys, allowing developers
to bring their own keys for any provider. We support 4 main providers with
full validation and optimized integration.

Currently supported providers:
- Anthropic, OpenAI, Google, Groq (main supported providers)
"""

from enum import Enum

from .exceptions import LLMKeyError


class LLMProvider(Enum):
    """
    Supported LLM providers for API key management.

    ZAPI supports 4 main LLM providers with optimized integration and validation.
    Each provider has specific API key format validation.
    """

    # Main supported providers
    ANTHROPIC = "anthropic"
    OPENAI = "openai"
    GOOGLE = "google"
    GROQ = "groq"

    @classmethod
    def get_all_providers(cls) -> set[str]:
        """Get all supported provider names."""
        return {provider.value for provider in cls}

    @classmethod
    def is_valid_provider(cls, provider: str) -> bool:
        """Check if a provider name is valid."""
        return provider.lower() in cls.get_all_providers()


def validate_llm_keys(llm_keys: dict[str, str]) -> dict[str, str]:
    """
    Validate LLM keys dictionary for supported providers.

    Supports the 4 main LLM providers with specific validation for each.

    Args:
        llm_keys: Dictionary mapping provider names to API keys
                 Example: {"anthropic": "sk-ant-...", "openai": "sk-...", "groq": "gsk_..."}

    Returns:
        Validated and normalized keys dictionary

    Raises:
        LLMKeyError: If keys format is invalid or providers are unsupported
    """
    if not isinstance(llm_keys, dict):
        raise LLMKeyError("llm_keys must be a dictionary")

    if not llm_keys:
        raise LLMKeyError("llm_keys cannot be empty")

    validated_keys = {}

    supported_providers = ", ".join(LLMProvider.get_all_providers())

    for provider, api_key in llm_keys.items():
        # Normalize provider name to lowercase
        provider_normalized = provider.lower()

        # Validate provider is supported
        if not LLMProvider.is_valid_provider(provider_normalized):
            raise LLMKeyError(f"Unsupported LLM provider: '{provider}'. Supported providers: {supported_providers}")

        # Validate API key format
        if not isinstance(api_key, str) or not api_key.strip():
            raise LLMKeyError(f"API key for provider '{provider}' must be a non-empty string")

        _validate_key_format(provider_normalized, api_key.strip())

        validated_keys[provider_normalized] = api_key.strip()

    return validated_keys


def _validate_key_format(provider: str, api_key: str) -> None:
    """
    Validate API key format for specific providers.

    All 4 main providers receive specific validation tailored to their API key formats.

    Args:
        provider: Provider name (normalized to lowercase)
        api_key: API key to validate

    Raises:
        LLMKeyError: If key format is invalid for the provider
    """
    # Main supported providers - specific validation for each
    if provider == LLMProvider.ANTHROPIC.value:
        if not api_key.startswith("sk-ant-"):
            raise LLMKeyError("Anthropic API keys must start with 'sk-ant-'")
        if len(api_key) < 20:
            raise LLMKeyError("Anthropic API keys must be at least 20 characters long")

    elif provider == LLMProvider.OPENAI.value:
        if not api_key.startswith("sk-"):
            raise LLMKeyError("OpenAI API keys must start with 'sk-'")
        if len(api_key) < 20:
            raise LLMKeyError("OpenAI API keys must be at least 20 characters long")

    elif provider == LLMProvider.GOOGLE.value:
        # Google API keys are typically 39 characters and alphanumeric + hyphens
        if len(api_key) < 20:
            raise LLMKeyError("Google API keys must be at least 20 characters long")

    elif provider == LLMProvider.GROQ.value:
        if not api_key.startswith("gsk_"):
            raise LLMKeyError("Groq API keys must start with 'gsk_'")
        if len(api_key) < 20:
            raise LLMKeyError("Groq API keys must be at least 20 characters long")

    # Generic validation for all providers
    if len(api_key) < 10:
        raise LLMKeyError(f"API key for {provider} is too short (minimum 10 characters)")

    # Additional validation: ensure key contains only valid characters
    if not api_key.replace("-", "").replace("_", "").replace(".", "").isalnum():
        raise LLMKeyError(f"API key for {provider} contains invalid characters")


def get_provider_display_name(provider: str) -> str:
    """
    Get human-readable display name for provider.

    Returns display names for the 4 main supported providers.

    Args:
        provider: Provider name (normalized)

    Returns:
        Display name for the provider
    """
    display_names = {
        # Main supported providers
        LLMProvider.ANTHROPIC.value: "Anthropic",
        LLMProvider.OPENAI.value: "OpenAI",
        LLMProvider.GOOGLE.value: "Google",
        LLMProvider.GROQ.value: "Groq",
    }
    return display_names.get(provider, provider.title())


def is_primary_provider(provider: str) -> bool:
    """
    Check whether the given provider is the primary supported provider (Anthropic).

    Args:
        provider: Provider name (normalized)

    Returns:
        True if provider is primary supported (Anthropic), False otherwise
    """
    return provider.lower() == LLMProvider.ANTHROPIC.value


def get_supported_providers_info() -> dict[str, dict[str, str]]:
    """
    Get information about the 4 main supported providers.

    Returns:
        Dictionary with provider info including support level
    """
    return {
        "anthropic": {
            "display_name": "Anthropic",
            "support_level": "primary",
            "description": "Primary supported provider with complete validation",
        },
        "openai": {
            "display_name": "OpenAI",
            "support_level": "main",
            "description": "Fully supported with complete validation",
        },
        "google": {
            "display_name": "Google",
            "support_level": "main",
            "description": "Fully supported with complete validation",
        },
        "groq": {
            "display_name": "Groq",
            "support_level": "main",
            "description": "Fully supported with complete validation",
        },
    }
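
The per-provider key-format rules in `_validate_key_format` reduce to a prefix check, a length check, and a character-set check. The sketch below is a hypothetical condensed version of those rules (returning a bool instead of raising `LLMKeyError`), shown for illustration:

```python
# Prefix rules per provider (assumption: condensed from _validate_key_format;
# Google keys have no enforced prefix, only the length/charset checks).
PREFIXES = {"anthropic": "sk-ant-", "openai": "sk-", "groq": "gsk_"}


def check_key(provider: str, api_key: str) -> bool:
    """Return True if the key passes the format rules for its provider."""
    provider = provider.lower()
    prefix = PREFIXES.get(provider)
    if prefix and not api_key.startswith(prefix):
        return False
    if len(api_key) < 20:  # all providers enforce a minimum length
        return False
    # Only alphanumerics plus '-', '_', '.' are allowed
    return api_key.replace("-", "").replace("_", "").replace(".", "").isalnum()


print(check_key("groq", "gsk_" + "a" * 20))  # True
print(check_key("openai", "sk-short"))       # False: too short
```

Unlike this sketch, the library raises `LLMKeyError` with a provider-specific message on the first failed check, which gives callers actionable diagnostics.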


================================================
FILE: zapi/session.py
================================================
"""BrowserSession implementation with Playwright integration."""

import asyncio
from pathlib import Path
from typing import Optional, Union

from playwright.async_api import (
    Browser,
    BrowserContext,
    Page,
    Playwright,
    async_playwright,
)
from playwright.async_api import (
    Error as PlaywrightError,
)
from playwright.async_api import (
    TimeoutError as PlaywrightTimeoutError,
)


def _run_async(coro):
    """Run a coroutine synchronously when no event loop is active.

    If called from inside a running event loop, the coroutine is returned
    unawaited and the caller is responsible for awaiting it.
    """
    try:
        loop = asyncio.get_event_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)

    if loop.is_running():
        # Already in an async context: hand the coroutine back to the caller
        return coro

    # No loop running: execute to completion on this (possibly new) loop
    return loop.run_until_complete(coro)
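
The helper's two paths can be demonstrated with a simplified variant. This sketch uses `asyncio.run` for the no-loop path (an assumption for brevity; the library instead keeps a persistent loop alive so Playwright objects stay bound to it across calls):

```python
import asyncio


def run_async(coro):
    """Simplified sketch of _run_async's two paths (not the library version)."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro)  # no loop running: drive to completion
    return coro  # inside a running loop: caller must await it


async def answer():
    return 42


print(run_async(answer()))  # called outside any loop, so this prints 42
```

From synchronous code the caller gets the result directly; from async code it gets the coroutine back and must `await` it, which is why the sync wrappers in `BrowserSession` only work outside a running loop.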


class BrowserSessionError(Exception):
    """Base exception for browser session errors."""

    pass


class BrowserNavigationError(BrowserSessionError):
    """Navigation-related browser errors."""

    pass


class BrowserInitializationError(BrowserSessionError):
    """Browser initialization errors."""

    pass


class BrowserSession:
    """
    Manages a Playwright browser session with HAR recording and network log capture.

    This class handles browser lifecycle, authentication injection, navigation,
    and HAR file export for API discovery.
    """

    def __init__(self, auth_token: str, headless: bool = True, **playwright_options):
        """
        Initialize a browser session.

        Args:
            auth_token: Authentication token to inject via Authorization header
            headless: Whether to run browser in headless mode
            **playwright_options: Additional options for Playwright browser launch
        """
        self.auth_token = auth_token
        self.headless = headless
        self.playwright_options = playwright_options

        self._playwright: Optional[Playwright] = None
        self._browser: Optional[Browser] = None
        self._context: Optional[BrowserContext] = None
        self._page: Optional[Page] = None
        self._har_path: Optional[Path] = None

    async def _initialize(self, initial_url: Optional[str] = None, wait_until: str = "load"):
        """
        Initialize Playwright browser, context, and page.

        Args:
            initial_url: Optional initial URL to navigate to
            wait_until: When to consider navigation complete (default: "load")

        Raises:
            BrowserInitializationError: If browser initialization fails
            BrowserNavigationError: If initial navigation fails
        """
        try:
            # Start Playwright
            self._playwright = await async_playwright().start()

            # Launch browser with enhanced error handling
            try:
                # Add stealth args if not present
                launch_options = self.playwright_options.copy()
                args = launch_options.get("args", [])
                if "--disable-blink-features=AutomationControlled" not in args:
                    args.append("--disable-blink-features=AutomationControlled")
                launch_options["args"] = args

                self._browser = await self._playwright.chromium.launch(headless=self.headless, **launch_options)
            except Exception as e:
                raise BrowserInitializationError(
                    f"Failed to launch browser: {str(e)}. "
                    "This may be due to missing browser dependencies or system restrictions."
                )

            # Create temporary HAR file path; mkstemp avoids the race condition
            # in the deprecated tempfile.mktemp
            import os
            import tempfile

            har_fd, har_tmp_path = tempfile.mkstemp(suffix=".har")
            os.close(har_fd)
            self._har_path = Path(har_tmp_path)

            # Create context with HAR recording
            try:
                # Use a realistic User-Agent
                user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"

                self._context = await self._browser.new_context(
                    record_har_path=str(self._har_path),
                    record_har_mode="minimal",
                    user_agent=user_agent,
                    viewport={"width": 1280, "height": 720},  # Set a standard viewport
                    device_scale_factor=2,
                    locale="en-US",
                    timezone_id="America/New_York",
                )

                # Add stealth scripts to evade bot detection
                await self._context.add_init_script("""
                    Object.defineProperty(navigator, 'webdriver', {
                        get: () => undefined
                    });

                    // Pass the Chrome Test
                    window.navigator.chrome = {
                        runtime: {},
                    };

                    // Pass the Plugins Length Test
                    Object.defineProperty(navigator, 'plugins', {
                        get: () => [1, 2, 3, 4, 5],
                    });

                    // Pass the Languages Test
                    Object.defineProperty(navigator, 'languages', {
                        get: () => ['en-US', 'en'],
                    });
                """)
            except Exception as e:
                raise BrowserInitializationError(f"Failed to create browser context: {str(e)}")

            # Original auth injection code removed to prevent CORS issues on public sites:
            # auth_handler = get_auth_handler("header")
            # await auth_handler(self._context, self.auth_token)

            # Create page
            try:
                self._page = await self._context.new_page()
            except Exception as e:
                raise BrowserInitializationError(f"Failed to create browser page: {str(e)}")

            # Navigate to initial URL if provided
            if initial_url:
                await self._navigate_async(initial_url, wait_until=wait_until)

        except (BrowserInitializationError, BrowserNavigationError):
            # Re-raise our custom exceptions
            raise
        except Exception as e:
            # Catch any other unexpected errors
            raise BrowserInitializationError(f"Unexpected error during browser initialization: {str(e)}")

    async def _navigate_async(self, url: str, wait_until: str = "load") -> None:
        """
        Internal async navigate method with enhanced error handling.

        Args:
            url: URL to navigate to
            wait_until: When to consider navigation complete
                       ("load", "domcontentloaded", "networkidle")

        Raises:
            BrowserNavigationError: If navigation fails
        """
        if not self._page:
            raise BrowserSessionError("Browser session not initialized. Call _initialize() first.")

        try:
            # Navigate with a 30-second timeout
            await self._page.goto(url, wait_until=wait_until, timeout=30000)

        except PlaywrightTimeoutError:
            raise BrowserNavigationError(
                f"Navigation timeout: '{url}' took too long to load. "
                "The website may be slow or unresponsive. Try again or use a different URL."
            )
        except PlaywrightError as e:
            error_message = str(e)

            if "Cannot navigate to invalid URL" in error_message:
                raise BrowserNavigationError(
                    f"Invalid URL format: '{url}'. "
                    "Please ensure the URL is properly formatted (e.g., 'https://example.com')."
                )
            elif "net::ERR_NAME_NOT_RESOLVED" in error_message:
                raise BrowserNavigationError(
                    f"Domain name could not be resolved: '{url}'. "
                    "Please check the URL spelling and your internet connection."
                )
            elif "net::ERR_CONNECTION_REFUSED" in error_message:
                raise BrowserNavigationError(
                    f"Connection refused: '{url}'. The server may be down or the URL may be incorrect."
                )
            elif "net::ERR_CONNECTION_TIMED_OUT" in error_message:
                raise BrowserNavigationError(
                    f"Connection timed out: '{url}'. The server took too long to respond. Please try again."
                )
            elif "net::ERR_INTERNET_DISCONNECTED" in error_message:
                raise BrowserNavigationError("No internet connection detected. Please check your network connection.")
            elif "net::ERR_CERT_AUTHORITY_INVALID" in error_message:
                raise BrowserNavigationError(
                    f"SSL certificate error for: '{url}'. The website's security certificate is invalid or expired."
                )
            else:
                raise BrowserNavigationError(f"Navigation failed for '{url}': {error_message}")
        except Exception as e:
            raise BrowserNavigationError(f"Unexpected navigation error for '{url}': {str(e)}")

    def navigate(self, url: str, wait_until: str = "load") -> None:
        """
        Navigate to a URL with authentication injection.

        Args:
            url: URL to navigate to
            wait_until: When to consider navigation complete
                       ("load", "domcontentloaded", "networkidle")

        Raises:
            BrowserNavigationError: If navigation fails
        """
        _run_async(self._navigate_async(url, wait_until))

    async def _click_async(self, selector: str, **kwargs) -> None:
        """
        Internal async click method with error handling.

        Raises:
            BrowserSessionError: If click operation fails
        """
        if not self._page:
            raise BrowserSessionError("Browser session not initialized.")

        try:
            await self._page.click(selector, **kwargs)
        except PlaywrightTimeoutError:
            raise BrowserSessionError(
                f"Element not found or not clickable: '{selector}'. "
                "Please check the selector or wait for the page to load completely."
            )
        except PlaywrightError as e:
            raise BrowserSessionError(f"Click failed for selector '{selector}': {str(e)}")

    def click(self, selector: str, **kwargs) -> None:
        """
        Click an element by selector.

        Args:
            selector: CSS selector for the element
            **kwargs: Additional options for Playwright click
        """
        _run_async(self._click_async(selector, **kwargs))

    async def _fill_async(self, selector: str, value: str, **kwargs) -> None:
        """
        Internal async fill method with error handling.

        Raises:
            BrowserSessionError: If fill operation fails
        """
        if not self._page:
            raise BrowserSessionError("Browser session not initialized.")

        try:
            await self._page.fill(selector, value, **kwargs)
        except PlaywrightTimeoutError:
            raise BrowserSessionError(
                f"Input element not found: '{selector}'. "
                "Please check the selector or wait for the page to load completely."
            )
        except PlaywrightError as e:
            raise BrowserSessionError(f"Fill failed for selector '{selector}': {str(e)}")

    def fill(self, selector: str, value: str, **kwargs) -> None:
        """
        Fill a form field.

        Args:
            selector: CSS selector for the input element
            value: Value to fill
            **kwargs: Additional options for Playwright fill
        """
        _run_async(self._fill_async(selector, value, **kwargs))

    async def _wait_for_async(self, selector: Optional[str] = None, timeout: Optional[float] = None) -> None:
        """
        Internal async wait_for method with error handling.

        Raises:
            BrowserSessionError: If wait operation fails
        """
        if not self._page:
            raise BrowserSessionError("Browser session not initialized.")

        if selector:
            try:
                await self._page.wait_for_selector(selector, timeout=timeout)
            except PlaywrightTimeoutError:
                raise BrowserSessionError(
                    f"Element not found within timeout: '{selector}'. "
                    "The element may not exist or may take longer to appear."
                )
            except PlaywrightError as e:
                raise BrowserSessionError(f"Wait failed for selector '{selector}': {str(e)}")
        elif timeout is not None:
            try:
                await self._page.wait_for_timeout(timeout)
            except Exception as e:
                raise BrowserSessionError(f"Wait timeout failed: {str(e)}")
        else:
            raise BrowserSessionError("Must provide either selector or timeout")

    def wait_for(self, selector: Optional[str] = None, timeout: Optional[float] = None) -> None:
        """
        Wait for a selector or timeout.

        Args:
            selector: CSS selector to wait for (if None, waits for timeout)
            timeout: Timeout in milliseconds
        """
        _run_async(self._wait_for_async(selector, timeout))

    async def _dump_logs_async(self, filepath: Union[str, Path]) -> None:
        """
        Internal async dump_logs method with error handling.

        Raises:
            BrowserSessionError: If log dumping fails
        """
        if not self._context:
            raise BrowserSessionError("Browser session not initialized.")

        try:
            # Close context to finalize HAR recording
            await self._context.close()
        except Exception as e:
            raise BrowserSessionError(f"Failed to close browser context: {str(e)}")

        # Copy HAR file to destination with enhanced error handling
        try:
            if self._har_path and self._har_path.exists():
                import shutil

                # Ensure destination directory exists
                dest_path = Path(filepath)
                dest_path.parent.mkdir(parents=True, exist_ok=True)

                shutil.copy(self._har_path, filepath)

                # Verify the copy was successful
                if not dest_path.exists():
                    raise BrowserSessionError(f"Failed to create HAR file at: '{filepath}'")

                # Provide immediate feedback about HAR size post-save
                file_size_mb = dest_path.stat().st_size / (1024 * 1024)
                print(f"HAR file saved to '{dest_path}' ({file_size_mb:.1f} MB)")
                if file_size_mb > 100:
                    print("⚠️  Large HAR files (>100 MB) may lead to unexpected upload issues.")
                    print(
                        "   Consider using the filtering utilities in 'zapi.har_processing' to trim the HAR before uploading."
                    )

                # Clean up temporary file
                self._har_path.unlink()
            else:
                raise BrowserSessionError(
                    "HAR file not found. Session may not have been properly initialized "
                    "or no network activity was recorded."
                )
        except PermissionError:
            raise BrowserSessionError(
                f"Permission denied writing to: '{filepath}'. Please check file permissions and directory access."
            )
        except FileNotFoundError:
            raise BrowserSessionError(f"Destination directory does not exist: '{Path(filepath).parent}'")
        except Exception as e:
            raise BrowserSessionError(f"Failed to save HAR file to '{filepath}': {str(e)}")

        # Mark context as closed
        self._context = None
        self._page = None

    def dump_logs(self, filepath: Union[str, Path]) -> None:
        """
        Export captured network logs to a HAR file.

        Args:
            filepath: Path where to save the HAR file
        """
        _run_async(self._dump_logs_async(filepath))

    async def _close_async(self) -> None:
        """Internal async close method."""
        if self._context:
            await self._context.close()

        if self._browser:
            await self._browser.close()

        if self._playwright:
            await self._playwright.stop()

        # Clean up temporary HAR file if it exists
        if self._har_path and self._har_path.exists():
            self._har_path.unlink()

        self._page = None
        self._context = None
        self._browser = None
        self._playwright = None

    def close(self) -> None:
        """
        Close the browser session and cleanup resources.
        """
        _run_async(self._close_async())

    def __enter__(self):
        """Context manager entry."""
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        """Context manager exit."""
        self.close()
        return False

    async def __aenter__(self):
        """Async context manager entry."""
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        """Async context manager exit."""
        await self._close_async()
        return False


================================================
FILE: zapi/utils.py
================================================
"""Utility functions for ZAPI."""

import json
import os
from typing import Any, Optional

try:
    from dotenv import load_dotenv
    from pydantic import SecretStr

    HAS_DOTENV = True
except ImportError:
    HAS_DOTENV = False
    SecretStr = str  # Fallback to regular string if pydantic not available


def load_security_headers(headers_file: Optional[str] = None) -> dict[str, str]:
    """
    Load security headers from JSON file.

    Args:
        headers_file: Path to JSON file containing headers. If None, defaults to
                     'api-headers.json' resolved relative to the current working
                     directory.

    Returns:
        Dictionary of headers to add to API requests
    """
    if headers_file is None:
        # Default location, resolved relative to the current working directory
        headers_file = "api-headers.json"

    if not os.path.exists(headers_file):
        print(f"ℹ️  No headers file found at '{headers_file}' - proceeding without authentication headers")
        return {}

    try:
        with open(headers_file) as f:
            data = json.load(f)
            headers = data.get("headers", {})
            if headers:
                print(f"✅ Loaded {len(headers)} security headers from '{headers_file}'")
                # Don't print the actual headers for security
                header_names = list(headers.keys())
                print(f"   Headers: {', '.join(header_names)}")
            else:
                print(f"⚠️  Headers file '{headers_file}' found but contains no headers")
            return headers
    except (OSError, json.JSONDecodeError) as e:
        print(f"⚠️  Error loading headers file '{headers_file}': {e}")
        print("   Proceeding without authentication headers")
        return {}
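`load_security_headers` expects a JSON document with a top-level `"headers"` object. A hypothetical file matching that shape, written and read back with the same `data.get("headers", {})` access the loader uses (header names and values are placeholders):

```python
import json
import os
import tempfile

# Hypothetical api-headers.json payload; keys and values are placeholders.
payload = {"headers": {"Authorization": "Bearer YOUR_TOKEN", "X-Api-Key": "YOUR_KEY"}}

path = os.path.join(tempfile.mkdtemp(), "api-headers.json")
with open(path, "w") as f:
    json.dump(payload, f)

# Same access pattern as load_security_headers
with open(path) as f:
    headers = json.load(f).get("headers", {})

print(sorted(headers))  # ['Authorization', 'X-Api-Key']
```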


def load_adopt_credentials() -> tuple[Optional[str], Optional[str]]:
    """
    Load ADOPT credentials from .env file or fallback to code defaults.

    Returns:
        Tuple of (client_id, secret) loaded from the environment, or (None, None) if not found

    Note:
        Requires python-dotenv to be installed for full functionality;
        falls back gracefully (returning None, None) if it is not available.
    """
    if not HAS_DOTENV:
        print("⚠️  python-dotenv not installed - using fallback credential loading")
        return None, None

    # Try to load from .env file
    load_dotenv()

    # Check environment variables first
    env_client_id = os.getenv("ADOPT_CLIENT_ID")
    env_secret = os.getenv("ADOPT_SECRET_KEY")

    if env_client_id and env_secret:
        print("✓ Loaded ADOPT credentials from .env file")
        return env_client_id, env_secret

    print("ℹ️  No ADOPT credentials found in .env file")
    return None, None


def load_llm_credentials() -> tuple[Optional[str], Optional[str], Optional[str]]:
    """
    Load LLM credentials from .env file or fallback to code defaults.

    Returns:
        Tuple of (provider, api_key, model_name); the api_key is returned as a plain
        string, with SecretStr wrapping left to the caller

    Note:
        Requires pydantic and python-dotenv to be installed for full functionality.
        Falls back gracefully if these packages are not available.
    """
    if not HAS_DOTENV:
        print("⚠️  pydantic/python-dotenv not installed - using fallback credential loading")
        return None, None, None

    # Try to load from .env file
    load_dotenv()

    # Check environment variables first
    env_llm_provider = os.getenv("LLM_PROVIDER")
    env_llm_api_key = os.getenv("LLM_API_KEY")
    env_llm_model_name = os.getenv("LLM_MODEL_NAME")

    if env_llm_provider and env_llm_api_key and env_llm_model_name:
        print(f"✓ Loaded LLM credentials from .env file (provider: {env_llm_provider})")
        # Return string directly - SecretStr handling is done in demo.py
        return env_llm_provider, env_llm_api_key, env_llm_model_name

    print("ℹ️  No LLM credentials found in .env file")
    return None, None, None


def load_zapi_credentials() -> tuple[str, str, str, str, str]:
    """
    Load complete ZAPI credentials (ADOPT + LLM) from environment variables with fallbacks.

    This is a convenience function that combines load_adopt_credentials() and load_llm_credentials()
    with sensible fallback values for development/examples.

    Returns:
        Tuple of (client_id, secret, llm_provider, llm_model_name, llm_api_key)

    Note:
        If environment variables are not found, returns fallback placeholder values
        suitable for examples and development.
    """
    # Load ADOPT credentials securely from .env or fallback to code
    print("🔐 Loading ADOPT credentials...")
    client_id, secret = load_adopt_credentials()

    # Fallback to hardcoded values if not found in .env
    if not client_id or not secret:
        print("⚠️  Using fallback credentials - update your .env file for production")
        client_id = "YOUR_CLIENT_ID"
        secret = "YOUR_SECRET"

    # Load LLM credentials securely from .env or fallback to code
    print("🔐 Loading LLM credentials...")
    llm_provider, llm_api_key, llm_model_name = load_llm_credentials()

    # Fallback to hardcoded values if not found in .env
    if not llm_provider or not llm_api_key or not llm_model_name:
        print("⚠️  Using fallback LLM credentials - update your .env file for production")
        llm_provider = llm_provider or "anthropic"
        llm_model_name = llm_model_name or "claude-3-5-sonnet-20241022"
        llm_api_key = llm_api_key or "YOUR_ANTHROPIC_API_KEY"

    return client_id, secret, llm_provider, llm_model_name, llm_api_key
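The fallback logic in `load_zapi_credentials` is an "environment value or development placeholder" pattern; a minimal sketch (the helper name is illustrative, not part of the module):

```python
import os

def credential_or_placeholder(env_var: str, placeholder: str) -> str:
    """Prefer the environment value; fall back to a development placeholder."""
    return os.getenv(env_var) or placeholder

# With the variable unset, the placeholder wins.
os.environ.pop("ADOPT_CLIENT_ID", None)
print(credential_or_placeholder("ADOPT_CLIENT_ID", "YOUR_CLIENT_ID"))  # YOUR_CLIENT_ID
```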


def set_llm_api_key_env(provider: str, api_key: str) -> None:
    """
    Set the appropriate environment variable for the given LLM provider.

    This is required for LangChain v1.0 to automatically detect and use the API keys.

    Args:
        provider: The LLM provider name ('anthropic' or 'openai')
        api_key: The API key to set in the environment

    Raises:
        ValueError: If the provider is not supported
    """
    if provider == "anthropic":
        os.environ["ANTHROPIC_API_KEY"] = api_key
    elif provider == "openai":
        os.environ["OPENAI_API_KEY"] = api_key
    else:
        raise ValueError(f"Unsupported provider: {provider}. Supported providers: anthropic, openai")
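The provider-to-environment-variable dispatch in `set_llm_api_key_env` can equally be table-driven, which turns adding a provider into a one-line change; a sketch of that alternative (same two providers, same error behavior):

```python
import os

# Table-driven variant of set_llm_api_key_env's if/elif dispatch.
PROVIDER_ENV_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
}

def set_llm_key(provider: str, api_key: str) -> None:
    """Export the API key under the provider's conventional environment variable."""
    env_var = PROVIDER_ENV_VARS.get(provider)
    if env_var is None:
        supported = ", ".join(sorted(PROVIDER_ENV_VARS))
        raise ValueError(f"Unsupported provider: {provider}. Supported providers: {supported}")
    os.environ[env_var] = api_key

set_llm_key("openai", "sk-test")
print(os.environ["OPENAI_API_KEY"])  # sk-test
```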


def _safe_get(obj: Any, *keys: str, default: Any = None) -> Any:
    """
    Safely get a value from an object or dict using multiple possible keys.
    Tries object attributes first, then dict keys.

    Args:
        obj: Object or dict to get value from
        *keys: Multiple possible keys/attributes to try
        default: Default value if none found

    Returns:
        First found value or default
    """
    for key in keys:
        if hasattr(obj, key):
            value = getattr(obj, key, None)
            if value is not None:
                return value
        if isinstance(obj, dict) and key in obj:
            value = obj[key]
            if value is not None:
                return value
    return default
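`_safe_get` above tries attribute access before dict lookup and skips `None` values; a standalone copy demonstrating that precedence (the class and token counts are illustrative):

```python
from typing import Any

def safe_get(obj: Any, *keys: str, default: Any = None) -> Any:
    """Standalone copy of _safe_get: attributes first, then dict keys, skipping None."""
    for key in keys:
        if hasattr(obj, key):
            value = getattr(obj, key, None)
            if value is not None:
                return value
        if isinstance(obj, dict) and key in obj:
            value = obj[key]
            if value is not None:
                return value
    return default

class Usage:
    input_tokens = 120  # illustrative value

print(safe_get(Usage(), "input_tokens", "prompt_tokens"))  # 120
print(safe_get({"output_tokens": 45}, "output_tokens"))    # 45
print(safe_get({}, "total_tokens", default=0))             # 0
```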


def _extract_token_metadata(response: Any) -> Optional[str]:
    """
    Extract token usage metadata from agent response.

    Args:
        response: The response object from the agent

    Returns:
        Formatted token usage string or None if no token info found
    """
    try:
        # Get usage metadata from last message
        if not isinstance(response, dict) or not response.get("messages"):
            return None

        usage = getattr(response["messages"][-1], "usage_metadata", None)
        if not usage:
            return None

        # Extract token values (filtering None values)
        token_info = {
            "input": _safe_get(usage, "input_tokens"),
            "output": _safe_get(usage, "output_tokens"),
            "total": _safe_get(usage, "total_tokens"),
        }
        token_info = {k: v for k, v in token_info.items() if v is not None}

        if not token_info:
            return None

        # Calculate total if missing
        if "total" not in token_info and "input" in token_info and "output" in token_info:
            token_info["total"] = token_info["input"] + token_info["output"]

        # Format output
        labels = {"input": "Input", "output": "Output", "total": "Total"}
        return "Tokens - " + " | ".join(f"{labels[k]}: {token_info[k]}" for k in labels if k in token_info)

    except Exception:
        return None
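The formatting step at the end of `_extract_token_metadata` emits a fixed-order summary line, computing the total only when the provider omits it; the core of that logic in isolation (token counts are made up):

```python
# Core of _extract_token_metadata's formatting, with made-up token counts.
token_info = {"input": 120, "output": 45}

# Derive the total when the provider reports only input/output.
if "total" not in token_info and "input" in token_info and "output" in token_info:
    token_info["total"] = token_info["input"] + token_info["output"]

# Iterating over `labels` (not token_info) fixes the display order.
labels = {"input": "Input", "output": "Output", "total": "Total"}
line = "Tokens - " + " | ".join(f"{labels[k]}: {token_info[k]}" for k in labels if k in token_info)
print(line)  # Tokens - Input: 120 | Output: 45 | Total: 165
```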


def interactive_chat(agent: Any, single_shot: bool = False, debug_mode: bool = False) -> None:
    """
    Interactive terminal chat with the agent.

    Args:
        agent: The LangChain agent instance
        single_shot: If True, only accepts one prompt and exits
        debug_mode: If True, shows detailed debug information
    """
    print("\n💬 Interactive Chat Mode")
    print("=" * 25)

    if debug_mode:
        print("🐛 Debug mode: ON")
    print("Type your question and press Enter\n")

    history = []
    first_interaction = True

    while True:
        try:
            # Add divider between questions (except for the first one)
            if not first_interaction:
                print("─" * 60)
                print()

            # Get user input
            user_input = input("You: ").strip()

            # Handle commands
            if user_input.lower() in ["exit", "quit"]:
                print("👋 Goodbye!")
                break
            elif user_input.lower() == "help":
                print("\nAvailable commands:")
                print("- 'exit' or 'quit': Exit the chat")
                print("- 'history': Show conversation history")
                print("- 'debug': Toggle debug mode on/off")
                print("- 'help': Show this help message")
                print("- Any other text: Ask the agent\n")
                continue
            elif user_input.lower() == "debug":
                debug_mode = not debug_mode
                status = "ON" if debug_mode else "OFF"
                print(f"🐛 Debug mode: {status}\n")
                continue
            elif user_input.lower() == "history":
                if history:
                    print("\n📜 Conversation History:")
                    for i, (q, a) in enumerate(history, 1):
                        print(f"{i}. You: {q}")
                        print(f"   Agent: {a[:100]}{'...' if len(a) > 100 else ''}\n")
                else:
                    print("No conversation history yet.\n")
                continue
            elif not user_input:
                continue

            # Process with agent
            print("🤖 Agent: ", end="", flush=True)
            try:
                if debug_mode:
                    print(f"\n🐛 [DEBUG] Sending request: {user_input}")
                    print(f"🐛 [DEBUG] Agent type: {type(agent)}")

                response = agent.invoke({"messages": [{"role": "user", "content": user_input}]})

                if debug_mode:
                    print(f"\n🐛 [DEBUG] Response type: {type(response)}")
                    print(
                        f"🐛 [DEBUG] Response keys: {response.keys() if isinstance(response, dict) else 'Not a dict'}"
                    )

                    if isinstance(response, dict) and "messages" in response:
                        messages = response["messages"]
                        print(f"🐛 [DEBUG] Messages count: {len(messages)}")
                        for i, msg in enumerate(messages):
                            print(f"🐛 [DEBUG] Message {i}: {type(msg).__name__}")
                            if hasattr(msg, "content"):
                                content_preview = (
                                    str(msg.content)[:100] + "..." if len(str(msg.content)) > 100 else str(msg.content)
                                )
                                print(f"🐛 [DEBUG] Content preview: {content_preview}")
                            if hasattr(msg, "tool_calls") and msg.tool_calls:
                                print(f"🐛 [DEBUG] Tool calls: {[tc['name'] for tc in msg.tool_calls]}")
                    print()

                # Extract response content
                if hasattr(response, "content"):
                    # Handle AIMessage or similar objects with content attribute
                    agent_response = response.content
                elif isinstance(response, dict) and "messages" in response:
                    # Handle dictionary response with messages array - get last AIMessage
                    messages = response["messages"]
                    if messages:
                        last_message = messages[-1]
                        agent_response = last_message.content if hasattr(last_message, "content") else str(last_message)
                    else:
                        agent_response = str(response)
                elif isinstance(response, dict) and "content" in response:
                    # Handle dictionary response with direct content
                    agent_response = response["content"]
                else:
                    # Fallback to string representation
                    agent_response = str(response)

                if debug_mode:
                    print(f"🐛 [DEBUG] Final response length: {len(str(agent_response))} characters")

                print(agent_response)

                # Extract and display token metadata
                token_info = _extract_token_metadata(response)
                if token_info:
                    print(f"\n📊 {token_info}")

                # Add spacing between interactions
                print()

            except Exception as e:
                if debug_mode:
                    import traceback

                    print("\n🐛 [DEBUG] Exception details:")
                    print(f"🐛 [DEBUG] Exception type: {type(e)}")
                    print(f"🐛 [DEBUG] Exception message: {str(e)}")
                    print("🐛 [DEBUG] Traceback:")
                    traceback.print_exc()
                    print()
                print(f"❌ Error: {e}")
                agent_response = f"Error: {e}"
                # Add spacing after error
                print()

            # Store in history
            history.append((user_input, agent_response))

            # Mark that we've had our first interaction
            first_interaction = False

            # Exit if single shot mode
            if single_shot:
                break

        except KeyboardInterrupt:
            print("\n👋 Goodbye!")
            break
        except Exception as e:
            print(f"❌ Error: {e}")
            if single_shot:
                break
SYMBOL INDEX (98 symbols across 16 files)

FILE: demo.py
  function record_session (line 28) | def record_session(zapi_client: ZAPI, url: str, output_path: Path) -> None:
  function analyze_har_file_with_filter (line 44) | def analyze_har_file_with_filter(source_path: Path) -> Optional[Path]:
  function pick_upload_file (line 63) | def pick_upload_file(original_path: Path, filtered_path: Optional[Path])...
  function main (line 80) | def main() -> int:

FILE: examples/async_usage.py
  function main (line 13) | async def main():

FILE: examples/basic_usage.py
  function main (line 11) | def main():

FILE: examples/langchain/demo.py
  function demo_zapi_langchain (line 5) | def demo_zapi_langchain():

FILE: examples/llm_keys_usage.py
  function main (line 13) | def main():

FILE: examples/simple_usage.py
  function main (line 8) | def main():

FILE: zapi/auth.py
  function apply_localstorage_auth (line 12) | async def apply_localstorage_auth(page: Page, token: str, key: str = "au...
  function apply_cookie_auth (line 24) | async def apply_cookie_auth(page: Page, token: str, name: str = "authTok...
  function apply_header_auth (line 45) | async def apply_header_auth(context: BrowserContext, token: str) -> None:
  function get_auth_handler (line 56) | def get_auth_handler(auth_mode: AuthMode):

FILE: zapi/cli.py
  function cli (line 13) | def cli():
  function capture (line 22) | def capture(url, output, headless):
  function analyze (line 50) | def analyze(har_file):
  function upload (line 65) | def upload(har_file):

FILE: zapi/core.py
  class ZAPI (line 25) | class ZAPI:
    method __init__ (line 33) | def __init__(
    method _fetch_auth_token (line 94) | def _fetch_auth_token(self) -> tuple[str, str]:
    method _validate_token_and_extract_org_id (line 157) | async def _validate_token_and_extract_org_id(self, token: str) -> str:
    method set_llm_key (line 206) | def set_llm_key(self, provider: str, api_key: str, model_name: str) ->...
    method get_llm_provider (line 240) | def get_llm_provider(self) -> Optional[str]:
    method get_llm_model_name (line 249) | def get_llm_model_name(self) -> Optional[str]:
    method get_encrypted_llm_key (line 258) | def get_encrypted_llm_key(self) -> Optional[str]:
    method get_decrypted_llm_key (line 267) | def get_decrypted_llm_key(self) -> Optional[str]:
    method has_llm_key (line 282) | def has_llm_key(self) -> bool:
    method get_zapi_tools (line 291) | def get_zapi_tools(self) -> list[Callable]:
    method launch_browser (line 306) | def launch_browser(
    method upload_har (line 385) | def upload_har(self, har_file: str):
    method get_documented_apis (line 468) | def get_documented_apis(self, page: int = 1, page_size: int = 10):
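`get_documented_apis(page, page_size)` suggests a paged endpoint. The server's pagination contract isn't visible from the index; assuming the conventional scheme where an empty page signals the end, a caller can drain every page like this (`fetch_page` and `fake_fetch` are stand-ins, not part of ZAPI):

```python
from typing import Callable, Iterator

def iter_all_apis(fetch_page: Callable[[int, int], list], page_size: int = 10) -> Iterator:
    """Yield every item by requesting consecutive pages until one comes back empty."""
    page = 1
    while True:
        batch = fetch_page(page, page_size)
        if not batch:
            break
        yield from batch
        page += 1

# Example: a fake backend holding 25 documented APIs.
_DATA = [f"api-{i}" for i in range(25)]

def fake_fetch(page: int, page_size: int) -> list:
    start = (page - 1) * page_size
    return _DATA[start : start + page_size]
```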

FILE: zapi/encryption.py
  class LLMKeyEncryption (line 12) | class LLMKeyEncryption:
    method __init__ (line 22) | def __init__(self, org_id: str):
    method _derive_key (line 37) | def _derive_key(self, salt: bytes) -> bytes:
    method encrypt_key (line 56) | def encrypt_key(self, api_key: str) -> str:
    method decrypt_key (line 103) | def decrypt_key(self, encrypted_data: str) -> str:
  function encrypt_llm_key (line 164) | def encrypt_llm_key(org_id: str, api_key: str) -> str:
  function decrypt_llm_key (line 179) | def decrypt_llm_key(org_id: str, encrypted_data: str) -> str:
  function secure_compare_key (line 194) | def secure_compare_key(provider1: str, key1: str, provider2: str, key2: ...
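`_derive_key(salt)` alongside `encrypt_key`/`decrypt_key` points at salted key derivation from the org id, and `secure_compare_key` at a timing-safe comparison. The exact KDF and cipher aren't shown; a stdlib-only sketch of the two building blocks (PBKDF2 derivation and constant-time compare — both assumptions about the design, not the repo's code) is:

```python
import hashlib
import hmac
import secrets

def derive_key(org_id: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Derive a 32-byte key from the org id and a random salt via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", org_id.encode(), salt, iterations, dklen=32)

def secure_compare(key1: str, key2: str) -> bool:
    """Compare two secrets in constant time to avoid timing side channels."""
    return hmac.compare_digest(key1.encode(), key2.encode())

# A fresh random salt per encryption; it is stored alongside the ciphertext.
salt = secrets.token_bytes(16)
```

The salt makes derived keys unique per encryption even for the same org id, and `hmac.compare_digest` avoids the early-exit behaviour of `==` that leaks match length through timing.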

FILE: zapi/exceptions.py
  class ZAPIError (line 4) | class ZAPIError(Exception):
  class ZAPIAuthenticationError (line 10) | class ZAPIAuthenticationError(ZAPIError):
  class ZAPIValidationError (line 16) | class ZAPIValidationError(ZAPIError):
  class ZAPINetworkError (line 22) | class ZAPINetworkError(ZAPIError):
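The exception module is a flat hierarchy rooted at `ZAPIError`, so callers can catch the base class to handle any library failure. Reconstructed from the index (the subclass docstrings are assumptions; the base docstring appears in the repo preview):

```python
class ZAPIError(Exception):
    """Base exception class for ZAPI errors."""

class ZAPIAuthenticationError(ZAPIError):
    """Raised when credentials are missing or rejected."""

class ZAPIValidationError(ZAPIError):
    """Raised when input fails validation."""

class ZAPINetworkError(ZAPIError):
    """Raised when a call to the backend fails."""
```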

FILE: zapi/har_processing.py
  class HarStats (line 12) | class HarStats:
  class HarProcessingError (line 25) | class HarProcessingError(Exception):
  class HarProcessor (line 31) | class HarProcessor:
    method __init__ (line 75) | def __init__(self, har_file_path: str):
    method load_and_process (line 114) | def load_and_process(self) -> HarStats:
    method _process_entry (line 189) | def _process_entry(self, entry: dict[str, Any]) -> bool:
    method _extract_url_from_entry (line 264) | def _extract_url_from_entry(self, entry: dict[str, Any]) -> str:
    method _extract_response_content (line 271) | def _extract_response_content(self, entry: dict[str, Any]) -> dict[str...
    method save_filtered_har (line 278) | def save_filtered_har(self, output_path: str) -> str:
    method get_summary_report (line 329) | def get_summary_report(self, stats: HarStats) -> str:
  function analyze_har_file (line 379) | def analyze_har_file(
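HAR files are plain JSON with a `log.entries` list, so the filtering `HarProcessor` performs can be pictured with the stdlib alone. This sketch keeps only XHR/fetch entries — a guess at the filter criterion, since `_process_entry`'s actual rules aren't shown here:

```python
import json

def filter_har_entries(har_text: str) -> dict:
    """Parse a HAR document and keep only entries captured from XHR/fetch calls."""
    har = json.loads(har_text)
    entries = har.get("log", {}).get("entries", [])
    # `_resourceType` is a Chrome DevTools extension field, not part of HAR 1.2.
    kept = [e for e in entries if e.get("_resourceType") in {"xhr", "fetch"}]
    har["log"]["entries"] = kept
    return har

# A two-entry HAR: one API call, one static asset that should be dropped.
sample = json.dumps({
    "log": {"entries": [
        {"_resourceType": "xhr", "request": {"url": "https://api.example.com/v1/users"}},
        {"_resourceType": "image", "request": {"url": "https://example.com/logo.png"}},
    ]}
})
```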

FILE: zapi/integrations/langchain/tool.py
  class ZAPILangchainTool (line 17) | class ZAPILangchainTool:
    method __init__ (line 35) | def __init__(self, zapi_instance: ZAPI, headers_file: Optional[str] = ...
    method create_tools (line 39) | def create_tools(self) -> list[Callable]:
    method _create_tool (line 57) | def _create_tool(self, api_data: dict[str, Any]) -> Callable:
    method _call_api (line 80) | def _call_api(self, api_id: str, api_data: dict[str, Any], params: dic...

FILE: zapi/providers.py
  class LLMProvider (line 16) | class LLMProvider(Enum):
    method get_all_providers (line 31) | def get_all_providers(cls) -> set[str]:
    method is_valid_provider (line 36) | def is_valid_provider(cls, provider: str) -> bool:
  function validate_llm_keys (line 41) | def validate_llm_keys(llm_keys: dict[str, str]) -> dict[str, str]:
  function _validate_key_format (line 86) | def _validate_key_format(provider: str, api_key: str) -> None:
  function get_provider_display_name (line 132) | def get_provider_display_name(provider: str) -> str:
  function is_primary_provider (line 154) | def is_primary_provider(provider: str) -> bool:
  function get_supported_providers_info (line 167) | def get_supported_providers_info() -> dict[str, dict[str, str]]:
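`LLMProvider` is an `Enum` carrying `get_all_providers`/`is_valid_provider` classmethods. The member list isn't visible in the index, so the names below are illustrative placeholders; the classmethod shapes follow the signatures shown:

```python
from enum import Enum

class LLMProvider(Enum):
    # Member names are illustrative; the real list lives in zapi/providers.py.
    OPENAI = "openai"
    ANTHROPIC = "anthropic"
    GOOGLE = "google"
    GROQ = "groq"

    @classmethod
    def get_all_providers(cls) -> set[str]:
        """Return the set of supported provider identifiers."""
        return {member.value for member in cls}

    @classmethod
    def is_valid_provider(cls, provider: str) -> bool:
        """Case-insensitive membership check against the enum values."""
        return provider.lower() in cls.get_all_providers()
```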

FILE: zapi/session.py
  function _run_async (line 22) | def _run_async(coro):
  class BrowserSessionError (line 38) | class BrowserSessionError(Exception):
  class BrowserNavigationError (line 44) | class BrowserNavigationError(BrowserSessionError):
  class BrowserInitializationError (line 50) | class BrowserInitializationError(BrowserSessionError):
  class BrowserSession (line 56) | class BrowserSession:
    method __init__ (line 64) | def __init__(self, auth_token: str, headless: bool = True, **playwrigh...
    method _initialize (line 83) | async def _initialize(self, initial_url: Optional[str] = None, wait_un...
    method _navigate_async (line 186) | async def _navigate_async(self, url: str, wait_until: str = "load") ->...
    method navigate (line 242) | def navigate(self, url: str, wait_until: str = "load") -> None:
    method _click_async (line 256) | async def _click_async(self, selector: str, **kwargs) -> None:
    method click (line 276) | def click(self, selector: str, **kwargs) -> None:
    method _fill_async (line 286) | async def _fill_async(self, selector: str, value: str, **kwargs) -> None:
    method fill (line 306) | def fill(self, selector: str, value: str, **kwargs) -> None:
    method _wait_for_async (line 317) | async def _wait_for_async(self, selector: Optional[str] = None, timeou...
    method wait_for (line 345) | def wait_for(self, selector: Optional[str] = None, timeout: Optional[f...
    method _dump_logs_async (line 355) | async def _dump_logs_async(self, filepath: Union[str, Path]) -> None:
    method dump_logs (line 415) | def dump_logs(self, filepath: Union[str, Path]) -> None:
    method _close_async (line 424) | async def _close_async(self) -> None:
    method close (line 444) | def close(self) -> None:
    method __enter__ (line 450) | def __enter__(self):
    method __exit__ (line 454) | def __exit__(self, exc_type, exc_val, exc_tb):
    method __aenter__ (line 459) | async def __aenter__(self):
    method __aexit__ (line 463) | async def __aexit__(self, exc_type, exc_val, exc_tb):
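`BrowserSession` pairs every async method (`_navigate_async`, `_click_async`, …) with a sync wrapper via `_run_async(coro)`. A common shape for that bridge — a sketch; the real helper may instead keep a persistent event loop for the session's lifetime — is:

```python
import asyncio

def _run_async(coro):
    """Run a coroutine to completion from synchronous code.

    Uses a fresh event loop when none is running; raises if called from
    inside an already-running loop, where blocking would deadlock.
    """
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro)
    raise RuntimeError("_run_async cannot be called from a running event loop")

async def _navigate_async(url: str) -> str:
    # Stand-in for the real Playwright navigation call.
    await asyncio.sleep(0)
    return f"navigated to {url}"

def navigate(url: str) -> str:
    """Sync facade over the async implementation."""
    return _run_async(_navigate_async(url))
```

This is why the class exposes both `__enter__`/`__exit__` and `__aenter__`/`__aexit__`: sync callers get the wrapper, async callers use the coroutines directly without the bridge.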

FILE: zapi/utils.py
  function load_security_headers (line 17) | def load_security_headers(headers_file: Optional[str] = None) -> dict[st...
  function load_adopt_credentials (line 54) | def load_adopt_credentials() -> tuple[Optional[str], Optional[str]]:
  function load_llm_credentials (line 84) | def load_llm_credentials() -> tuple[Optional[str], Optional[str], Option...
  function load_zapi_credentials (line 116) | def load_zapi_credentials() -> tuple[str, str, str, str, str]:
  function set_llm_api_key_env (line 154) | def set_llm_api_key_env(provider: str, api_key: str) -> None:
  function _safe_get (line 175) | def _safe_get(obj: Any, *keys: str, default: Any = None) -> Any:
  function _extract_token_metadata (line 200) | def _extract_token_metadata(response: Any) -> Optional[str]:
  function interactive_chat (line 242) | def interactive_chat(agent: Any, single_shot: bool = False, debug_mode: ...
Condensed preview — 38 files, each showing path, character count, and a content snippet (194K chars of structured content in total).
[
  {
    "path": ".devenv",
    "chars": 91,
    "preview": "LLM_API_KEY=\nLLM_PROVIDER=\nLLM_MODEL_NAME=\nADOPT_CLIENT_ID=\nADOPT_SECRET_KEY=\nYOUR_API_URL="
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug-report.yml",
    "chars": 5035,
    "preview": "name: \"🐞 Bug Report\"\ndescription: \"Report a bug or unexpected behavior in ZAPI\"\ntitle: \"[Bug]: <Short description>\"\nlabe"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "chars": 591,
    "preview": "blank_issues_enabled: false\ncontact_links:\n  - name: 📚 Documentation\n    url: https://github.com/adoptai/zapi/blob/main/"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature-request.yml",
    "chars": 4604,
    "preview": "name: \"🚀 Feature Request\"\ndescription: \"Suggest a new feature or improvement for ZAPI\"\ntitle: \"[Feature]: <Short descrip"
  },
  {
    "path": ".github/pull_request_template.md",
    "chars": 2040,
    "preview": "## Description\n\n<!-- Provide a clear and concise description of what this PR does -->\n\n## Type of Change\n\n<!-- Check all"
  },
  {
    "path": ".github/workflows/ruff-check.yml",
    "chars": 860,
    "preview": "name: Ruff Linting\n\non:\n  pull_request:\n    branches:\n      - main\n      - dev\n    paths:\n      - '**.py'\n      - 'pypro"
  },
  {
    "path": ".gitignore",
    "chars": 611,
    "preview": "# Python\n__pycache__/\n*.py[cod]\n*$py.class\n*.so\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\np"
  },
  {
    "path": ".pre-commit-config.yaml",
    "chars": 1284,
    "preview": "# Pre-commit hooks for ZAPI\n# See https://pre-commit.com for more information\n\nrepos:\n  # Ruff - Fast Python linter and "
  },
  {
    "path": "CONTRIBUTING.md",
    "chars": 14563,
    "preview": "# Contributing to ZAPI\n\nThank you for your interest in contributing to ZAPI! This document provides guidelines and instr"
  },
  {
    "path": "LICENSE",
    "chars": 1064,
    "preview": "MIT License\n\nCopyright (c) 2025 AdoptAI\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof"
  },
  {
    "path": "MANIFEST.in",
    "chars": 413,
    "preview": "# Include important files in the distribution\ninclude README.md\ninclude LICENSE\ninclude requirements.txt\ninclude CONTRIB"
  },
  {
    "path": "README.md",
    "chars": 14932,
    "preview": "<h3 align=\"center\">\n  <a name=\"readme-top\"></a>\n  <img\n    src=\"https://asset.adopt.ai/web/icons/github_banner.png\">\n</h"
  },
  {
    "path": "demo.py",
    "chars": 6765,
    "preview": "#!/usr/bin/env python\n\"\"\"ZAPI Demo Script showing capture, analysis, and upload.\"\"\"\n\nfrom pathlib import Path\nfrom typin"
  },
  {
    "path": "docs/introduction.md",
    "chars": 14963,
    "preview": "# Introducing ZAPI - Zero-Config API Intelligence\n\n**3 min read**\n\n_Automatically discover, capture, and document APIs f"
  },
  {
    "path": "examples/async_usage.py",
    "chars": 2208,
    "preview": "\"\"\"\nAdvanced async usage example for ZAPI.\n\nThis demonstrates how to use the async API directly for concurrent\noperation"
  },
  {
    "path": "examples/basic_usage.py",
    "chars": 2064,
    "preview": "\"\"\"\nBasic usage example for ZAPI.\n\nThis demonstrates the minimal API for launching a browser,\nnavigating to a URL, and c"
  },
  {
    "path": "examples/langchain/README.md",
    "chars": 4548,
    "preview": "# ZAPI LangChain Integration\n\nThis example demonstrates how to use ZAPI with LangChain to automatically convert your doc"
  },
  {
    "path": "examples/langchain/__init__.py",
    "chars": 208,
    "preview": "\"\"\"\nZAPI Langchain Examples\n\nThis package contains comprehensive examples showing how to use ZAPI\nwith Langchain to crea"
  },
  {
    "path": "examples/langchain/demo.py",
    "chars": 541,
    "preview": "from langchain.agents import create_agent\nfrom zapi import ZAPI, interactive_chat\n\n\ndef demo_zapi_langchain():\n    \"\"\"ZA"
  },
  {
    "path": "examples/llm_keys_usage.py",
    "chars": 4463,
    "preview": "\"\"\"\nExample demonstrating LLM API key management with ZAPI.\n\nThis shows how to securely provide LLM API keys for the 4 m"
  },
  {
    "path": "examples/simple_usage.py",
    "chars": 544,
    "preview": "\"\"\"\nSimplest possible ZAPI usage - exactly as shown in documentation.\n\"\"\"\n\nfrom zapi import ZAPI\n\n\ndef main():\n    # Cre"
  },
  {
    "path": "pyproject.toml",
    "chars": 2632,
    "preview": "[build-system]\nrequires = [\"setuptools>=61.0\", \"wheel\"]\nbuild-backend = \"setuptools.build_meta\"\n\n[project]\nname = \"zapi\""
  },
  {
    "path": "requirements.txt",
    "chars": 248,
    "preview": "playwright>=1.40.0\nrequests>=2.31.0\ncryptography>=41.0.0\nhttpx>=0.25.0\npydantic>=2.0.0\npython-dotenv>=1.0.0\nlangchain>=1"
  },
  {
    "path": "scripts/README.md",
    "chars": 671,
    "preview": "# ZAPI Scripts\n\nUtility scripts for ZAPI development and maintenance.\n\n## Pre-commit Script\n\n**File:** `pre-commit.sh`\n\n"
  },
  {
    "path": "scripts/pre-commit.sh",
    "chars": 1101,
    "preview": "#!/bin/bash\n# Pre-commit script for ZAPI\n# This script runs Ruff linting and formatting checks before allowing a commit\n"
  },
  {
    "path": "setup.py",
    "chars": 1181,
    "preview": "\"\"\"\nSetup script for ZAPI - maintained for backwards compatibility.\nPrefer using pyproject.toml for modern Python packag"
  },
  {
    "path": "zapi/__init__.py",
    "chars": 1224,
    "preview": "\"\"\"\nZAPI - Zero-Config API Intelligence\n\nAn open-source library that automatically discovers, understands,\nand prepares "
  },
  {
    "path": "zapi/auth.py",
    "chars": 2120,
    "preview": "\"\"\"Authentication handlers for different auth modes.\"\"\"\n\nfrom typing import Literal\n\nfrom playwright.async_api import Br"
  },
  {
    "path": "zapi/cli.py",
    "chars": 2375,
    "preview": "\"\"\"Command-line interface for ZAPI.\"\"\"\n\nimport time\nfrom pathlib import Path\n\nimport click\n\nfrom .core import ZAPI\nfrom "
  },
  {
    "path": "zapi/constants.py",
    "chars": 38,
    "preview": "BASE_URL = \"https://connect.adopt.ai\"\n"
  },
  {
    "path": "zapi/core.py",
    "chars": 19263,
    "preview": "\"\"\"Core ZAPI class implementation.\"\"\"\n\nimport asyncio\nimport json\nfrom typing import Callable, Optional\n\nimport httpx\nim"
  },
  {
    "path": "zapi/encryption.py",
    "chars": 6748,
    "preview": "\"\"\"Secure encryption/decryption utilities for LLM API keys.\"\"\"\n\nimport base64\nimport secrets\n\nfrom cryptography.hazmat.b"
  },
  {
    "path": "zapi/exceptions.py",
    "chars": 529,
    "preview": "\"\"\"Custom exception classes for ZAPI.\"\"\"\n\n\nclass ZAPIError(Exception):\n    \"\"\"Base exception class for ZAPI errors.\"\"\"\n\n"
  },
  {
    "path": "zapi/har_processing.py",
    "chars": 15030,
    "preview": "\"\"\"HAR file processing and analysis module.\"\"\"\n\nimport json\nimport os\nimport re\nfrom dataclasses import dataclass\nfrom t"
  },
  {
    "path": "zapi/integrations/langchain/tool.py",
    "chars": 8825,
    "preview": "\"\"\"\nZAPI Langchain Tool - Simple & Clean\n\nBasic conversion of ZAPI documented APIs into Langchain tools.\n\"\"\"\n\nimport os\n"
  },
  {
    "path": "zapi/providers.py",
    "chars": 6451,
    "preview": "\"\"\"LLM Provider enums and validation utilities.\n\nZAPI supports a generic key-value approach for LLM API keys, allowing d"
  },
  {
    "path": "zapi/session.py",
    "chars": 17460,
    "preview": "\"\"\"BrowserSession implementation with Playwright integration.\"\"\"\n\nimport asyncio\nfrom pathlib import Path\nfrom typing im"
  },
  {
    "path": "zapi/utils.py",
    "chars": 14558,
    "preview": "\"\"\"Utility functions for ZAPI.\"\"\"\n\nimport json\nimport os\nfrom typing import Any, Optional\n\ntry:\n    from dotenv import l"
  }
]

About this extraction

This page contains the full source code of the adoptai/zapi GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 38 files (178.6 KB, approximately 42.0k tokens) and a symbol index of 98 extracted functions, classes, methods, constants, and types. It can be fed to OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
