[
  {
    "path": ".env.example",
    "content": "TAAPI_API_KEY=your_taapi_key_here  # From https://taapi.io\nHYPERLIQUID_PRIVATE_KEY=0x_your_private_key_here  # Wallet private key \nOPENROUTER_API_KEY=your_openrouter_key_here  # From https://openrouter.ai\nASSETS=\"BTC ETH SOL BNB ZEC EIGEN\"\nINTERVAL=\"5m\"\nLLM_MODEL=\"x-ai/grok-4\"\n# Optional: OPENROUTER_REFERER=https://your-site.com, OPENROUTER_APP_TITLE=trading-agent"
  },
  {
    "path": ".gitignore",
    "content": "# Environments\n.env\n.env.*\n!.env.example\n\n# Python\n__pycache__/\n*.py[cod]\n*.pyo\n*.pyd\n*.egg-info/\n*.egg\n\n# Virtual envs\n.venv/\nvenv/\n\n# Editors/OS\n.DS_Store\n.idea/\n.vscode/\n\n# Caches\n.pytest_cache/\n.mypy_cache/\n.cache/\n\nllm_requests.log\ntrading_history.log\n*.log\ndiary.jsonl"
  },
  {
    "path": "Dockerfile",
    "content": "FROM python:3.12-slim\n\n# System deps\nRUN apt-get update && apt-get install -y --no-install-recommends \\\n    build-essential curl ca-certificates git && \\\n    rm -rf /var/lib/apt/lists/*\n\nWORKDIR /app\n\n# Copy project metadata\nCOPY pyproject.toml poetry.lock ./\n\n# Install Poetry and dependencies into the system interpreter (no virtualenv)\nENV POETRY_VIRTUALENVS_CREATE=false \\\n    POETRY_NO_INTERACTION=1\nRUN pip install --no-cache-dir poetry && \\\n    poetry install --no-interaction --no-ansi --no-root\n\n# Copy source\nCOPY src ./src\n\n# API defaults\nENV APP_PORT=3000\nEXPOSE 3000\n\n# Default command: run as a module so absolute imports keep working\nENTRYPOINT [\"poetry\", \"run\", \"python\", \"-m\", \"src.main\"]\n"
  },
  {
    "path": "README.md",
    "content": "# Nocturne: AI Trading Agent on Hyperliquid\n\nThis project implements an AI-powered trading agent that uses large language models to analyze real-time market data from TAAPI, make informed trading decisions, and execute trades on the Hyperliquid decentralized exchange. The agent runs in a continuous loop, monitoring specified cryptocurrency assets at configurable intervals, using technical indicators to decide on buy/sell/hold actions, and managing positions with take-profit and stop-loss orders.\n\n## Table of Contents\n\n- [Disclaimer](#disclaimer)\n- [Architecture](#architecture)\n- [Nocturne Live Agents](#nocturne-live-agents)\n- [Structure](#structure)\n- [Env Configuration](#env-configuration)\n- [Usage](#usage)\n- [Tool Calling](#tool-calling)\n- [Deployment to EigenCloud](#deployment-to-eigencloud)\n\n## Disclaimer\n\nThere is no guarantee of any returns. This code has not been audited. Please use at your own risk.\n\n## Architecture\n\nSee the full [Architecture Documentation](docs/ARCHITECTURE.md) for subsystems, data flow, and design principles.\n\n![Architecture Diagram](docs/architecture.png)\n\n## Nocturne Live Agents\n\n- GPT-5 Pro: [Portfolio Dashboard](https://hypurrscan.io/address/0xa049db4b3dfcb25c3092891010a629d987d26113) | [Live Logs](https://35.190.43.182/logs/0xC0BE8E55f469c1a04c0F6d04356828C5793d8a9D) (Seeded with $200)\n- DeepSeek R1: [Portfolio Dashboard](https://hypurrscan.io/address/0xa663c80d86fd7c045d9927bb6344d7a5827d31db) | [Live Logs](https://35.190.43.182/logs/0x4da68B78ef40D12f378b8498120f2F5A910Af1aD) (Seeded with $100) -- PAUSED\n- Grok 4: [Portfolio Dashboard](https://hypurrscan.io/address/0x3c71f3cf324d0133558c81d42543115ef1a2be79) | [Live Logs](https://35.190.43.182/logs/0xe6a9f97f99847215ea5813812508e9354a22A2e0) (Seeded with $100) -- PAUSED\n\n## Structure\n- `src/main.py`: Entry point; handles user input and the main trading loop.\n- `src/agent/decision_maker.py`: LLM logic for trade decisions (OpenRouter with tool calling for TAAPI indicators).\n- `src/indicators/taapi_client.py`: Fetches indicators from TAAPI.\n- `src/trading/hyperliquid_api.py`: Executes trades on Hyperliquid.\n- `src/config_loader.py`: Centralized config loaded from `.env`.\n\n## Env Configuration\nPopulate `.env` (use `.env.example` as a reference):\n- TAAPI_API_KEY\n- HYPERLIQUID_PRIVATE_KEY (or LIGHTER_PRIVATE_KEY)\n- OPENROUTER_API_KEY\n- LLM_MODEL\n- Optional: OPENROUTER_BASE_URL (`https://openrouter.ai/api/v1`), OPENROUTER_REFERER, OPENROUTER_APP_TITLE\n\n### Obtaining API Keys\n- **TAAPI_API_KEY**: Sign up at [TAAPI.io](https://taapi.io/) and generate an API key from your dashboard.\n- **HYPERLIQUID_PRIVATE_KEY**: Generate an Ethereum-compatible private key for Hyperliquid, e.g. with MetaMask or the `eth_account` library. For security, never share this key.\n- **OPENROUTER_API_KEY**: Create an account at [OpenRouter.ai](https://openrouter.ai/), then generate an API key in your account settings.\n- **LLM_MODEL**: No key needed; specify a model name like \"x-ai/grok-4\" (see the OpenRouter models list).\n\n## Usage\nRun: `poetry run python -m src.main --assets BTC ETH --interval 1h`\n\n### Local API Endpoints\nWhile the agent runs, it also serves a minimal API:\n- `GET /diary?limit=200` — returns recent JSONL diary entries as JSON.\n- `GET /logs?path=llm_requests.log&limit=2000` — tails the specified log file.\n\nConfigure the bind host/port via env:\n- `API_HOST` (default `0.0.0.0`)\n- `API_PORT` or `APP_PORT` (default `3000`)\n\nDocker:\n```bash\ndocker build --platform linux/amd64 -t trading-agent .\ndocker run --rm -p 3000:3000 --env-file .env trading-agent\n# Now: curl http://localhost:3000/diary\n```\n\n## Tool Calling\nThe agent can dynamically fetch any TAAPI indicator (e.g., EMA, RSI) via tool calls.
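As a rough sketch (assuming TAAPI's standard REST endpoint; the parameters mirror what the agent's tool handler sends), a single tool call corresponds to a plain HTTP request such as:\n\n```bash\ncurl \"https://api.taapi.io/rsi?secret=$TAAPI_API_KEY&exchange=binance&symbol=BTC/USDT&interval=5m\"\n```\n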
See [TAAPI Indicators](https://taapi.io/indicators/) and [EMA Example](https://taapi.io/indicators/exponential-moving-average/) for details.\n\n## Deployment to EigenCloud\n\nEigenCloud (via the EigenX CLI) allows deploying this trading agent in a Trusted Execution Environment (TEE) with secure key management.\n\n### Prerequisites\n- An allowlisted Ethereum account (Sepolia for testnet). Request onboarding at [EigenCloud Onboarding](https://onboarding.eigencloud.xyz).\n- Docker installed.\n- Sepolia ETH for deployments.\n\n### Installation\n#### macOS/Linux\n```bash\ncurl -fsSL https://eigenx-scripts.s3.us-east-1.amazonaws.com/install-eigenx.sh | bash\n```\n\n#### Windows\n```bash\ncurl -fsSL https://eigenx-scripts.s3.us-east-1.amazonaws.com/install-eigenx.ps1 | powershell -\n```\n\n### Initial Setup\n```bash\ndocker login\neigenx auth login  # Or: eigenx auth generate --store (if you don't have an Ethereum account; keep it separate from your trading account)\n```\n\n### Deploy the Agent\nFrom the project directory:\n```bash\ncp .env.example .env\n# Edit .env: set ASSETS, INTERVAL, API keys\neigenx app deploy\n```\n\n### Monitoring\n```bash\neigenx app info --watch\neigenx app logs --watch\n```\n\n### Updates\nEdit code or `.env`, then:\n```bash\neigenx app upgrade <app-name>\n```\n\nFor the full CLI reference, see the [EigenX Documentation](https://github.com/Layr-Labs/eigenx-cli).\n"
  },
  {
    "path": "docs/ARCHITECTURE.md",
    "content": "## Trading Agent Architecture (High-Level)\n\nThis document outlines the end-to-end flow of the trading agent at a conceptual level. It focuses on subsystems, data flows, and guardrails rather than specific functions.\n\n### Subsystems\n- Config/Env: Centralized runtime settings from `.env` (keys, model, assets, interval).\n- Agent Runtime Loop: Schedules periodic decisions per `--interval` and coordinates all subsystems.\n- Context Builder: Prepares the prompt context with authoritative exchange state, indicators, recent fills, active orders, local diary, and sampled perp mid prices.\n- Decision Engine:\n  - Primary LLM: Produces structured trade decisions for all assets.\n  - Sanitizer LLM: Fast, schema-enforcing post-processor that coerces malformed outputs into the exact JSON array.\n- Risk/Collateral Gate: Validates proposed allocations vs available capital/leverage constraints (and can scale/hold when insufficient).\n- Execution Layer: Places market/trigger orders and extracts order identifiers.\n- Reconciliation: Resolves local intent vs exchange truth (positions/open orders/fills), purges stale local state, and logs outcomes.\n- Observability: Minimal HTTP API to fetch diary and logs for debugging/telemetry.\n\n### Data Principles\n- Authoritative Source: Exchange state (positions, open orders, fills, mids) always supersedes local intent.\n- Perp-Only Pricing: Price context comes from Hyperliquid mids; no spot/perp basis mixing.\n- Compact Signals: Indicators (5m/4h EMA/MACD/RSI) and short sampled price histories keep context lean and informative.\n- Time Semantics: Timestamps are UTC ISO; MinutesOpen computed from stored open times.\n\n### Robustness\n- Structured Outputs: Use JSON Schema with strict mode; fallback to sanitizer.\n- Retry Strategy: Single retry with stricter instruction to output array-only JSON.\n- Reconciliation: Regularly remove stale active trades when no position and no orders exist; log reconcile events.\n- Logging: 
Requests/responses and diary entries recorded locally for traceability.\n"
  },
  {
    "path": "pyproject.toml",
    "content": "[project]\nname = \"trading-agent\"\nversion = \"0.1.0\"\ndescription = \"\"\nauthors = [\n    {name = \"Gajesh Naik\",email = \"26431906+Gajesh2007@users.noreply.github.com\"}\n]\nreadme = \"README.md\"\nrequires-python = \">=3.12,<4\"\ndependencies = [\n    \"hyperliquid-python-sdk (>=0.20.0,<0.21.0)\",\n    \"python-dotenv (>=1.1.1,<2.0.0)\",\n    \"web3 (>=7.14.0,<8.0.0)\",\n    \"aiohttp (>=3.13.1,<4.0.0)\",\n    \"openai (>=2.5.0,<3.0.0)\",\n    \"requests (>=2.32.5,<3.0.0)\",\n    \"rich (>=14.2.0,<15.0.0)\"\n]\n\n\n[build-system]\nrequires = [\"poetry-core>=2.0.0,<3.0.0\"]\nbuild-backend = \"poetry.core.masonry.api\"\n"
  },
  {
    "path": "src/__init__.py",
    "content": ""
  },
  {
    "path": "src/agent/__init__.py",
    "content": ""
  },
  {
    "path": "src/agent/decision_maker.py",
    "content": "\"\"\"Decision-making agent that orchestrates LLM prompts and indicator lookups.\"\"\"\n\nimport requests\nfrom src.config_loader import CONFIG\nfrom src.indicators.taapi_client import TAAPIClient\nimport json\nimport logging\nfrom datetime import datetime\n\nclass TradingAgent:\n    \"\"\"High-level trading agent that delegates reasoning to an LLM service.\"\"\"\n\n    def __init__(self):\n        \"\"\"Initialize LLM configuration, metadata headers, and indicator helper.\"\"\"\n        self.model = CONFIG[\"llm_model\"]\n        self.api_key = CONFIG[\"openrouter_api_key\"]\n        base = CONFIG[\"openrouter_base_url\"]\n        self.base_url = f\"{base}/chat/completions\"\n        self.referer = CONFIG.get(\"openrouter_referer\")\n        self.app_title = CONFIG.get(\"openrouter_app_title\")\n        self.taapi = TAAPIClient()\n        # Fast/cheap sanitizer model to normalize outputs on parse failures\n        self.sanitize_model = CONFIG.get(\"sanitize_model\") or \"openai/gpt-5\"\n\n    def decide_trade(self, assets, context):\n        \"\"\"Decide for multiple assets in one call.\n\n        Args:\n            assets: Iterable of asset tickers to score.\n            context: Structured market/account state forwarded to the LLM.\n\n        Returns:\n            List of trade decision payloads, one per asset.\n        \"\"\"\n        return self._decide(context, assets=assets)\n\n    def _decide(self, context, assets):\n        \"\"\"Dispatch decision request to the LLM and enforce output contract.\"\"\"\n        system_prompt = (\n            \"You are a rigorous QUANTITATIVE TRADER and interdisciplinary MATHEMATICIAN-ENGINEER optimizing risk-adjusted returns for perpetual futures under real execution, margin, and funding constraints.\\n\"\n            \"You will receive market + account context for SEVERAL assets, including:\\n\"\n            f\"- assets = {json.dumps(assets)}\\n\"\n            \"- per-asset intraday (5m) and higher-timeframe 
(4h) metrics\\n\"\n            \"- Active Trades with Exit Plans\\n\"\n            \"- Recent Trading History\\n\\n\"\n            \"Always use the 'current time' provided in the user message to evaluate any time-based conditions, such as cooldown expirations or timed exit plans.\\n\\n\"\n            \"Your goal: make decisive, first-principles decisions per asset that minimize churn while capturing edge.\\n\\n\"\n            \"Aggressively pursue setups where calculated risk is outweighed by expected edge; size positions so downside is controlled while upside remains meaningful.\\n\\n\"\n            \"Core policy (low-churn, position-aware)\\n\"\n            \"1) Respect prior plans: If an active trade has an exit_plan with explicit invalidation (e.g., “close if 4h close above EMA50”), DO NOT close or flip early unless that invalidation (or a stronger one) has occurred.\\n\"\n            \"2) Hysteresis: Require stronger evidence to CHANGE a decision than to keep it. Only flip direction if BOTH:\\n\"\n            \"   a) Higher-timeframe structure supports the new direction (e.g., 4h EMA20 vs EMA50 and/or MACD regime), AND\\n\"\n            \"   b) Intraday structure confirms with a decisive break beyond ~0.5×ATR (recent) and momentum alignment (MACD or RSI slope).\\n\"\n            \"   Otherwise, prefer HOLD or adjust TP/SL.\\n\"\n            \"3) Cooldown: After opening, adding, reducing, or flipping, impose a self-cooldown of at least 3 bars of the decision timeframe (e.g., 3×5m = 15m) before another direction change, unless a hard invalidation occurs. Encode this in exit_plan (e.g., “cooldown_bars:3 until 2025-10-19T15:55Z”). You must honor your own cooldowns on future cycles.\\n\"\n            \"4) Funding is a tilt, not a trigger: Do NOT open/close/flip solely due to funding unless expected funding over your intended holding horizon meaningfully exceeds expected edge (e.g., > ~0.25×ATR). 
Consider that funding accrues discretely and slowly relative to 5m bars.\\n\"\n            \"5) Overbought/oversold ≠ reversal by itself: Treat RSI extremes as risk-of-pullback. You need structure + momentum confirmation to bet against trend. Prefer tightening stops or taking partial profits over instant flips.\\n\"\n            \"6) Prefer adjustments over exits: If the thesis weakens but is not invalidated, first consider: tighten stop (e.g., to a recent swing or ATR multiple), trail TP, or reduce size. Flip only on hard invalidation + fresh confluence.\\n\\n\"\n            \"Decision discipline (per asset)\\n\"\n            \"- Choose one: buy / sell / hold.\\n\"\n            \"- Proactively harvest profits when price action presents a clear, high-quality opportunity that aligns with your thesis.\\n\"\n            \"- You control allocation_usd.\\n\"\n            \"- TP/SL sanity:\\n\"\n            \"  • BUY: tp_price > current_price, sl_price < current_price\\n\"\n            \"  • SELL: tp_price < current_price, sl_price > current_price\\n\"\n            \"  If sensible TP/SL cannot be set, use null and explain the logic.\\n\"\n            \"- exit_plan must include at least ONE explicit invalidation trigger and may include cooldown guidance you will follow later.\\n\\n\"\n            \"Leverage policy (perpetual futures)\\n\"\n            \"- YOU CAN USE LEVERAGE, AT LEAST 3X TO GET BETTER RETURNS, BUT KEEP IT WITHIN 10X IN TOTAL\\n\"\n            \"- In high volatility (elevated ATR) or during funding spikes, reduce or avoid leverage.\\n\"\n            \"- Treat allocation_usd as notional exposure; keep it consistent with safe leverage and available margin.\\n\\n\"\n            \"Tool usage\\n\"\n            \"- Aggressively leverage fetch_taapi_indicator whenever an additional datapoint could sharpen your thesis; keep parameters minimal (indicator, symbol like \\\"BTC/USDT\\\", interval \\\"5m\\\"/\\\"4h\\\", optional period).\\n\"\n            \"- 
Incorporate tool findings into your reasoning, but NEVER paste raw tool responses into the final JSON—summarize the insight instead.\\n\"\n            \"- Use tools to upgrade your analysis; lack of confidence is a cue to query them before deciding.\\n\\n\"\n            \"Reasoning recipe (first principles)\\n\"\n            \"- Structure (trend, EMAs slope/cross, HH/HL vs LH/LL), Momentum (MACD regime, RSI slope), Liquidity/volatility (ATR, volume), Positioning tilt (funding, OI).\\n\"\n            \"- Favor alignment across 4h and 5m. Counter-trend scalps require stronger intraday confirmation and tighter risk.\\n\\n\"\n            \"Output contract\\n\"\n            \"- Output a STRICT JSON object with exactly two properties in this order:\\n\"\n            \"  • reasoning: long-form string capturing detailed, step-by-step analysis; note where existing information already gives clarity, or where you needed more information to reach a decision (be verbose).\\n\"\n            \"  • trade_decisions: array ordered to match the provided assets list.\\n\"\n            \"- Each item inside trade_decisions must contain the keys {asset, action, allocation_usd, tp_price, sl_price, exit_plan, rationale}.\\n\"\n            \"- Do not emit Markdown or any extra properties.\\n\"\n        )\n        user_prompt = context\n        messages = [\n            {\"role\": \"system\", \"content\": system_prompt},\n            {\"role\": \"user\", \"content\": user_prompt},\n        ]\n\n        tools = [{\n            \"type\": \"function\",\n            \"function\": {\n                \"name\": \"fetch_taapi_indicator\",\n                \"description\": (\"Fetch any TAAPI indicator. Available: ema, sma, rsi, macd, bbands, stochastic, stochrsi, \"\n                    \"adx, atr, cci, dmi, ichimoku, supertrend, vwap, obv, mfi, willr, roc, mom, sar (parabolic), \"\n                    \"fibonacci, pivotpoints, keltner, donchian, awesome, gator, alligator, and 200+ more. 
\"\n                    \"See https://taapi.io/indicators/ for full list and parameters.\"),\n                \"parameters\": {\n                    \"type\": \"object\",\n                    \"properties\": {\n                        \"indicator\": {\"type\": \"string\"},\n                        \"symbol\": {\"type\": \"string\"},\n                        \"interval\": {\"type\": \"string\"},\n                        \"period\": {\"type\": \"integer\"},\n                        \"backtrack\": {\"type\": \"integer\"},\n                        \"other_params\": {\"type\": \"object\", \"additionalProperties\": {\"type\": [\"string\", \"number\", \"boolean\"]}},\n                    },\n                    \"required\": [\"indicator\", \"symbol\", \"interval\"],\n                    \"additionalProperties\": False,\n                },\n            },\n        }]\n\n        headers = {\n            \"Authorization\": f\"Bearer {self.api_key}\",\n            \"Content-Type\": \"application/json\",\n        }\n        if self.referer:\n            headers[\"HTTP-Referer\"] = self.referer\n        if self.app_title:\n            headers[\"X-Title\"] = self.app_title\n\n        def _post(payload):\n            \"\"\"Send a POST request to OpenRouter, logging request and response metadata.\"\"\"\n            # Log the full request payload for debugging\n            logging.info(\"Sending request to OpenRouter (model: %s)\", payload.get('model'))\n            with open(\"llm_requests.log\", \"a\", encoding=\"utf-8\") as f:\n                f.write(f\"\\n\\n=== {datetime.now()} ===\\n\")\n                f.write(f\"Model: {payload.get('model')}\\n\")\n                f.write(f\"Headers: {json.dumps({k: v for k, v in headers.items() if k != 'Authorization'})}\\n\")\n                f.write(f\"Payload:\\n{json.dumps(payload, indent=2)}\\n\")\n            resp = requests.post(self.base_url, headers=headers, json=payload, timeout=60)\n            logging.info(\"Received response 
from OpenRouter (status: %s)\", resp.status_code)\n            if resp.status_code != 200:\n                logging.error(\"OpenRouter error: %s - %s\", resp.status_code, resp.text)\n                with open(\"llm_requests.log\", \"a\", encoding=\"utf-8\") as f:\n                    f.write(f\"ERROR Response: {resp.status_code} - {resp.text}\\n\")\n            resp.raise_for_status()\n            return resp.json()\n\n        def _sanitize_output(raw_content: str, assets_list):\n            \"\"\"Coerce arbitrary LLM output into the required reasoning + decisions schema.\"\"\"\n            try:\n                schema = {\n                    \"type\": \"object\",\n                    \"properties\": {\n                        \"reasoning\": {\"type\": \"string\"},\n                        \"trade_decisions\": {\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"object\",\n                                \"properties\": {\n                                    \"asset\": {\"type\": \"string\", \"enum\": assets_list},\n                                    \"action\": {\"type\": \"string\", \"enum\": [\"buy\", \"sell\", \"hold\"]},\n                                    \"allocation_usd\": {\"type\": \"number\"},\n                                    \"tp_price\": {\"type\": [\"number\", \"null\"]},\n                                    \"sl_price\": {\"type\": [\"number\", \"null\"]},\n                                    \"exit_plan\": {\"type\": \"string\"},\n                                    \"rationale\": {\"type\": \"string\"},\n                                },\n                                \"required\": [\"asset\", \"action\", \"allocation_usd\", \"tp_price\", \"sl_price\", \"exit_plan\", \"rationale\"],\n                                \"additionalProperties\": False,\n                            },\n                            \"minItems\": 1,\n                        }\n   
                 },\n                    \"required\": [\"reasoning\", \"trade_decisions\"],\n                    \"additionalProperties\": False,\n                }\n                payload = {\n                    \"model\": self.sanitize_model,\n                    \"messages\": [\n                        {\"role\": \"system\", \"content\": (\n                            \"You are a strict JSON normalizer. Return ONLY a JSON object matching the provided JSON Schema. \"\n                            \"If input is wrapped or has prose/markdown, fix it. Do not add fields.\"\n                        )},\n                        {\"role\": \"user\", \"content\": raw_content},\n                    ],\n                    \"response_format\": {\n                        \"type\": \"json_schema\",\n                        \"json_schema\": {\n                            \"name\": \"trade_decisions\",\n                            \"strict\": True,\n                            \"schema\": schema,\n                        },\n                    },\n                    \"temperature\": 0,\n                }\n                resp = _post(payload)\n                msg = resp.get(\"choices\", [{}])[0].get(\"message\", {})\n                parsed = msg.get(\"parsed\")\n                if isinstance(parsed, dict) and \"trade_decisions\" in parsed:\n                    return parsed\n                # fallback: try content\n                content = msg.get(\"content\") or \"{}\"\n                try:\n                    loaded = json.loads(content)\n                    if isinstance(loaded, dict) and \"trade_decisions\" in loaded:\n                        return loaded\n                except (json.JSONDecodeError, KeyError, ValueError, TypeError):\n                    pass\n                return {\"reasoning\": \"\", \"trade_decisions\": []}\n            except (requests.RequestException, json.JSONDecodeError, KeyError, ValueError, TypeError) as se:\n    
            logging.error(\"Sanitize failed: %s\", se)\n                return {\"reasoning\": \"\", \"trade_decisions\": []}\n\n        allow_tools = True\n        allow_structured = True\n\n        def _build_schema():\n            \"\"\"Assemble the JSON schema used for structured LLM responses.\"\"\"\n            base_properties = {\n                \"asset\": {\"type\": \"string\", \"enum\": assets},\n                \"action\": {\"type\": \"string\", \"enum\": [\"buy\", \"sell\", \"hold\"]},\n                \"allocation_usd\": {\"type\": \"number\", \"minimum\": 0},\n                \"tp_price\": {\"type\": [\"number\", \"null\"]},\n                \"sl_price\": {\"type\": [\"number\", \"null\"]},\n                \"exit_plan\": {\"type\": \"string\"},\n                \"rationale\": {\"type\": \"string\"},\n            }\n            required_keys = [\"asset\", \"action\", \"allocation_usd\", \"tp_price\", \"sl_price\", \"exit_plan\", \"rationale\"]\n            return {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"reasoning\": {\"type\": \"string\"},\n                    \"trade_decisions\": {\n                        \"type\": \"array\",\n                        \"items\": {\n                            \"type\": \"object\",\n                            \"properties\": base_properties,\n                            \"required\": required_keys,\n                            \"additionalProperties\": False,\n                        },\n                        \"minItems\": 1,\n                    }\n                },\n                \"required\": [\"reasoning\", \"trade_decisions\"],\n                \"additionalProperties\": False,\n            }\n\n        for _ in range(6):\n            data = {\"model\": self.model, \"messages\": messages}\n            if allow_structured:\n                data[\"response_format\"] = {\n                    \"type\": \"json_schema\",\n                    
\"json_schema\": {\n                        \"name\": \"trade_decisions\",\n                        \"strict\": True,\n                        \"schema\": _build_schema(),\n                    },\n                }\n            if allow_tools:\n                data[\"tools\"] = tools\n                data[\"tool_choice\"] = \"auto\"\n            if CONFIG.get(\"reasoning_enabled\"):\n                data[\"reasoning\"] = {\n                    \"enabled\": True,\n                    \"effort\": CONFIG.get(\"reasoning_effort\") or \"high\",\n                    # \"max_tokens\": CONFIG.get(\"reasoning_max_tokens\") or 100000,\n                    \"exclude\": False,\n                }\n            if CONFIG.get(\"provider_config\") or CONFIG.get(\"provider_quantizations\"):\n                provider_payload = dict(CONFIG.get(\"provider_config\") or {})\n                quantizations = CONFIG.get(\"provider_quantizations\")\n                if quantizations:\n                    provider_payload[\"quantizations\"] = quantizations\n                data[\"provider\"] = provider_payload\n            try:\n                resp_json = _post(data)\n            except requests.HTTPError as e:\n                try:\n                    err = e.response.json()\n                except (json.JSONDecodeError, ValueError, AttributeError):\n                    err = {}\n                raw = (err.get(\"error\", {}).get(\"metadata\", {}) or {}).get(\"raw\", \"\")\n                provider = (err.get(\"error\", {}).get(\"metadata\", {}) or {}).get(\"provider_name\", \"\")\n                if e.response.status_code == 422 and provider.lower().startswith(\"xai\") and \"deserialize\" in raw.lower():\n                    logging.warning(\"xAI rejected tool schema; retrying without tools.\")\n                    if allow_tools:\n                        allow_tools = False\n                        continue\n                # Provider may not support structured outputs / response_format\n  
              err_text = json.dumps(err)\n                if allow_structured and (\"response_format\" in err_text or \"structured\" in err_text or e.response.status_code in (400, 422)):\n                    logging.warning(\"Provider rejected structured outputs; retrying without response_format.\")\n                    allow_structured = False\n                    continue\n                raise\n\n            choice = resp_json[\"choices\"][0]\n            message = choice[\"message\"]\n            messages.append(message)\n\n            tool_calls = message.get(\"tool_calls\") or []\n            if allow_tools and tool_calls:\n                for tc in tool_calls:\n                    if tc.get(\"type\") == \"function\" and tc.get(\"function\", {}).get(\"name\") == \"fetch_taapi_indicator\":\n                        args = json.loads(tc[\"function\"].get(\"arguments\") or \"{}\")\n                        try:\n                            params = {\n                                \"secret\": self.taapi.api_key,\n                                \"exchange\": \"binance\",\n                                \"symbol\": args[\"symbol\"],\n                                \"interval\": args[\"interval\"],\n                            }\n                            if args.get(\"period\") is not None:\n                                params[\"period\"] = args[\"period\"]\n                            if args.get(\"backtrack\") is not None:\n                                params[\"backtrack\"] = args[\"backtrack\"]\n                            if isinstance(args.get(\"other_params\"), dict):\n                                params.update(args[\"other_params\"])\n                            ind_resp = requests.get(f\"{self.taapi.base_url}{args['indicator']}\", params=params, timeout=30).json()\n                            messages.append({\n                                \"role\": \"tool\",\n                                \"tool_call_id\": tc.get(\"id\"),\n              
                  \"name\": \"fetch_taapi_indicator\",\n                                \"content\": json.dumps(ind_resp),\n                            })\n                        except (requests.RequestException, json.JSONDecodeError, KeyError, ValueError) as ex:\n                            messages.append({\n                                \"role\": \"tool\",\n                                \"tool_call_id\": tc.get(\"id\"),\n                                \"name\": \"fetch_taapi_indicator\",\n                                \"content\": f\"Error: {str(ex)}\",\n                            })\n                continue\n\n            try:\n                # Prefer parsed field from structured outputs if present\n                if isinstance(message.get(\"parsed\"), dict):\n                    parsed = message.get(\"parsed\")\n                else:\n                    content = message.get(\"content\") or \"{}\"\n                    parsed = json.loads(content)\n\n                if not isinstance(parsed, dict):\n                    logging.error(\"Expected dict payload, got: %s; attempting sanitize\", type(parsed))\n                    sanitized = _sanitize_output(content if 'content' in locals() else json.dumps(parsed), assets)\n                    if sanitized.get(\"trade_decisions\"):\n                        return sanitized\n                    return {\"reasoning\": \"\", \"trade_decisions\": []}\n\n                reasoning_text = parsed.get(\"reasoning\", \"\") or \"\"\n                decisions = parsed.get(\"trade_decisions\")\n\n                if isinstance(decisions, list):\n                    normalized = []\n                    for item in decisions:\n                        if isinstance(item, dict):\n                            item.setdefault(\"allocation_usd\", 0.0)\n                            item.setdefault(\"tp_price\", None)\n                            item.setdefault(\"sl_price\", None)\n                            
item.setdefault(\"exit_plan\", \"\")\n                            item.setdefault(\"rationale\", \"\")\n                            normalized.append(item)\n                        elif isinstance(item, list) and len(item) >= 7:\n                            normalized.append({\n                                \"asset\": item[0],\n                                \"action\": item[1],\n                                \"allocation_usd\": float(item[2]) if item[2] else 0.0,\n                                \"tp_price\": float(item[3]) if item[3] and item[3] != \"null\" else None,\n                                \"sl_price\": float(item[4]) if item[4] and item[4] != \"null\" else None,\n                                \"exit_plan\": item[5] if len(item) > 5 else \"\",\n                                \"rationale\": item[6] if len(item) > 6 else \"\"\n                            })\n                    return {\"reasoning\": reasoning_text, \"trade_decisions\": normalized}\n\n                logging.error(\"trade_decisions missing or invalid; attempting sanitize\")\n                sanitized = _sanitize_output(content if 'content' in locals() else json.dumps(parsed), assets)\n                if sanitized.get(\"trade_decisions\"):\n                    return sanitized\n                return {\"reasoning\": reasoning_text, \"trade_decisions\": []}\n            except (json.JSONDecodeError, KeyError, ValueError, TypeError) as e:\n                # 'content' may be unbound when the structured 'parsed' path was taken\n                raw = content if 'content' in locals() else \"\"\n                logging.error(\"JSON parse error: %s, content: %s\", e, raw[:200])\n                # Try sanitizer as last resort\n                sanitized = _sanitize_output(raw, assets)\n                if sanitized.get(\"trade_decisions\"):\n                    return sanitized\n                return {\n                    \"reasoning\": \"Parse error\",\n                    \"trade_decisions\": [{\n                        \"asset\": a,\n                        \"action\": \"hold\",\n                        \"allocation_usd\": 0.0,\n 
                       \"tp_price\": None,\n                        \"sl_price\": None,\n                        \"exit_plan\": \"\",\n                        \"rationale\": \"Parse error\"\n                    } for a in assets]\n                }\n\n        return {\n            \"reasoning\": \"tool loop cap\",\n            \"trade_decisions\": [{\n                \"asset\": a,\n                \"action\": \"hold\",\n                \"allocation_usd\": 0.0,\n                \"tp_price\": None,\n                \"sl_price\": None,\n                \"exit_plan\": \"\",\n                \"rationale\": \"tool loop cap\"\n            } for a in assets]\n        }\n"
  },
  {
    "path": "src/config_loader.py",
    "content": "\"\"\"Centralized environment variable loading for the trading agent configuration.\"\"\"\n\nimport json\nimport os\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n\ndef _get_env(name: str, default: str | None = None, required: bool = False) -> str | None:\n    \"\"\"Fetch an environment variable with optional default and required validation.\"\"\"\n    value = os.getenv(name, default)\n    if required and (value is None or value == \"\"):\n        raise RuntimeError(f\"Missing required environment variable: {name}\")\n    return value\n\n\ndef _get_bool(name: str, default: bool = False) -> bool:\n    raw = os.getenv(name)\n    if raw is None:\n        return default\n    return raw.strip().lower() in {\"1\", \"true\", \"yes\", \"on\"}\n\n\ndef _get_int(name: str, default: int | None = None) -> int | None:\n    raw = os.getenv(name)\n    if raw is None or raw.strip() == \"\":\n        return default\n    try:\n        return int(raw)\n    except ValueError as exc:\n        raise RuntimeError(f\"Invalid integer for {name}: {raw}\") from exc\n\n\ndef _get_json(name: str, default: dict | None = None) -> dict | None:\n    raw = os.getenv(name)\n    if raw is None or raw.strip() == \"\":\n        return default\n    try:\n        parsed = json.loads(raw)\n        if not isinstance(parsed, dict):\n            raise RuntimeError(f\"Environment variable {name} must be a JSON object\")\n        return parsed\n    except json.JSONDecodeError as exc:\n        raise RuntimeError(f\"Invalid JSON for {name}: {raw}\") from exc\n\n\ndef _get_list(name: str, default: list[str] | None = None) -> list[str] | None:\n    raw = os.getenv(name)\n    if raw is None or raw.strip() == \"\":\n        return default\n    raw = raw.strip()\n    # Support JSON-style lists\n    if raw.startswith(\"[\") and raw.endswith(\"]\"):\n        try:\n            parsed = json.loads(raw)\n            if not isinstance(parsed, list):\n                raise RuntimeError(f\"Environment 
variable {name} must be a list if using JSON syntax\")\n            return [str(item).strip().strip('\"\\'') for item in parsed if str(item).strip()]\n        except json.JSONDecodeError as exc:\n            raise RuntimeError(f\"Invalid JSON list for {name}: {raw}\") from exc\n    # Fallback: comma separated string\n    values = []\n    for item in raw.split(\",\"):\n        cleaned = item.strip().strip('\"\\'')\n        if cleaned:\n            values.append(cleaned)\n    return values or default\n\n\nCONFIG = {\n    \"taapi_api_key\": _get_env(\"TAAPI_API_KEY\", required=True),\n    \"hyperliquid_private_key\": _get_env(\"HYPERLIQUID_PRIVATE_KEY\") or _get_env(\"LIGHTER_PRIVATE_KEY\"),\n    \"mnemonic\": _get_env(\"MNEMONIC\"),\n    # Hyperliquid network/base URL overrides\n    \"hyperliquid_base_url\": _get_env(\"HYPERLIQUID_BASE_URL\"),\n    \"hyperliquid_network\": _get_env(\"HYPERLIQUID_NETWORK\", \"mainnet\"),\n    # LLM via OpenRouter\n    \"openrouter_api_key\": _get_env(\"OPENROUTER_API_KEY\", required=True),\n    \"openrouter_base_url\": _get_env(\"OPENROUTER_BASE_URL\", \"https://openrouter.ai/api/v1\"),\n    \"openrouter_referer\": _get_env(\"OPENROUTER_REFERER\"),\n    \"openrouter_app_title\": _get_env(\"OPENROUTER_APP_TITLE\", \"trading-agent\"),\n    \"llm_model\": _get_env(\"LLM_MODEL\", \"x-ai/grok-4\"),\n    # Reasoning tokens\n    \"reasoning_enabled\": _get_bool(\"REASONING_ENABLED\", False),\n    \"reasoning_effort\": _get_env(\"REASONING_EFFORT\", \"high\"),\n    # Provider routing\n    \"provider_config\": _get_json(\"PROVIDER_CONFIG\"),\n    \"provider_quantizations\": _get_list(\"PROVIDER_QUANTIZATIONS\"),\n    # Runtime controls via env\n    \"assets\": _get_env(\"ASSETS\"),  # e.g., \"BTC ETH SOL\" or \"BTC,ETH,SOL\"\n    \"interval\": _get_env(\"INTERVAL\"),  # e.g., \"5m\", \"1h\"\n    # API server\n    \"api_host\": _get_env(\"API_HOST\", \"0.0.0.0\"),\n    \"api_port\": _get_env(\"APP_PORT\") or _get_env(\"API_PORT\") or 
\"3000\",\n}\n"
  },
  {
    "path": "src/indicators/__init__.py",
    "content": ""
  },
  {
    "path": "src/indicators/taapi_client.py",
    "content": "\"\"\"Client helper for interacting with the TAAPI technical analysis API.\"\"\"\n\nimport requests\nimport os\nimport time\nimport logging\nfrom src.config_loader import CONFIG\n\n\nclass TAAPIClient:\n    \"\"\"Fetches TA indicators with retry/backoff semantics for resilience.\"\"\"\n\n    def __init__(self):\n        \"\"\"Initialize TAAPI credentials and base URL.\"\"\"\n        self.api_key = CONFIG[\"taapi_api_key\"]\n        self.base_url = \"https://api.taapi.io/\"\n\n    def _get_with_retry(self, url, params, retries=3, backoff=0.5):\n        \"\"\"Perform a GET request with exponential backoff retry logic.\"\"\"\n        for attempt in range(retries):\n            try:\n                resp = requests.get(url, params=params, timeout=10)\n                resp.raise_for_status()\n                return resp.json()\n            except requests.HTTPError as e:\n                if e.response.status_code >= 500 and attempt < retries - 1:\n                    wait = backoff * (2 ** attempt)\n                    logging.warning(f\"TAAPI {e.response.status_code}, retrying in {wait}s\")\n                    time.sleep(wait)\n                else:\n                    raise\n            except requests.Timeout as e:\n                if attempt < retries - 1:\n                    wait = backoff * (2 ** attempt)\n                    logging.warning(f\"TAAPI timeout, retrying in {wait}s\")\n                    time.sleep(wait)\n                else:\n                    raise\n        raise RuntimeError(\"Max retries exceeded\")\n\n    def get_indicators(self, asset, interval):\n        \"\"\"Return a curated bundle of intraday indicators for ``asset``.\"\"\"\n        params = {\n            \"secret\": self.api_key,\n            \"exchange\": \"binance\",\n            \"symbol\": f\"{asset}/USDT\",\n            \"interval\": interval\n        }\n        rsi_response = self._get_with_retry(f\"{self.base_url}rsi\", params)\n        macd_response = 
self._get_with_retry(f\"{self.base_url}macd\", params)\n        sma_response = self._get_with_retry(f\"{self.base_url}sma\", params)\n        ema_response = self._get_with_retry(f\"{self.base_url}ema\", params)\n        bbands_response = self._get_with_retry(f\"{self.base_url}bbands\", params)\n        return {\n            \"rsi\": rsi_response.get(\"value\"),\n            \"macd\": macd_response,\n            \"sma\": sma_response.get(\"value\"),\n            \"ema\": ema_response.get(\"value\"),\n            \"bbands\": bbands_response\n        }\n\n    def get_historical_indicator(self, indicator, symbol, interval, results=10, params=None):\n        \"\"\"Fetch historical indicator data with optional overrides.\"\"\"\n        base_params = {\n            \"secret\": self.api_key,\n            \"exchange\": \"binance\",\n            \"symbol\": symbol,\n            \"interval\": interval,\n            \"results\": results\n        }\n        if params:\n            base_params.update(params)\n        response = self._get_with_retry(f\"{self.base_url}{indicator}\", base_params)\n        return response\n\n    def fetch_series(self, indicator: str, symbol: str, interval: str, results: int = 10, params: dict | None = None, value_key: str = \"value\") -> list:\n        \"\"\"Fetch and normalize a historical indicator series.\n\n        Args:\n            indicator: TAAPI indicator slug (e.g. ``\"ema\"``).\n            symbol: Market pair identifier (e.g. 
``\"BTC/USDT\"``.\n            interval: Candle interval requested from TAAPI.\n            results: Number of datapoints to request.\n            params: Additional TAAPI query parameters.\n            value_key: Key to extract from the TAAPI response payload.\n\n        Returns:\n            List of floats rounded to 4 decimals, or an empty list on error.\n        \"\"\"\n        try:\n            data = self.get_historical_indicator(indicator, symbol, interval, results=results, params=params)\n            if isinstance(data, dict):\n                # Simple indicators: {\"value\": [1,2,3]}\n                if value_key in data and isinstance(data[value_key], list):\n                    return [round(v, 4) if isinstance(v, (int, float)) else v for v in data[value_key]]\n                # Error response\n                if \"error\" in data:\n                    logging.error(f\"TAAPI error for {indicator} {symbol} {interval}: {data.get('error')}\")\n                    return []\n            return []\n        except Exception as e:\n            logging.error(f\"TAAPI fetch_series exception for {indicator}: {e}\")\n            return []\n\n    def fetch_value(self, indicator: str, symbol: str, interval: str, params: dict | None = None, key: str = \"value\"):\n        \"\"\"Fetch a single indicator value for the latest candle.\"\"\"\n        try:\n            base_params = {\n                \"secret\": self.api_key,\n                \"exchange\": \"binance\",\n                \"symbol\": symbol,\n                \"interval\": interval\n            }\n            if params:\n                base_params.update(params)\n            data = self._get_with_retry(f\"{self.base_url}{indicator}\", base_params)\n            if isinstance(data, dict):\n                val = data.get(key)\n                return round(val, 4) if isinstance(val, (int, float)) else val\n            return None\n        except Exception:\n            return None\n"
  },
  {
    "path": "src/main.py",
    "content": "\"\"\"Entry-point script that wires together the trading agent, data feeds, and API.\"\"\"\n\nimport sys\nimport argparse\nimport pathlib\nsys.path.append(str(pathlib.Path(__file__).parent.parent))\nfrom src.agent.decision_maker import TradingAgent\nfrom src.indicators.taapi_client import TAAPIClient\nfrom src.trading.hyperliquid_api import HyperliquidAPI\nimport asyncio\nimport logging\nfrom collections import deque, OrderedDict\nfrom datetime import datetime, timezone\nimport math  # For Sharpe\nfrom dotenv import load_dotenv\nimport os\nimport json\nfrom aiohttp import web\nfrom src.utils.formatting import format_number as fmt, format_size as fmt_sz\nfrom src.utils.prompt_utils import json_default, round_or_none, round_series\n\nload_dotenv()\n\nlogging.basicConfig(level=logging.INFO, format=\"%(asctime)s - %(levelname)s - %(message)s\")\n\n\ndef clear_terminal():\n    \"\"\"Clear the terminal screen on Windows or POSIX systems.\"\"\"\n    os.system('cls' if os.name == 'nt' else 'clear')\n\n\ndef get_interval_seconds(interval_str):\n    \"\"\"Convert interval strings like '5m' or '1h' to seconds.\"\"\"\n    if interval_str.endswith('m'):\n        return int(interval_str[:-1]) * 60\n    elif interval_str.endswith('h'):\n        return int(interval_str[:-1]) * 3600\n    elif interval_str.endswith('d'):\n        return int(interval_str[:-1]) * 86400\n    else:\n        raise ValueError(f\"Unsupported interval: {interval_str}\")\n\ndef main():\n    \"\"\"Parse CLI args, bootstrap dependencies, and launch the trading loop.\"\"\"\n    clear_terminal()\n    parser = argparse.ArgumentParser(description=\"LLM-based Trading Agent on Hyperliquid\")\n    parser.add_argument(\"--assets\", type=str, nargs=\"+\", required=False, help=\"Assets to trade, e.g., BTC ETH\")\n    parser.add_argument(\"--interval\", type=str, required=False, help=\"Interval period, e.g., 1h\")\n    args = parser.parse_args()\n\n    # Allow assets/interval via .env (CONFIG) if CLI not 
provided\n    from src.config_loader import CONFIG\n    assets_env = CONFIG.get(\"assets\")\n    interval_env = CONFIG.get(\"interval\")\n    if (not args.assets or len(args.assets) == 0) and assets_env:\n        # Support space or comma separated\n        if \",\" in assets_env:\n            args.assets = [a.strip() for a in assets_env.split(\",\") if a.strip()]\n        else:\n            args.assets = [a.strip() for a in assets_env.split(\" \") if a.strip()]\n    if not args.interval and interval_env:\n        args.interval = interval_env\n\n    if not args.assets or not args.interval:\n        parser.error(\"Please provide --assets and --interval, or set ASSETS and INTERVAL in .env\")\n\n    taapi = TAAPIClient()\n    hyperliquid = HyperliquidAPI()\n    agent = TradingAgent()\n\n\n    start_time = datetime.now(timezone.utc)\n    invocation_count = 0\n    trade_log = []  # For Sharpe: list of returns\n    active_trades = []  # {'asset','is_long','amount','entry_price','tp_oid','sl_oid','exit_plan'}\n    recent_events = deque(maxlen=200)\n    diary_path = \"diary.jsonl\"\n    initial_account_value = None\n    # Perp mid-price history sampled each loop (authoritative, avoids spot/perp basis mismatch)\n    price_history = {}\n\n    print(f\"Starting trading agent for assets: {args.assets} at interval: {args.interval}\")\n\n    def add_event(msg: str):\n        \"\"\"Log an informational event and push it into the recent events deque.\"\"\"\n        logging.info(msg)\n\n    async def run_loop():\n        \"\"\"Main trading loop that gathers data, calls the agent, and executes trades.\"\"\"\n        nonlocal invocation_count, initial_account_value\n        while True:\n            invocation_count += 1\n            minutes_since_start = (datetime.now(timezone.utc) - start_time).total_seconds() / 60\n\n            # Global account state\n            state = await hyperliquid.get_user_state()\n            total_value = state.get('total_value') or state['balance'] + 
sum(p.get('pnl', 0) for p in state['positions'])\n            sharpe = calculate_sharpe(trade_log)\n\n            account_value = total_value\n            if initial_account_value is None:\n                initial_account_value = account_value\n            total_return_pct = ((account_value - initial_account_value) / initial_account_value * 100.0) if initial_account_value else 0.0\n\n            positions = []\n            for pos_wrap in state['positions']:\n                pos = pos_wrap\n                coin = pos.get('coin')\n                current_px = await hyperliquid.get_current_price(coin) if coin else None\n                positions.append({\n                    \"symbol\": coin,\n                    \"quantity\": round_or_none(pos.get('szi'), 6),\n                    \"entry_price\": round_or_none(pos.get('entryPx'), 2),\n                    \"current_price\": round_or_none(current_px, 2),\n                    \"liquidation_price\": round_or_none(pos.get('liquidationPx') or pos.get('liqPx'), 2),\n                    \"unrealized_pnl\": round_or_none(pos.get('pnl'), 4),\n                    \"leverage\": pos.get('leverage')\n                })\n\n            recent_diary = []\n            try:\n                with open(diary_path, \"r\") as f:\n                    lines = f.readlines()\n                    for line in lines[-10:]:\n                        entry = json.loads(line)\n                        recent_diary.append(entry)\n            except Exception:\n                pass\n\n            open_orders_struct = []\n            try:\n                open_orders = await hyperliquid.get_open_orders()\n                for o in open_orders[:50]:\n                    open_orders_struct.append({\n                        \"coin\": o.get('coin'),\n                        \"oid\": o.get('oid'),\n                        \"is_buy\": o.get('isBuy'),\n                        \"size\": round_or_none(o.get('sz'), 6),\n                        \"price\": 
round_or_none(o.get('px'), 2),\n                        \"trigger_price\": round_or_none(o.get('triggerPx'), 2),\n                        \"order_type\": o.get('orderType')\n                    })\n            except Exception:\n                open_orders = []\n\n            # Reconcile active trades\n            try:\n                assets_with_positions = set()\n                for pos in state['positions']:\n                    try:\n                        if abs(float(pos.get('szi') or 0)) > 0:\n                            assets_with_positions.add(pos.get('coin'))\n                    except Exception:\n                        continue\n                assets_with_orders = {o.get('coin') for o in (open_orders or []) if o.get('coin')}\n                for tr in active_trades[:]:\n                    asset = tr.get('asset')\n                    if asset not in assets_with_positions and asset not in assets_with_orders:\n                        add_event(f\"Reconciling stale active trade for {asset} (no position, no orders)\")\n                        active_trades.remove(tr)\n                        with open(diary_path, \"a\") as f:\n                            f.write(json.dumps({\n                                \"timestamp\": datetime.now(timezone.utc).isoformat(),\n                                \"asset\": asset,\n                                \"action\": \"reconcile_close\",\n                                \"reason\": \"no_position_no_orders\",\n                                \"opened_at\": tr.get('opened_at')\n                            }) + \"\\n\")\n            except Exception:\n                pass\n\n            recent_fills_struct = []\n            try:\n                fills = await hyperliquid.get_recent_fills(limit=50)\n                for f_entry in fills[-20:]:\n                    try:\n                        t_raw = f_entry.get('time') or f_entry.get('timestamp')\n                        timestamp = None\n                        if 
t_raw is not None:\n                            try:\n                                t_int = int(t_raw)\n                                if t_int > 1e12:\n                                    timestamp = datetime.fromtimestamp(t_int / 1000, tz=timezone.utc).isoformat()\n                                else:\n                                    timestamp = datetime.fromtimestamp(t_int, tz=timezone.utc).isoformat()\n                            except Exception:\n                                timestamp = str(t_raw)\n                        recent_fills_struct.append({\n                            \"timestamp\": timestamp,\n                            \"coin\": f_entry.get('coin') or f_entry.get('asset'),\n                            \"is_buy\": f_entry.get('isBuy'),\n                            \"size\": round_or_none(f_entry.get('sz') or f_entry.get('size'), 6),\n                            \"price\": round_or_none(f_entry.get('px') or f_entry.get('price'), 2)\n                        })\n                    except Exception:\n                        continue\n            except Exception:\n                pass\n\n            dashboard = {\n                \"total_return_pct\": round(total_return_pct, 2),\n                \"balance\": round_or_none(state['balance'], 2),\n                \"account_value\": round_or_none(account_value, 2),\n                \"sharpe_ratio\": round_or_none(sharpe, 3),\n                \"positions\": positions,\n                \"active_trades\": [\n                    {\n                        \"asset\": tr.get('asset'),\n                        \"is_long\": tr.get('is_long'),\n                        \"amount\": round_or_none(tr.get('amount'), 6),\n                        \"entry_price\": round_or_none(tr.get('entry_price'), 2),\n                        \"tp_oid\": tr.get('tp_oid'),\n                        \"sl_oid\": tr.get('sl_oid'),\n                        \"exit_plan\": tr.get('exit_plan'),\n                        
\"opened_at\": tr.get('opened_at')\n                    }\n                    for tr in active_trades\n                ],\n                \"open_orders\": open_orders_struct,\n                \"recent_diary\": recent_diary,\n                \"recent_fills\": recent_fills_struct,\n            }\n\n            # Gather data for ALL assets first\n            market_sections = []\n            asset_prices = {}\n            for asset in args.assets:\n                try:\n                    current_price = await hyperliquid.get_current_price(asset)\n                    asset_prices[asset] = current_price\n                    if asset not in price_history:\n                        price_history[asset] = deque(maxlen=60)\n                    price_history[asset].append({\"t\": datetime.now(timezone.utc).isoformat(), \"mid\": round_or_none(current_price, 2)})\n                    oi = await hyperliquid.get_open_interest(asset)\n                    funding = await hyperliquid.get_funding_rate(asset)\n\n                    intraday_tf = \"5m\"\n                    ema_series = taapi.fetch_series(\"ema\", f\"{asset}/USDT\", intraday_tf, results=10, params={\"period\": 20}, value_key=\"value\")\n                    macd_series = taapi.fetch_series(\"macd\", f\"{asset}/USDT\", intraday_tf, results=10, value_key=\"valueMACD\")\n                    rsi7_series = taapi.fetch_series(\"rsi\", f\"{asset}/USDT\", intraday_tf, results=10, params={\"period\": 7}, value_key=\"value\")\n                    rsi14_series = taapi.fetch_series(\"rsi\", f\"{asset}/USDT\", intraday_tf, results=10, params={\"period\": 14}, value_key=\"value\")\n\n                    lt_ema20 = taapi.fetch_value(\"ema\", f\"{asset}/USDT\", \"4h\", params={\"period\": 20}, key=\"value\")\n                    lt_ema50 = taapi.fetch_value(\"ema\", f\"{asset}/USDT\", \"4h\", params={\"period\": 50}, key=\"value\")\n                    lt_atr3 = taapi.fetch_value(\"atr\", f\"{asset}/USDT\", \"4h\", 
params={\"period\": 3}, key=\"value\")\n                    lt_atr14 = taapi.fetch_value(\"atr\", f\"{asset}/USDT\", \"4h\", params={\"period\": 14}, key=\"value\")\n                    lt_macd_series = taapi.fetch_series(\"macd\", f\"{asset}/USDT\", \"4h\", results=10, value_key=\"valueMACD\")\n                    lt_rsi_series = taapi.fetch_series(\"rsi\", f\"{asset}/USDT\", \"4h\", results=10, params={\"period\": 14}, value_key=\"value\")\n\n                    recent_mids = [entry[\"mid\"] for entry in list(price_history.get(asset, []))[-10:]]\n                    funding_annualized = round(funding * 24 * 365 * 100, 2) if funding else None\n\n                    market_sections.append({\n                        \"asset\": asset,\n                        \"current_price\": round_or_none(current_price, 2),\n                        \"intraday\": {\n                            \"ema20\": round_or_none(ema_series[-1], 2) if ema_series else None,\n                            \"macd\": round_or_none(macd_series[-1], 2) if macd_series else None,\n                            \"rsi7\": round_or_none(rsi7_series[-1], 2) if rsi7_series else None,\n                            \"rsi14\": round_or_none(rsi14_series[-1], 2) if rsi14_series else None,\n                            \"series\": {\n                                \"ema20\": round_series(ema_series, 2),\n                                \"macd\": round_series(macd_series, 2),\n                                \"rsi7\": round_series(rsi7_series, 2),\n                                \"rsi14\": round_series(rsi14_series, 2)\n                            }\n                        },\n                        \"long_term\": {\n                            \"ema20\": round_or_none(lt_ema20, 2),\n                            \"ema50\": round_or_none(lt_ema50, 2),\n                            \"atr3\": round_or_none(lt_atr3, 2),\n                            \"atr14\": round_or_none(lt_atr14, 2),\n                            
\"macd_series\": round_series(lt_macd_series, 2),\n                            \"rsi_series\": round_series(lt_rsi_series, 2)\n                        },\n                        \"open_interest\": round_or_none(oi, 2),\n                        \"funding_rate\": round_or_none(funding, 8),\n                        \"funding_annualized_pct\": funding_annualized,\n                        \"recent_mid_prices\": recent_mids\n                    })\n                except Exception as e:\n                    add_event(f\"Data gather error {asset}: {e}\")\n                    continue\n\n            # Single LLM call with all assets\n            context_payload = OrderedDict([\n                (\"invocation\", {\n                    \"minutes_since_start\": round(minutes_since_start, 2),\n                    \"current_time\": datetime.now(timezone.utc).isoformat(),\n                    \"invocation_count\": invocation_count\n                }),\n                (\"account\", dashboard),\n                (\"market_data\", market_sections),\n                (\"instructions\", {\n                    \"assets\": args.assets,\n                    \"requirement\": \"Decide actions for all assets and return a strict JSON array matching the schema.\"\n                })\n            ])\n            context = json.dumps(context_payload, default=json_default)\n            add_event(f\"Combined prompt length: {len(context)} chars for {len(args.assets)} assets\")\n            with open(\"prompts.log\", \"a\") as f:\n                f.write(f\"\\n\\n--- {datetime.now()} - ALL ASSETS ---\\n{json.dumps(context_payload, indent=2, default=json_default)}\\n\")\n\n            def _is_failed_outputs(outs):\n                \"\"\"Return True when outputs are missing or clearly invalid.\"\"\"\n                if not isinstance(outs, dict):\n                    return True\n                decisions = outs.get(\"trade_decisions\")\n                if not isinstance(decisions, list) or not 
decisions:\n                    return True\n                try:\n                    return all(\n                        isinstance(o, dict)\n                        and (o.get('action') == 'hold')\n                        and ('parse error' in (o.get('rationale', '').lower()))\n                        for o in decisions\n                    )\n                except Exception:\n                    return True\n\n            try:\n                outputs = agent.decide_trade(args.assets, context)\n                if not isinstance(outputs, dict):\n                    add_event(f\"Invalid output format (expected dict): {outputs}\")\n                    outputs = {}\n            except Exception as e:\n                import traceback\n                add_event(f\"Agent error: {e}\")\n                add_event(f\"Traceback: {traceback.format_exc()}\")\n                outputs = {}\n\n            # Retry once on failure/parse error with a stricter instruction prefix\n            if _is_failed_outputs(outputs):\n                add_event(\"Retrying LLM once due to invalid/parse-error output\")\n                context_retry_payload = OrderedDict([\n                    (\"retry_instruction\", \"Return ONLY the JSON array per schema with no prose.\"),\n                    (\"original_context\", context_payload)\n                ])\n                context_retry = json.dumps(context_retry_payload, default=json_default)\n                try:\n                    outputs = agent.decide_trade(args.assets, context_retry)\n                    if not isinstance(outputs, dict):\n                        add_event(f\"Retry invalid format: {outputs}\")\n                        outputs = {}\n                except Exception as e:\n                    import traceback\n                    add_event(f\"Retry agent error: {e}\")\n                    add_event(f\"Retry traceback: {traceback.format_exc()}\")\n                    outputs = {}\n\n            reasoning_text = 
outputs.get(\"reasoning\", \"\") if isinstance(outputs, dict) else \"\"\n            if reasoning_text:\n                add_event(f\"LLM reasoning summary: {reasoning_text}\")\n\n            # Execute trades for each asset\n            for output in outputs.get(\"trade_decisions\", []) if isinstance(outputs, dict) else []:\n                try:\n                    asset = output.get(\"asset\")\n                    if not asset or asset not in args.assets:\n                        continue\n                    action = output.get(\"action\")\n                    current_price = asset_prices.get(asset, 0)\n                    rationale = output.get(\"rationale\", \"\")\n                    if rationale:\n                        add_event(f\"Decision rationale for {asset}: {rationale}\")\n                    if action in (\"buy\", \"sell\"):\n                        is_buy = action == \"buy\"\n                        alloc_usd = float(output.get(\"allocation_usd\", 0.0))\n                        if alloc_usd <= 0:\n                            add_event(f\"Holding {asset}: zero/negative allocation\")\n                            continue\n                        if not current_price:\n                            add_event(f\"Skipping {asset}: no current price available\")\n                            continue\n                        amount = alloc_usd / current_price\n\n
                        order = await hyperliquid.place_buy_order(asset, amount) if is_buy else await hyperliquid.place_sell_order(asset, amount)\n                        # Confirm by checking recent fills for this asset shortly after placing\n                        await asyncio.sleep(1)\n                        fills_check = await hyperliquid.get_recent_fills(limit=10)\n                        filled = False\n                        for fc in reversed(fills_check):\n                            try:\n                                if (fc.get('coin') == asset or fc.get('asset') == asset):\n                                    filled = True\n                                    break\n                            except Exception:\n                                continue\n
                        trade_log.append({\"type\": action, \"price\": current_price, \"amount\": amount, \"exit_plan\": output[\"exit_plan\"], \"filled\": filled})\n                        tp_oid = None\n                        sl_oid = None\n                        if output[\"tp_price\"]:\n                            tp_order = await hyperliquid.place_take_profit(asset, is_buy, amount, output[\"tp_price\"])\n                            tp_oids = hyperliquid.extract_oids(tp_order)\n                            tp_oid = tp_oids[0] if tp_oids else None\n                            add_event(f\"TP placed {asset} at {output['tp_price']}\")\n                        if output[\"sl_price\"]:\n                            sl_order = await hyperliquid.place_stop_loss(asset, is_buy, amount, output[\"sl_price\"])\n                            sl_oids = hyperliquid.extract_oids(sl_order)\n                            sl_oid = sl_oids[0] if sl_oids else None\n                            add_event(f\"SL placed {asset} at {output['sl_price']}\")\n                        # Reconcile: if opposite-side position exists or TP/SL just filled, clear stale active_trades for this asset\n                        for existing in active_trades[:]:\n                            if existing.get('asset') == asset:\n                                try:\n                                    active_trades.remove(existing)\n                                except ValueError:\n                                    pass\n                        active_trades.append({\n                            \"asset\": asset,\n                            \"is_long\": is_buy,\n                            \"amount\": amount,\n                            \"entry_price\": current_price,\n                            \"tp_oid\": tp_oid,\n                            \"sl_oid\": sl_oid,\n                            \"exit_plan\": output[\"exit_plan\"],\n                            \"opened_at\": datetime.now(timezone.utc).isoformat()\n
                        })\n                        add_event(f\"{action.upper()} {asset} amount {amount:.4f} at ~{current_price}\")\n                        if rationale:\n                            add_event(f\"Post-trade rationale for {asset}: {rationale}\")\n                        # Write to diary after confirming fills status\n                        with open(diary_path, \"a\") as f:\n                            diary_entry = {\n                                \"timestamp\": datetime.now(timezone.utc).isoformat(),\n                                \"asset\": asset,\n                                \"action\": action,\n                                \"allocation_usd\": alloc_usd,\n                                \"amount\": amount,\n                                \"entry_price\": current_price,\n                                \"tp_price\": output.get(\"tp_price\"),\n                                \"tp_oid\": tp_oid,\n                                \"sl_price\": output.get(\"sl_price\"),\n                                \"sl_oid\": sl_oid,\n                                \"exit_plan\": output.get(\"exit_plan\", \"\"),\n                                \"rationale\": output.get(\"rationale\", \"\"),\n                                \"order_result\": str(order),\n                                \"opened_at\": datetime.now(timezone.utc).isoformat(),\n                                \"filled\": filled\n                            }\n                            f.write(json.dumps(diary_entry) + \"\\n\")\n                    else:\n                        add_event(f\"Hold {asset}: {output.get('rationale', '')}\")\n                        # Write hold to diary\n                        with open(diary_path, \"a\") as f:\n                            diary_entry = {\n                                \"timestamp\": datetime.now(timezone.utc).isoformat(),\n                                \"asset\": asset,\n                          
      \"action\": \"hold\",\n                                \"rationale\": output.get(\"rationale\", \"\")\n                            }\n                            f.write(json.dumps(diary_entry) + \"\\n\")\n                except Exception as e:\n                    import traceback\n                    add_event(f\"Execution error {asset}: {e}\")\n\n            await asyncio.sleep(get_interval_seconds(args.interval))\n\n    async def handle_diary(request):\n        \"\"\"Return diary entries as JSON or newline-delimited text.\"\"\"\n        try:\n            raw = request.query.get('raw')\n            download = request.query.get('download')\n            if raw or download:\n                if not os.path.exists(diary_path):\n                    return web.Response(text=\"\", content_type=\"text/plain\")\n                with open(diary_path, \"r\") as f:\n                    data = f.read()\n                headers = {}\n                if download:\n                    headers[\"Content-Disposition\"] = f\"attachment; filename=diary.jsonl\"\n                return web.Response(text=data, content_type=\"text/plain\", headers=headers)\n            limit = int(request.query.get('limit', '200'))\n            with open(diary_path, \"r\") as f:\n                lines = f.readlines()\n            start = max(0, len(lines) - limit)\n            entries = [json.loads(l) for l in lines[start:]]\n            return web.json_response({\"entries\": entries})\n        except FileNotFoundError:\n            return web.json_response({\"entries\": []})\n        except Exception as e:\n            return web.json_response({\"error\": str(e)}, status=500)\n\n    async def handle_logs(request):\n        \"\"\"Stream log files with optional download or tailing behaviour.\"\"\"\n        try:\n            path = request.query.get('path', 'llm_requests.log')\n            download = request.query.get('download')\n            limit_param = request.query.get('limit')\n            if 
not os.path.exists(path):\n                return web.Response(text=\"\", content_type=\"text/plain\")\n            with open(path, \"r\") as f:\n                data = f.read()\n            if download or (limit_param and (limit_param.lower() == 'all' or limit_param == '-1')):\n                headers = {}\n                if download:\n                    headers[\"Content-Disposition\"] = f\"attachment; filename={os.path.basename(path)}\"\n                return web.Response(text=data, content_type=\"text/plain\", headers=headers)\n            limit = int(limit_param) if limit_param else 2000\n            return web.Response(text=data[-limit:], content_type=\"text/plain\")\n        except Exception as e:\n            return web.json_response({\"error\": str(e)}, status=500)\n\n    async def start_api(app):\n        \"\"\"Register HTTP endpoints for observing diary entries and logs.\"\"\"\n        app.router.add_get('/diary', handle_diary)\n        app.router.add_get('/logs', handle_logs)\n\n    async def main_async():\n        \"\"\"Start the aiohttp server and kick off the trading loop.\"\"\"\n        app = web.Application()\n        await start_api(app)\n        from src.config_loader import CONFIG as CFG\n        runner = web.AppRunner(app)\n        await runner.setup()\n        site = web.TCPSite(runner, CFG.get(\"api_host\"), int(CFG.get(\"api_port\")))\n        await site.start()\n        await run_loop()\n\n    def calculate_total_return(state, trade_log):\n        \"\"\"Compute percent return relative to an assumed initial balance.\"\"\"\n        initial = 10000\n        current = state['balance'] + sum(p.get('pnl', 0) for p in state.get('positions', []))\n        return ((current - initial) / initial) * 100 if initial else 0\n\n    def calculate_sharpe(returns):\n        \"\"\"Compute a naive Sharpe-like ratio from the trade log.\"\"\"\n        if not returns:\n            return 0\n        vals = [r.get('pnl', 0) if 'pnl' in r else 0 for r in 
returns]\n        if not vals:\n            return 0\n        mean = sum(vals) / len(vals)\n        var = sum((v - mean) ** 2 for v in vals) / len(vals)\n        std = math.sqrt(var) if var > 0 else 0\n        return mean / std if std > 0 else 0\n\n    async def check_exit_condition(trade, taapi, hyperliquid):\n        \"\"\"Evaluate whether a given trade's exit plan triggers a close.\"\"\"\n        plan = (trade.get(\"exit_plan\") or \"\").lower()\n        if not plan:\n            return False\n        try:\n            if \"macd\" in plan and \"below\" in plan:\n                macd = taapi.get_indicators(trade[\"asset\"], \"4h\")[\"macd\"][\"valueMACD\"]\n                threshold = float(plan.split(\"below\")[-1].strip())\n                return macd < threshold\n            if \"close above ema50\" in plan:\n                ema50 = taapi.get_historical_indicator(\"ema\", f\"{trade['asset']}/USDT\", \"4h\", results=1, params={\"period\": 50})[0][\"value\"]\n                current = await hyperliquid.get_current_price(trade[\"asset\"])\n                return current > ema50\n        except Exception:\n            return False\n        return False\n\n    asyncio.run(main_async())\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "src/trading/__init__.py",
    "content": ""
  },
  {
    "path": "src/trading/hyperliquid_api.py",
    "content": "\"\"\"High-level Hyperliquid exchange client with async retry helpers.\n\nThis module wraps the Hyperliquid `Exchange` and `Info` SDK classes to provide a\nsingle entry point for submitting trades, managing orders, and retrieving market\nstate.  It normalizes retry behaviour, adds logging, and caches metadata so that\nthe trading agent can depend on predictable, non-blocking IO.\n\"\"\"\n\nimport asyncio\nimport logging\nimport aiohttp\nfrom typing import TYPE_CHECKING\nfrom src.config_loader import CONFIG\nfrom hyperliquid.exchange import Exchange\nfrom hyperliquid.info import Info\nfrom hyperliquid.utils import constants  # For MAINNET/TESTNET\nfrom eth_account import Account as _Account\nfrom eth_account.signers.local import LocalAccount\nfrom websocket._exceptions import WebSocketConnectionClosedException\nimport socket\n\nif TYPE_CHECKING:\n    # Type stubs for linter - eth_account's type stubs are incorrect\n    class Account:\n        @staticmethod\n        def from_key(_private_key: str) -> LocalAccount: ...\n        @staticmethod\n        def from_mnemonic(_mnemonic: str) -> LocalAccount: ...\n        @staticmethod\n        def enable_unaudited_hdwallet_features() -> None: ...\nelse:\n    Account = _Account\n\nclass HyperliquidAPI:\n    \"\"\"Facade around Hyperliquid SDK clients with async convenience methods.\n\n    The class owns wallet credentials, connection configuration, and provides\n    coroutine helpers that keep retry semantics and logging consistent across\n    the trading agent.\n    \"\"\"\n\n    def __init__(self):\n        \"\"\"Initialize wallet credentials and instantiate exchange clients.\n\n        Raises:\n            ValueError: If neither a private key nor mnemonic is present in the\n                configuration.\n        \"\"\"\n        self._meta_cache = None\n        if \"hyperliquid_private_key\" in CONFIG and CONFIG[\"hyperliquid_private_key\"]:\n            self.wallet = 
Account.from_key(CONFIG[\"hyperliquid_private_key\"])\n        elif \"mnemonic\" in CONFIG and CONFIG[\"mnemonic\"]:\n            Account.enable_unaudited_hdwallet_features()\n            self.wallet = Account.from_mnemonic(CONFIG[\"mnemonic\"])\n        else:\n            raise ValueError(\"Either HYPERLIQUID_PRIVATE_KEY or MNEMONIC must be provided\")\n        # Choose base URL: allow override via env-config; fallback to network selection\n        network = (CONFIG.get(\"hyperliquid_network\") or \"mainnet\").lower()\n        base_url = CONFIG.get(\"hyperliquid_base_url\")\n        if not base_url:\n            if network == \"testnet\":\n                base_url = getattr(constants, \"TESTNET_API_URL\", constants.MAINNET_API_URL)\n            else:\n                base_url = constants.MAINNET_API_URL\n        self.base_url = base_url\n        self._build_clients()\n\n    def _build_clients(self):\n        \"\"\"Instantiate exchange and info client instances for the active base URL.\"\"\"\n        self.info = Info(self.base_url)\n        self.exchange = Exchange(self.wallet, self.base_url)\n\n    def _reset_clients(self):\n        \"\"\"Recreate SDK clients after a connection failure, logging any reset errors.\"\"\"\n        try:\n            self._build_clients()\n            logging.warning(\"Hyperliquid clients re-instantiated after connection issue\")\n        except (ValueError, AttributeError, RuntimeError) as e:\n            logging.error(\"Failed to reset Hyperliquid clients: %s\", e)\n\n    async def _retry(self, fn, *args, max_attempts: int = 3, backoff_base: float = 0.5, reset_on_fail: bool = True, to_thread: bool = True, **kwargs):\n        \"\"\"Retry helper with exponential backoff and optional thread offloading.\n\n        Args:\n            fn: Callable to invoke, either sync (supports `asyncio.to_thread`) or\n                async depending on ``to_thread``. 
The callable should raise\n                exceptions rather than returning sentinel values.\n            *args: Positional arguments forwarded to ``fn``.\n            max_attempts: Maximum number of attempts before surfacing the last\n                exception.\n            backoff_base: Initial delay in seconds, doubled after each failure.\n            reset_on_fail: Whether to rebuild Hyperliquid clients after a\n                failure.\n            to_thread: If ``True`` the callable is executed in a worker thread.\n            **kwargs: Keyword arguments forwarded to ``fn``.\n\n        Returns:\n            Result produced by ``fn``.\n\n        Raises:\n            Exception: Propagates any exception raised by ``fn`` after retries.\n        \"\"\"\n        last_err = None\n        for attempt in range(max_attempts):\n            try:\n                if to_thread:\n                    return await asyncio.to_thread(fn, *args, **kwargs)\n                return await fn(*args, **kwargs)\n            except (WebSocketConnectionClosedException, aiohttp.ClientError, ConnectionError, TimeoutError, socket.timeout) as e:\n                last_err = e\n                logging.warning(\"HL call failed (attempt %s/%s): %s\", attempt + 1, max_attempts, e)\n                if reset_on_fail:\n                    self._reset_clients()\n                await asyncio.sleep(backoff_base * (2 ** attempt))\n                continue\n            except (RuntimeError, ValueError, KeyError, AttributeError) as e:\n                # Unknown errors: don't spin forever, but allow a quick reset once\n                last_err = e\n                logging.warning(\"HL call unexpected error (attempt %s/%s): %s\", attempt + 1, max_attempts, e)\n                if reset_on_fail and attempt == 0:\n                    self._reset_clients()\n                    await asyncio.sleep(backoff_base)\n                    continue\n                break\n        raise last_err if last_err else 
RuntimeError(\"Hyperliquid retry: unknown error\")\n\n    def round_size(self, asset, amount):\n        \"\"\"Round order size to the asset precision defined by market metadata.\n\n        Args:\n            asset: Symbol of the market whose contract size we are rounding to.\n            amount: Desired contract size before rounding.\n\n        Returns:\n            The input ``amount`` rounded to the market's ``szDecimals`` precision.\n        \"\"\"\n        meta = self._meta_cache[0] if hasattr(self, '_meta_cache') and self._meta_cache else None\n        if meta:\n            universe = meta.get(\"universe\", [])\n            asset_info = next((u for u in universe if u.get(\"name\") == asset), None)\n            if asset_info:\n                decimals = asset_info.get(\"szDecimals\", 8)\n                return round(amount, decimals)\n        return round(amount, 8)\n\n    async def place_buy_order(self, asset, amount, slippage=0.01):\n        \"\"\"Submit a market buy order with exchange-side rounding and retry logic.\n\n        Args:\n            asset: Market symbol to open.\n            amount: Contract size to open before rounding.\n            slippage: Maximum acceptable slippage expressed as a decimal.\n\n        Returns:\n            Raw SDK response from :meth:`Exchange.market_open`.\n        \"\"\"\n        amount = self.round_size(asset, amount)\n        return await self._retry(lambda: self.exchange.market_open(asset, True, amount, None, slippage))\n\n    async def place_sell_order(self, asset, amount, slippage=0.01):\n        \"\"\"Submit a market sell order with exchange-side rounding and retry logic.\n\n        Args:\n            asset: Market symbol to open.\n            amount: Contract size to open before rounding.\n            slippage: Maximum acceptable slippage expressed as a decimal.\n\n        Returns:\n            Raw SDK response from :meth:`Exchange.market_open`.\n        \"\"\"\n        amount = self.round_size(asset, amount)\n      
  return await self._retry(lambda: self.exchange.market_open(asset, False, amount, None, slippage))\n\n    async def place_take_profit(self, asset, is_buy, amount, tp_price):\n        \"\"\"Create a reduce-only trigger order that executes a take-profit exit.\n\n        Args:\n            asset: Market symbol to trade.\n            is_buy: ``True`` if the original position is long; dictates close\n                direction.\n            amount: Contract size to close.\n            tp_price: Trigger price for the take-profit order.\n\n        Returns:\n            Raw SDK response from `Exchange.order`.\n        \"\"\"\n        amount = self.round_size(asset, amount)\n        order_type = {\"trigger\": {\"triggerPx\": tp_price, \"isMarket\": True, \"tpsl\": \"tp\"}}\n        return await self._retry(lambda: self.exchange.order(asset, not is_buy, amount, tp_price, order_type, True))\n\n    async def place_stop_loss(self, asset, is_buy, amount, sl_price):\n        \"\"\"Create a reduce-only trigger order that executes a stop-loss exit.\n\n        Args:\n            asset: Market symbol to trade.\n            is_buy: ``True`` if the original position is long; dictates close\n                direction.\n            amount: Contract size to close.\n            sl_price: Trigger price for the stop-loss order.\n\n        Returns:\n            Raw SDK response from `Exchange.order`.\n        \"\"\"\n        amount = self.round_size(asset, amount)\n        order_type = {\"trigger\": {\"triggerPx\": sl_price, \"isMarket\": True, \"tpsl\": \"sl\"}}\n        return await self._retry(lambda: self.exchange.order(asset, not is_buy, amount, sl_price, order_type, True))\n\n    async def cancel_order(self, asset, oid):\n        \"\"\"Cancel a single order by identifier for a given asset.\n\n        Args:\n            asset: Market symbol associated with the order.\n            oid: Hyperliquid order identifier to cancel.\n\n        Returns:\n            Raw SDK response from 
:meth:`Exchange.cancel`.\n        \"\"\"\n        return await self._retry(lambda: self.exchange.cancel(asset, oid))\n\n    async def cancel_all_orders(self, asset):\n        \"\"\"Cancel every open order for ``asset`` owned by the configured wallet.\"\"\"\n        try:\n            open_orders = await self._retry(lambda: self.info.frontend_open_orders(self.wallet.address))\n            for order in open_orders:\n                if order.get(\"coin\") == asset:\n                    oid = order.get(\"oid\")\n                    if oid:\n                        await self.cancel_order(asset, oid)\n            return {\"status\": \"ok\", \"cancelled_count\": len([o for o in open_orders if o.get(\"coin\") == asset])}\n        except (RuntimeError, ValueError, KeyError, ConnectionError) as e:\n            logging.error(\"Cancel all orders error for %s: %s\", asset, e)\n            return {\"status\": \"error\", \"message\": str(e)}\n\n    async def get_open_orders(self):\n        \"\"\"Fetch and normalize open orders associated with the wallet.\n\n        Returns:\n            List of order dictionaries augmented with ``triggerPx`` when present.\n        \"\"\"\n        try:\n            orders = await self._retry(lambda: self.info.frontend_open_orders(self.wallet.address))\n            # Normalize trigger price if present in orderType\n            for o in orders:\n                try:\n                    ot = o.get(\"orderType\")\n                    if isinstance(ot, dict) and \"trigger\" in ot:\n                        trig = ot.get(\"trigger\") or {}\n                        if \"triggerPx\" in trig:\n                            o[\"triggerPx\"] = float(trig[\"triggerPx\"])\n                except (ValueError, KeyError, TypeError):\n                    continue\n            return orders\n        except (RuntimeError, ValueError, KeyError, ConnectionError) as e:\n            logging.error(\"Get open orders error: %s\", e)\n            return []\n\n    async def 
get_recent_fills(self, limit: int = 50):\n        \"\"\"Return the most recent fills when supported by the SDK variant.\n\n        Args:\n            limit: Maximum number of fills to return.\n\n        Returns:\n            List of fill dictionaries or an empty list if unsupported.\n        \"\"\"\n        try:\n            # Some SDK versions expose user_fills; fall back gracefully if absent\n            if hasattr(self.info, 'user_fills'):\n                fills = await self._retry(lambda: self.info.user_fills(self.wallet.address))\n            elif hasattr(self.info, 'fills'):\n                fills = await self._retry(lambda: self.info.fills(self.wallet.address))\n            else:\n                return []\n            if isinstance(fills, list):\n                return fills[-limit:]\n            return []\n        except (RuntimeError, ValueError, KeyError, ConnectionError, AttributeError) as e:\n            logging.error(\"Get recent fills error: %s\", e)\n            return []\n\n    def extract_oids(self, order_result):\n        \"\"\"Extract resting or filled order identifiers from an exchange response.\n\n        Args:\n            order_result: Raw order response payload returned by the exchange.\n\n        Returns:\n            List of order identifiers present in resting or filled status entries.\n        \"\"\"\n        oids = []\n        try:\n            statuses = order_result[\"response\"][\"data\"][\"statuses\"]\n            for st in statuses:\n                if \"resting\" in st and \"oid\" in st[\"resting\"]:\n                    oids.append(st[\"resting\"][\"oid\"])\n                if \"filled\" in st and \"oid\" in st[\"filled\"]:\n                    oids.append(st[\"filled\"][\"oid\"])\n        except (KeyError, TypeError, ValueError):\n            pass\n        return oids\n\n    async def get_user_state(self):\n        \"\"\"Retrieve wallet state with enriched position PnL calculations.\n\n        Returns:\n            Dictionary 
with ``balance``, ``total_value``, and ``positions``.\n        \"\"\"\n        state = await self._retry(lambda: self.info.user_state(self.wallet.address))\n        positions = state.get(\"assetPositions\", [])\n        # accountValue is reported under marginSummary in user_state responses\n        total_value = float(state.get(\"marginSummary\", {}).get(\"accountValue\") or state.get(\"accountValue\") or 0.0)\n        enriched_positions = []\n        for pos_wrap in positions:\n            pos = pos_wrap[\"position\"]\n            entry_px = float(pos.get(\"entryPx\", 0) or 0)\n            size = float(pos.get(\"szi\", 0) or 0)\n            side = \"long\" if size > 0 else \"short\"\n            current_px = await self.get_current_price(pos[\"coin\"]) if entry_px and size else 0.0\n            pnl = (current_px - entry_px) * abs(size) if side == \"long\" else (entry_px - current_px) * abs(size)\n            pos[\"pnl\"] = pnl\n            pos[\"notional_entry\"] = abs(size) * entry_px\n            enriched_positions.append(pos)\n        balance = float(state.get(\"withdrawable\", 0.0))\n        if not total_value:\n            # Fall back to withdrawable balance plus unrealized PnL (losses included)\n            total_value = balance + sum(p.get(\"pnl\", 0.0) for p in enriched_positions)\n        return {\"balance\": balance, \"total_value\": total_value, \"positions\": enriched_positions}\n\n    async def get_current_price(self, asset):\n        \"\"\"Return the latest mid-price for ``asset``.\n\n        Args:\n            asset: Market symbol to query.\n\n        Returns:\n            Mid-price as a float, or ``0.0`` when unavailable.\n        \"\"\"\n        mids = await self._retry(self.info.all_mids)\n        return float(mids.get(asset, 0.0))\n\n    async def get_meta_and_ctxs(self):\n        \"\"\"Return cached meta/context information, fetching once per lifecycle.\n\n        Returns:\n            Cached metadata response as returned by\n            :meth:`Info.meta_and_asset_ctxs`.\n        \"\"\"\n        if not self._meta_cache:\n            response = await self._retry(self.info.meta_and_asset_ctxs)\n            self._meta_cache = response\n
        return self._meta_cache\n\n    async def get_open_interest(self, asset):\n        \"\"\"Return open interest for ``asset`` if it exists in cached metadata.\n\n        Args:\n            asset: Market symbol to query.\n\n        Returns:\n            Rounded open interest or ``None`` if unavailable.\n        \"\"\"\n        try:\n            data = await self.get_meta_and_ctxs()\n            if isinstance(data, list) and len(data) >= 2:\n                meta, asset_ctxs = data[0], data[1]\n                universe = meta.get(\"universe\", [])\n                asset_idx = next((i for i, u in enumerate(universe) if u.get(\"name\") == asset), None)\n                if asset_idx is not None and asset_idx < len(asset_ctxs):\n                    oi = asset_ctxs[asset_idx].get(\"openInterest\")\n                    return round(float(oi), 2) if oi is not None else None\n            return None\n        except (RuntimeError, ValueError, KeyError, ConnectionError, TypeError) as e:\n            logging.error(\"OI fetch error for %s: %s\", asset, e)\n            return None\n\n    async def get_funding_rate(self, asset):\n        \"\"\"Return the most recent funding rate for ``asset`` if available.\n\n        Args:\n            asset: Market symbol to query.\n\n        Returns:\n            Funding rate as a float or ``None`` when not present.\n        \"\"\"\n        try:\n            data = await self.get_meta_and_ctxs()\n            if isinstance(data, list) and len(data) >= 2:\n                meta, asset_ctxs = data[0], data[1]\n                universe = meta.get(\"universe\", [])\n                asset_idx = next((i for i, u in enumerate(universe) if u.get(\"name\") == asset), None)\n                if asset_idx is not None and asset_idx < len(asset_ctxs):\n                    funding = asset_ctxs[asset_idx].get(\"funding\")\n                    # A funding rate of 0 is valid; only missing data maps to None\n                    return round(float(funding), 8) if funding is not None else None\n            return None\n        except (RuntimeError, ValueError, KeyError, 
ConnectionError, TypeError) as e:\n            logging.error(\"Funding fetch error for %s: %s\", asset, e)\n            return None\n"
  },
  {
    "path": "src/utils/__init__.py",
    "content": "\"\"\"Utility modules for the trading agent.\"\"\"\n\n\n"
  },
  {
    "path": "src/utils/formatting.py",
    "content": "\"\"\"Utility helpers for consistently formatting numeric values.\"\"\"\n\n\ndef format_number(value, decimals=2):\n    \"\"\"Round ``value`` to ``decimals`` digits when possible, otherwise return raw.\"\"\"\n    try:\n        return round(float(value), decimals)\n    except (TypeError, ValueError):\n        return value\n\n\ndef format_size(value):\n    \"\"\"Format position sizes to 6 decimal places.\"\"\"\n    return format_number(value, 6)\n\n\n"
  },
  {
    "path": "src/utils/prompt_utils.py",
    "content": "\"\"\"Prompt serialization helpers shared across agent entry points.\"\"\"\n\nfrom __future__ import annotations\n\nfrom datetime import datetime\nfrom typing import Iterable, Any\n\n\ndef json_default(obj: Any) -> Any:\n    \"\"\"Serialize datetime and set objects for JSON dumps.\"\"\"\n    if isinstance(obj, datetime):\n        return obj.isoformat()\n    if isinstance(obj, set):\n        return list(obj)\n    return str(obj)\n\n\ndef safe_float(value: Any) -> float | None:\n    \"\"\"Cast ``value`` to float when possible, otherwise return ``None``.\"\"\"\n    try:\n        return float(value)\n    except (TypeError, ValueError):\n        return None\n\n\ndef round_or_none(value: Any, decimals: int = 2) -> float | None:\n    \"\"\"Round numeric values to ``decimals`` places, preserving ``None``.\"\"\"\n    numeric = safe_float(value)\n    if numeric is None:\n        return None\n    return round(numeric, decimals)\n\n\ndef round_series(series: Iterable[Any] | None, decimals: int = 2) -> list[float | None]:\n    \"\"\"Round each entry in ``series`` to ``decimals`` places when numeric.\"\"\"\n    if not series:\n        return []\n    rounded: list[float | None] = []\n    for val in series:\n        numeric = safe_float(val)\n        rounded.append(round(numeric, decimals) if numeric is not None else None)\n    return rounded\n"
  }
]