Repository: Gajesh2007/ai-trading-agent
Branch: master
Commit: bc14127d9f4b
Files: 18
Total size: 86.0 KB
Directory structure:
gitextract_gby4b4zr/
├── .env.example
├── .gitignore
├── Dockerfile
├── README.md
├── docs/
│ └── ARCHITECTURE.md
├── pyproject.toml
└── src/
├── __init__.py
├── agent/
│ ├── __init__.py
│ └── decision_maker.py
├── config_loader.py
├── indicators/
│ ├── __init__.py
│ └── taapi_client.py
├── main.py
├── trading/
│ ├── __init__.py
│ └── hyperliquid_api.py
└── utils/
├── __init__.py
├── formatting.py
└── prompt_utils.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .env.example
================================================
TAAPI_API_KEY=your_taapi_key_here # From https://taapi.io
HYPERLIQUID_PRIVATE_KEY=0x_your_private_key_here # Wallet private key
OPENROUTER_API_KEY=your_openrouter_key_here # From https://openrouter.ai
ASSETS="BTC ETH SOL BNB ZEC EIGEN"
INTERVAL="5m"
LLM_MODEL="x-ai/grok-4"
# Optional: OPENROUTER_REFERER=https://your-site.com, OPENROUTER_APP_TITLE=trading-agent
================================================
FILE: .gitignore
================================================
# Environments
.env
.env.*
!.env.example
# Python
__pycache__/
*.py[cod]
*.pyo
*.pyd
*.egg-info/
*.egg
# Virtual envs
.venv/
venv/
# Editors/OS
.DS_Store
.idea/
.vscode/
# Caches
.pytest_cache/
.mypy_cache/
.cache/
llm_requests.log
trading_history.log
*.log
diary.jsonl
================================================
FILE: Dockerfile
================================================
FROM python:3.12-slim
# System deps
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential curl ca-certificates git && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy project metadata and install deps
COPY pyproject.toml poetry.lock ./
# Install Poetry (no virtualenv; dependencies go into the system site-packages)
ENV POETRY_VIRTUALENVS_CREATE=false \
POETRY_NO_INTERACTION=1
RUN pip install --no-cache-dir poetry && \
poetry install --no-interaction --no-ansi --no-root
# Copy source
COPY src ./src
# API defaults
ENV APP_PORT=3000
EXPOSE 3000
# Default command: run as module to keep absolute imports working
ENTRYPOINT ["poetry", "run", "python", "-m", "src.main"]
================================================
FILE: README.md
================================================
# Nocturne: AI Trading Agent on Hyperliquid
This project implements an AI-powered trading agent that uses large language models (LLMs) to analyze real-time market data from TAAPI, make trading decisions, and execute trades on the Hyperliquid decentralized exchange. The agent runs in a continuous loop: it monitors the configured cryptocurrency assets at a configurable interval, uses technical indicators to decide on buy/sell/hold actions, and manages positions with take-profit and stop-loss orders.
## Table of Contents
- [Disclaimer](#disclaimer)
- [Architecture](#architecture)
- [Nocturne Live Agents](#nocturne-live-agents)
- [Structure](#structure)
- [Env Configuration](#env-configuration)
- [Usage](#usage)
- [Tool Calling](#tool-calling)
- [Deployment to EigenCloud](#deployment-to-eigencloud)
## Disclaimer
There is no guarantee of any returns. This code has not been audited. Please use at your own risk.
## Architecture
See the full [Architecture Documentation](docs/ARCHITECTURE.md) for subsystems, data flow, and design principles.

## Nocturne Live Agents
- GPT-5 Pro: [Portfolio Dashboard](https://hypurrscan.io/address/0xa049db4b3dfcb25c3092891010a629d987d26113) | [Live Logs](https://35.190.43.182/logs/0xC0BE8E55f469c1a04c0F6d04356828C5793d8a9D) (Seeded with $200)
- DeepSeek R1: [Portfolio Dashboard](https://hypurrscan.io/address/0xa663c80d86fd7c045d9927bb6344d7a5827d31db) | [Live Logs](https://35.190.43.182/logs/0x4da68B78ef40D12f378b8498120f2F5A910Af1aD) (Seeded with $100) -- PAUSED
- Grok 4: [Portfolio Dashboard](https://hypurrscan.io/address/0x3c71f3cf324d0133558c81d42543115ef1a2be79) | [Live Logs](https://35.190.43.182/logs/0xe6a9f97f99847215ea5813812508e9354a22A2e0) (Seeded with $100) -- PAUSED
## Structure
- `src/main.py`: Entry point, handles user input and main trading loop.
- `src/agent/decision_maker.py`: LLM logic for trade decisions (OpenRouter with tool calling for TAAPI indicators).
- `src/indicators/taapi_client.py`: Fetches indicators from TAAPI.
- `src/trading/hyperliquid_api.py`: Executes trades on Hyperliquid.
- `src/config_loader.py`: Centralized config loaded from `.env`.
## Env Configuration
Populate `.env` (use `.env.example` as reference):
- TAAPI_API_KEY
- HYPERLIQUID_PRIVATE_KEY (or LIGHTER_PRIVATE_KEY)
- OPENROUTER_API_KEY
- LLM_MODEL
- Optional: OPENROUTER_BASE_URL (`https://openrouter.ai/api/v1`), OPENROUTER_REFERER, OPENROUTER_APP_TITLE
### Obtaining API Keys
- **TAAPI_API_KEY**: Sign up at [TAAPI.io](https://taapi.io/) and generate an API key from your dashboard.
- **HYPERLIQUID_PRIVATE_KEY**: Generate an Ethereum-compatible private key for Hyperliquid. Use tools like MetaMask or `eth_account` library. For security, never share this key.
- **OPENROUTER_API_KEY**: Create an account at [OpenRouter.ai](https://openrouter.ai/), then generate an API key in your account settings.
- **LLM_MODEL**: No key needed; specify a model name like "x-ai/grok-4" (see OpenRouter models list).
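For quick local experiments, a 32-byte hex private key can be generated with the Python standard library. This is only a minimal sketch: use a proper wallet tool (e.g. MetaMask or `eth_account`) for any funds you care about, and never share or commit the printed key.

```python
# Minimal sketch: generate a random 32-byte hex private key for testing.
# For real funds, prefer a proper wallet tool; keep this value secret.
import secrets

priv = "0x" + secrets.token_hex(32)
print(f"HYPERLIQUID_PRIVATE_KEY={priv}")
```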
## Usage
Run: `poetry run python -m src.main --assets BTC ETH --interval 1h` (module form keeps the `src.*` absolute imports working, matching the Dockerfile entrypoint).
### Local API Endpoints
When the agent runs, it also serves a minimal API:
- `GET /diary?limit=200` — returns recent JSONL diary entries as JSON.
- `GET /logs?path=llm_requests.log&limit=2000` — tails the specified log file.
Configure bind host/port via env:
- `API_HOST` (default `0.0.0.0`)
- `APP_PORT` or `API_PORT` (default `3000`; `APP_PORT` takes precedence)
Docker:
```bash
docker build --platform linux/amd64 -t trading-agent .
docker run --rm -p 3000:3000 --env-file .env trading-agent
# Now: curl http://localhost:3000/diary
```
## Tool Calling
The agent can dynamically fetch any TAAPI indicator (e.g., EMA, RSI) via tool calls. See [TAAPI Indicators](https://taapi.io/indicators/) and [EMA Example](https://taapi.io/indicators/exponential-moving-average/) for details.
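As a sketch of what one tool invocation resolves to, the agent ultimately issues a GET against the TAAPI REST API. The query below (with a placeholder key) mirrors the parameters the `fetch_taapi_indicator` tool accepts; `period` is optional and indicator-specific.

```python
# Sketch of the REST request behind one fetch_taapi_indicator tool call.
# "YOUR_TAAPI_API_KEY" is a placeholder; substitute your real key to execute.
from urllib.parse import urlencode

params = {
    "secret": "YOUR_TAAPI_API_KEY",
    "exchange": "binance",
    "symbol": "BTC/USDT",
    "interval": "5m",
    "period": 50,  # optional, indicator-specific
}
url = "https://api.taapi.io/ema?" + urlencode(params)
print(url)  # GET this URL to receive a JSON body such as {"value": ...}
```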
## Deployment to EigenCloud
EigenCloud (via EigenX CLI) allows deploying this trading agent in a Trusted Execution Environment (TEE) with secure key management.
### Prerequisites
- Allowlisted Ethereum account (Sepolia for testnet). Request onboarding at [EigenCloud Onboarding](https://onboarding.eigencloud.xyz).
- Docker installed.
- Sepolia ETH for deployments.
### Installation
#### macOS/Linux
```bash
curl -fsSL https://eigenx-scripts.s3.us-east-1.amazonaws.com/install-eigenx.sh | bash
```
#### Windows
```bash
curl -fsSL https://eigenx-scripts.s3.us-east-1.amazonaws.com/install-eigenx.ps1 | powershell -
```
### Initial Setup
```bash
docker login
eigenx auth login  # Or: eigenx auth generate --store (if you don't have an Ethereum account; keep it separate from your trading account)
```
### Deploy the Agent
From the project directory:
```bash
cp .env.example .env
# Edit .env: set ASSETS, INTERVAL, API keys
eigenx app deploy
```
### Monitoring
```bash
eigenx app info --watch
eigenx app logs --watch
```
### Updates
Edit code or .env, then:
```bash
eigenx app upgrade <app-name>
```
For full CLI reference, see the [EigenX Documentation](https://github.com/Layr-Labs/eigenx-cli).
================================================
FILE: docs/ARCHITECTURE.md
================================================
## Trading Agent Architecture (High-Level)
This document outlines the end-to-end flow of the trading agent at a conceptual level. It focuses on subsystems, data flows, and guardrails rather than specific functions.
### Subsystems
- Config/Env: Centralized runtime settings from `.env` (keys, model, assets, interval).
- Agent Runtime Loop: Schedules periodic decisions per `--interval` and coordinates all subsystems.
- Context Builder: Prepares the prompt context with authoritative exchange state, indicators, recent fills, active orders, local diary, and sampled perp mid prices.
- Decision Engine:
- Primary LLM: Produces structured trade decisions for all assets.
- Sanitizer LLM: Fast, schema-enforcing post-processor that coerces malformed outputs into the exact JSON contract (reasoning plus a trade_decisions array).
- Risk/Collateral Gate: Validates proposed allocations vs available capital/leverage constraints (and can scale/hold when insufficient).
- Execution Layer: Places market/trigger orders and extracts order identifiers.
- Reconciliation: Resolves local intent vs exchange truth (positions/open orders/fills), purges stale local state, and logs outcomes.
- Observability: Minimal HTTP API to fetch diary and logs for debugging/telemetry.
### Data Principles
- Authoritative Source: Exchange state (positions, open orders, fills, mids) always supersedes local intent.
- Perp-Only Pricing: Price context comes from Hyperliquid mids; no spot/perp basis mixing.
- Compact Signals: Indicators (5m/4h EMA/MACD/RSI) and short sampled price histories keep context lean and informative.
- Time Semantics: Timestamps are UTC ISO; MinutesOpen computed from stored open times.
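The time semantics above can be sketched as follows; the field names and values are illustrative assumptions, not the project's actual structures.

```python
# Sketch: UTC ISO timestamps, with MinutesOpen derived from a stored open time.
from datetime import datetime, timezone

opened_at = "2025-10-19T15:40:00+00:00"  # stored UTC ISO open time (example)
now = datetime(2025, 10, 19, 16, 10, tzinfo=timezone.utc)  # fixed for the example
minutes_open = int((now - datetime.fromisoformat(opened_at)).total_seconds() // 60)
print(minutes_open)  # → 30
```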
### Robustness
- Structured Outputs: Use JSON Schema with strict mode; fallback to sanitizer.
- Retry Strategy: Drops tool calling or structured outputs and retries when a provider rejects them.
- Reconciliation: Regularly remove stale active trades when no position and no orders exist; log reconcile events.
- Logging: Requests/responses and diary entries recorded locally for traceability.
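The reconciliation rule above can be sketched as a small purge pass; the data shapes here are assumptions for illustration, not the agent's actual structures.

```python
# Sketch: drop locally tracked trades with neither a position nor open orders
# on the exchange, treating exchange state as the authoritative source.
def reconcile(active_trades: dict, positions: set, open_orders: set) -> dict:
    kept = {}
    for asset, trade in active_trades.items():
        if asset in positions or asset in open_orders:
            kept[asset] = trade
        else:
            print(f"reconcile: purged stale trade for {asset}")
    return kept

active = {"BTC": {"side": "buy"}, "ETH": {"side": "sell"}}
remaining = reconcile(active, positions={"BTC"}, open_orders=set())
print(sorted(remaining))  # → ['BTC']
```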
================================================
FILE: pyproject.toml
================================================
[project]
name = "trading-agent"
version = "0.1.0"
description = ""
authors = [
{name = "Gajesh Naik",email = "26431906+Gajesh2007@users.noreply.github.com"}
]
readme = "README.md"
requires-python = ">=3.12,<4"
dependencies = [
"hyperliquid-python-sdk (>=0.20.0,<0.21.0)",
"python-dotenv (>=1.1.1,<2.0.0)",
"web3 (>=7.14.0,<8.0.0)",
"aiohttp (>=3.13.1,<4.0.0)",
"openai (>=2.5.0,<3.0.0)",
"requests (>=2.32.5,<3.0.0)",
"rich (>=14.2.0,<15.0.0)"
]
[build-system]
requires = ["poetry-core>=2.0.0,<3.0.0"]
build-backend = "poetry.core.masonry.api"
================================================
FILE: src/__init__.py
================================================
================================================
FILE: src/agent/__init__.py
================================================
================================================
FILE: src/agent/decision_maker.py
================================================
"""Decision-making agent that orchestrates LLM prompts and indicator lookups."""
import requests
from src.config_loader import CONFIG
from src.indicators.taapi_client import TAAPIClient
import json
import logging
from datetime import datetime
class TradingAgent:
"""High-level trading agent that delegates reasoning to an LLM service."""
def __init__(self):
"""Initialize LLM configuration, metadata headers, and indicator helper."""
self.model = CONFIG["llm_model"]
self.api_key = CONFIG["openrouter_api_key"]
base = CONFIG["openrouter_base_url"]
self.base_url = f"{base}/chat/completions"
self.referer = CONFIG.get("openrouter_referer")
self.app_title = CONFIG.get("openrouter_app_title")
self.taapi = TAAPIClient()
# Fast/cheap sanitizer model to normalize outputs on parse failures
self.sanitize_model = CONFIG.get("sanitize_model") or "openai/gpt-5"
def decide_trade(self, assets, context):
"""Decide for multiple assets in one call.
Args:
assets: Iterable of asset tickers to score.
context: Structured market/account state forwarded to the LLM.
Returns:
List of trade decision payloads, one per asset.
"""
return self._decide(context, assets=assets)
def _decide(self, context, assets):
"""Dispatch decision request to the LLM and enforce output contract."""
system_prompt = (
"You are a rigorous QUANTITATIVE TRADER and interdisciplinary MATHEMATICIAN-ENGINEER optimizing risk-adjusted returns for perpetual futures under real execution, margin, and funding constraints.\n"
"You will receive market + account context for SEVERAL assets, including:\n"
f"- assets = {json.dumps(assets)}\n"
"- per-asset intraday (5m) and higher-timeframe (4h) metrics\n"
"- Active Trades with Exit Plans\n"
"- Recent Trading History\n\n"
"Always use the 'current time' provided in the user message to evaluate any time-based conditions, such as cooldown expirations or timed exit plans.\n\n"
"Your goal: make decisive, first-principles decisions per asset that minimize churn while capturing edge.\n\n"
"Aggressively pursue setups where calculated risk is outweighed by expected edge; size positions so downside is controlled while upside remains meaningful.\n\n"
"Core policy (low-churn, position-aware)\n"
"1) Respect prior plans: If an active trade has an exit_plan with explicit invalidation (e.g., “close if 4h close above EMA50”), DO NOT close or flip early unless that invalidation (or a stronger one) has occurred.\n"
"2) Hysteresis: Require stronger evidence to CHANGE a decision than to keep it. Only flip direction if BOTH:\n"
" a) Higher-timeframe structure supports the new direction (e.g., 4h EMA20 vs EMA50 and/or MACD regime), AND\n"
" b) Intraday structure confirms with a decisive break beyond ~0.5×ATR (recent) and momentum alignment (MACD or RSI slope).\n"
" Otherwise, prefer HOLD or adjust TP/SL.\n"
"3) Cooldown: After opening, adding, reducing, or flipping, impose a self-cooldown of at least 3 bars of the decision timeframe (e.g., 3×5m = 15m) before another direction change, unless a hard invalidation occurs. Encode this in exit_plan (e.g., “cooldown_bars:3 until 2025-10-19T15:55Z”). You must honor your own cooldowns on future cycles.\n"
"4) Funding is a tilt, not a trigger: Do NOT open/close/flip solely due to funding unless expected funding over your intended holding horizon meaningfully exceeds expected edge (e.g., > ~0.25×ATR). Consider that funding accrues discretely and slowly relative to 5m bars.\n"
"5) Overbought/oversold ≠ reversal by itself: Treat RSI extremes as risk-of-pullback. You need structure + momentum confirmation to bet against trend. Prefer tightening stops or taking partial profits over instant flips.\n"
"6) Prefer adjustments over exits: If the thesis weakens but is not invalidated, first consider: tighten stop (e.g., to a recent swing or ATR multiple), trail TP, or reduce size. Flip only on hard invalidation + fresh confluence.\n\n"
"Decision discipline (per asset)\n"
"- Choose one: buy / sell / hold.\n"
"- Proactively harvest profits when price action presents a clear, high-quality opportunity that aligns with your thesis.\n"
"- You control allocation_usd.\n"
"- TP/SL sanity:\n"
" • BUY: tp_price > current_price, sl_price < current_price\n"
" • SELL: tp_price < current_price, sl_price > current_price\n"
" If sensible TP/SL cannot be set, use null and explain the logic.\n"
"- exit_plan must include at least ONE explicit invalidation trigger and may include cooldown guidance you will follow later.\n\n"
"Leverage policy (perpetual futures)\n"
"- YOU CAN USE LEVERAGE; USE AT LEAST 3X FOR BETTER RETURNS, BUT KEEP TOTAL LEVERAGE WITHIN 10X\n"
"- In high volatility (elevated ATR) or during funding spikes, reduce or avoid leverage.\n"
"- Treat allocation_usd as notional exposure; keep it consistent with safe leverage and available margin.\n\n"
"Tool usage\n"
"- Aggressively leverage fetch_taapi_indicator whenever an additional datapoint could sharpen your thesis; keep parameters minimal (indicator, symbol like \"BTC/USDT\", interval \"5m\"/\"4h\", optional period).\n"
"- Incorporate tool findings into your reasoning, but NEVER paste raw tool responses into the final JSON—summarize the insight instead.\n"
"- Use tools to upgrade your analysis; lack of confidence is a cue to query them before deciding.\n\n"
"Reasoning recipe (first principles)\n"
"- Structure (trend, EMAs slope/cross, HH/HL vs LH/LL), Momentum (MACD regime, RSI slope), Liquidity/volatility (ATR, volume), Positioning tilt (funding, OI).\n"
"- Favor alignment across 4h and 5m. Counter-trend scalps require stronger intraday confirmation and tighter risk.\n\n"
"Output contract\n"
"- Output a STRICT JSON object with exactly two properties in this order:\n"
" • reasoning: long-form string with detailed, step-by-step analysis; you may note where existing information already gives clarity, or acknowledge that you need more information to decide (be verbose).\n"
" • trade_decisions: array ordered to match the provided assets list.\n"
"- Each item inside trade_decisions must contain the keys {asset, action, allocation_usd, tp_price, sl_price, exit_plan, rationale}.\n"
"- Do not emit Markdown or any extra properties.\n"
)
user_prompt = context
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_prompt},
]
tools = [{
"type": "function",
"function": {
"name": "fetch_taapi_indicator",
"description": ("Fetch any TAAPI indicator. Available: ema, sma, rsi, macd, bbands, stochastic, stochrsi, "
"adx, atr, cci, dmi, ichimoku, supertrend, vwap, obv, mfi, willr, roc, mom, sar (parabolic), "
"fibonacci, pivotpoints, keltner, donchian, awesome, gator, alligator, and 200+ more. "
"See https://taapi.io/indicators/ for full list and parameters."),
"parameters": {
"type": "object",
"properties": {
"indicator": {"type": "string"},
"symbol": {"type": "string"},
"interval": {"type": "string"},
"period": {"type": "integer"},
"backtrack": {"type": "integer"},
"other_params": {"type": "object", "additionalProperties": {"type": ["string", "number", "boolean"]}},
},
"required": ["indicator", "symbol", "interval"],
"additionalProperties": False,
},
},
}]
headers = {
"Authorization": f"Bearer {self.api_key}",
"Content-Type": "application/json",
}
if self.referer:
headers["HTTP-Referer"] = self.referer
if self.app_title:
headers["X-Title"] = self.app_title
def _post(payload):
"""Send a POST request to OpenRouter, logging request and response metadata."""
# Log the full request payload for debugging
logging.info("Sending request to OpenRouter (model: %s)", payload.get('model'))
with open("llm_requests.log", "a", encoding="utf-8") as f:
f.write(f"\n\n=== {datetime.now()} ===\n")
f.write(f"Model: {payload.get('model')}\n")
f.write(f"Headers: {json.dumps({k: v for k, v in headers.items() if k != 'Authorization'})}\n")
f.write(f"Payload:\n{json.dumps(payload, indent=2)}\n")
resp = requests.post(self.base_url, headers=headers, json=payload, timeout=60)
logging.info("Received response from OpenRouter (status: %s)", resp.status_code)
if resp.status_code != 200:
logging.error("OpenRouter error: %s - %s", resp.status_code, resp.text)
with open("llm_requests.log", "a", encoding="utf-8") as f:
f.write(f"ERROR Response: {resp.status_code} - {resp.text}\n")
resp.raise_for_status()
return resp.json()
def _sanitize_output(raw_content: str, assets_list):
"""Coerce arbitrary LLM output into the required reasoning + decisions schema."""
try:
schema = {
"type": "object",
"properties": {
"reasoning": {"type": "string"},
"trade_decisions": {
"type": "array",
"items": {
"type": "object",
"properties": {
"asset": {"type": "string", "enum": assets_list},
"action": {"type": "string", "enum": ["buy", "sell", "hold"]},
"allocation_usd": {"type": "number"},
"tp_price": {"type": ["number", "null"]},
"sl_price": {"type": ["number", "null"]},
"exit_plan": {"type": "string"},
"rationale": {"type": "string"},
},
"required": ["asset", "action", "allocation_usd", "tp_price", "sl_price", "exit_plan", "rationale"],
"additionalProperties": False,
},
"minItems": 1,
}
},
"required": ["reasoning", "trade_decisions"],
"additionalProperties": False,
}
payload = {
"model": self.sanitize_model,
"messages": [
{"role": "system", "content": (
"You are a strict JSON normalizer. Return ONLY a JSON object matching the provided JSON Schema. "
"If input is wrapped or has prose/markdown, fix it. Do not add fields."
)},
{"role": "user", "content": raw_content},
],
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "trade_decisions",
"strict": True,
"schema": schema,
},
},
"temperature": 0,
}
resp = _post(payload)
msg = resp.get("choices", [{}])[0].get("message", {})
parsed = msg.get("parsed")
if isinstance(parsed, dict):
if "trade_decisions" in parsed:
return parsed
# fallback: try content
content = msg.get("content") or "[]"
try:
loaded = json.loads(content)
if isinstance(loaded, dict) and "trade_decisions" in loaded:
return loaded
except (json.JSONDecodeError, KeyError, ValueError, TypeError):
pass
return {"reasoning": "", "trade_decisions": []}
except (requests.RequestException, json.JSONDecodeError, KeyError, ValueError, TypeError) as se:
logging.error("Sanitize failed: %s", se)
return {"reasoning": "", "trade_decisions": []}
allow_tools = True
allow_structured = True
def _build_schema():
"""Assemble the JSON schema used for structured LLM responses."""
base_properties = {
"asset": {"type": "string", "enum": assets},
"action": {"type": "string", "enum": ["buy", "sell", "hold"]},
"allocation_usd": {"type": "number", "minimum": 0},
"tp_price": {"type": ["number", "null"]},
"sl_price": {"type": ["number", "null"]},
"exit_plan": {"type": "string"},
"rationale": {"type": "string"},
}
required_keys = ["asset", "action", "allocation_usd", "tp_price", "sl_price", "exit_plan", "rationale"]
return {
"type": "object",
"properties": {
"reasoning": {"type": "string"},
"trade_decisions": {
"type": "array",
"items": {
"type": "object",
"properties": base_properties,
"required": required_keys,
"additionalProperties": False,
},
"minItems": 1,
}
},
"required": ["reasoning", "trade_decisions"],
"additionalProperties": False,
}
for _ in range(6):
data = {"model": self.model, "messages": messages}
if allow_structured:
data["response_format"] = {
"type": "json_schema",
"json_schema": {
"name": "trade_decisions",
"strict": True,
"schema": _build_schema(),
},
}
if allow_tools:
data["tools"] = tools
data["tool_choice"] = "auto"
if CONFIG.get("reasoning_enabled"):
data["reasoning"] = {
"enabled": True,
"effort": CONFIG.get("reasoning_effort") or "high",
# "max_tokens": CONFIG.get("reasoning_max_tokens") or 100000,
"exclude": False,
}
if CONFIG.get("provider_config") or CONFIG.get("provider_quantizations"):
provider_payload = dict(CONFIG.get("provider_config") or {})
quantizations = CONFIG.get("provider_quantizations")
if quantizations:
provider_payload["quantizations"] = quantizations
data["provider"] = provider_payload
try:
resp_json = _post(data)
except requests.HTTPError as e:
try:
err = e.response.json()
except (json.JSONDecodeError, ValueError, AttributeError):
err = {}
raw = (err.get("error", {}).get("metadata", {}) or {}).get("raw", "")
provider = (err.get("error", {}).get("metadata", {}) or {}).get("provider_name", "")
if e.response.status_code == 422 and provider.lower().startswith("xai") and "deserialize" in raw.lower():
logging.warning("xAI rejected tool schema; retrying without tools.")
if allow_tools:
allow_tools = False
continue
# Provider may not support structured outputs / response_format
err_text = json.dumps(err)
if allow_structured and ("response_format" in err_text or "structured" in err_text or e.response.status_code in (400, 422)):
logging.warning("Provider rejected structured outputs; retrying without response_format.")
allow_structured = False
continue
raise
choice = resp_json["choices"][0]
message = choice["message"]
messages.append(message)
tool_calls = message.get("tool_calls") or []
if allow_tools and tool_calls:
for tc in tool_calls:
if tc.get("type") == "function" and tc.get("function", {}).get("name") == "fetch_taapi_indicator":
args = json.loads(tc["function"].get("arguments") or "{}")
try:
params = {
"secret": self.taapi.api_key,
"exchange": "binance",
"symbol": args["symbol"],
"interval": args["interval"],
}
if args.get("period") is not None:
params["period"] = args["period"]
if args.get("backtrack") is not None:
params["backtrack"] = args["backtrack"]
if isinstance(args.get("other_params"), dict):
params.update(args["other_params"])
ind_resp = requests.get(f"{self.taapi.base_url}{args['indicator']}", params=params, timeout=30).json()
messages.append({
"role": "tool",
"tool_call_id": tc.get("id"),
"name": "fetch_taapi_indicator",
"content": json.dumps(ind_resp),
})
except (requests.RequestException, json.JSONDecodeError, KeyError, ValueError) as ex:
messages.append({
"role": "tool",
"tool_call_id": tc.get("id"),
"name": "fetch_taapi_indicator",
"content": f"Error: {str(ex)}",
})
continue
try:
# Prefer parsed field from structured outputs if present
content = message.get("content") or "{}"
if isinstance(message.get("parsed"), dict):
parsed = message.get("parsed")
else:
parsed = json.loads(content)
if not isinstance(parsed, dict):
logging.error("Expected dict payload, got: %s; attempting sanitize", type(parsed))
sanitized = _sanitize_output(content if 'content' in locals() else json.dumps(parsed), assets)
if sanitized.get("trade_decisions"):
return sanitized
return {"reasoning": "", "trade_decisions": []}
reasoning_text = parsed.get("reasoning", "") or ""
decisions = parsed.get("trade_decisions")
if isinstance(decisions, list):
normalized = []
for item in decisions:
if isinstance(item, dict):
item.setdefault("allocation_usd", 0.0)
item.setdefault("tp_price", None)
item.setdefault("sl_price", None)
item.setdefault("exit_plan", "")
item.setdefault("rationale", "")
normalized.append(item)
elif isinstance(item, list) and len(item) >= 7:
normalized.append({
"asset": item[0],
"action": item[1],
"allocation_usd": float(item[2]) if item[2] else 0.0,
"tp_price": float(item[3]) if item[3] and item[3] != "null" else None,
"sl_price": float(item[4]) if item[4] and item[4] != "null" else None,
"exit_plan": item[5] if len(item) > 5 else "",
"rationale": item[6] if len(item) > 6 else ""
})
return {"reasoning": reasoning_text, "trade_decisions": normalized}
logging.error("trade_decisions missing or invalid; attempting sanitize")
sanitized = _sanitize_output(content if 'content' in locals() else json.dumps(parsed), assets)
if sanitized.get("trade_decisions"):
return sanitized
return {"reasoning": reasoning_text, "trade_decisions": []}
except (json.JSONDecodeError, KeyError, ValueError, TypeError) as e:
logging.error("JSON parse error: %s, content: %s", e, content[:200])
# Try sanitizer as last resort
sanitized = _sanitize_output(content, assets)
if sanitized.get("trade_decisions"):
return sanitized
return {
"reasoning": "Parse error",
"trade_decisions": [{
"asset": a,
"action": "hold",
"allocation_usd": 0.0,
"tp_price": None,
"sl_price": None,
"exit_plan": "",
"rationale": "Parse error"
} for a in assets]
}
return {
"reasoning": "tool loop cap",
"trade_decisions": [{
"asset": a,
"action": "hold",
"allocation_usd": 0.0,
"tp_price": None,
"sl_price": None,
"exit_plan": "",
"rationale": "tool loop cap"
} for a in assets]
}
================================================
FILE: src/config_loader.py
================================================
"""Centralized environment variable loading for the trading agent configuration."""
import json
import os
from dotenv import load_dotenv
load_dotenv()
def _get_env(name: str, default: str | None = None, required: bool = False) -> str | None:
"""Fetch an environment variable with optional default and required validation."""
value = os.getenv(name, default)
if required and (value is None or value == ""):
raise RuntimeError(f"Missing required environment variable: {name}")
return value
def _get_bool(name: str, default: bool = False) -> bool:
raw = os.getenv(name)
if raw is None:
return default
return raw.strip().lower() in {"1", "true", "yes", "on"}
def _get_int(name: str, default: int | None = None) -> int | None:
raw = os.getenv(name)
if raw is None or raw.strip() == "":
return default
try:
return int(raw)
except ValueError as exc:
raise RuntimeError(f"Invalid integer for {name}: {raw}") from exc
def _get_json(name: str, default: dict | None = None) -> dict | None:
raw = os.getenv(name)
if raw is None or raw.strip() == "":
return default
try:
parsed = json.loads(raw)
if not isinstance(parsed, dict):
raise RuntimeError(f"Environment variable {name} must be a JSON object")
return parsed
except json.JSONDecodeError as exc:
raise RuntimeError(f"Invalid JSON for {name}: {raw}") from exc
def _get_list(name: str, default: list[str] | None = None) -> list[str] | None:
raw = os.getenv(name)
if raw is None or raw.strip() == "":
return default
raw = raw.strip()
# Support JSON-style lists
if raw.startswith("[") and raw.endswith("]"):
try:
parsed = json.loads(raw)
if not isinstance(parsed, list):
raise RuntimeError(f"Environment variable {name} must be a list if using JSON syntax")
return [str(item).strip().strip('"\'') for item in parsed if str(item).strip()]
except json.JSONDecodeError as exc:
raise RuntimeError(f"Invalid JSON list for {name}: {raw}") from exc
# Fallback: comma separated string
values = []
for item in raw.split(","):
cleaned = item.strip().strip('"\'')
if cleaned:
values.append(cleaned)
return values or default
CONFIG = {
"taapi_api_key": _get_env("TAAPI_API_KEY", required=True),
"hyperliquid_private_key": _get_env("HYPERLIQUID_PRIVATE_KEY") or _get_env("LIGHTER_PRIVATE_KEY"),
"mnemonic": _get_env("MNEMONIC"),
# Hyperliquid network/base URL overrides
"hyperliquid_base_url": _get_env("HYPERLIQUID_BASE_URL"),
"hyperliquid_network": _get_env("HYPERLIQUID_NETWORK", "mainnet"),
# LLM via OpenRouter
"openrouter_api_key": _get_env("OPENROUTER_API_KEY", required=True),
"openrouter_base_url": _get_env("OPENROUTER_BASE_URL", "https://openrouter.ai/api/v1"),
"openrouter_referer": _get_env("OPENROUTER_REFERER"),
"openrouter_app_title": _get_env("OPENROUTER_APP_TITLE", "trading-agent"),
"llm_model": _get_env("LLM_MODEL", "x-ai/grok-4"),
# Reasoning tokens
"reasoning_enabled": _get_bool("REASONING_ENABLED", False),
"reasoning_effort": _get_env("REASONING_EFFORT", "high"),
# Provider routing
"provider_config": _get_json("PROVIDER_CONFIG"),
"provider_quantizations": _get_list("PROVIDER_QUANTIZATIONS"),
# Runtime controls via env
"assets": _get_env("ASSETS"), # e.g., "BTC ETH SOL" or "BTC,ETH,SOL"
"interval": _get_env("INTERVAL"), # e.g., "5m", "1h"
# API server
"api_host": _get_env("API_HOST", "0.0.0.0"),
"api_port": _get_env("APP_PORT") or _get_env("API_PORT") or "3000",
}
================================================
FILE: src/indicators/__init__.py
================================================
================================================
FILE: src/indicators/taapi_client.py
================================================
"""Client helper for interacting with the TAAPI technical analysis API."""
import requests
import os
import time
import logging
from src.config_loader import CONFIG
class TAAPIClient:
"""Fetches TA indicators with retry/backoff semantics for resilience."""
def __init__(self):
"""Initialize TAAPI credentials and base URL."""
self.api_key = CONFIG["taapi_api_key"]
self.base_url = "https://api.taapi.io/"
def _get_with_retry(self, url, params, retries=3, backoff=0.5):
"""Perform a GET request with exponential backoff retry logic."""
for attempt in range(retries):
try:
resp = requests.get(url, params=params, timeout=10)
resp.raise_for_status()
return resp.json()
except requests.HTTPError as e:
if e.response.status_code >= 500 and attempt < retries - 1:
wait = backoff * (2 ** attempt)
logging.warning(f"TAAPI {e.response.status_code}, retrying in {wait}s")
time.sleep(wait)
else:
raise
except requests.Timeout as e:
if attempt < retries - 1:
wait = backoff * (2 ** attempt)
logging.warning(f"TAAPI timeout, retrying in {wait}s")
time.sleep(wait)
else:
raise
raise RuntimeError("Max retries exceeded")
def get_indicators(self, asset, interval):
"""Return a curated bundle of intraday indicators for ``asset``."""
params = {
"secret": self.api_key,
"exchange": "binance",
"symbol": f"{asset}/USDT",
"interval": interval
}
rsi_response = self._get_with_retry(f"{self.base_url}rsi", params)
macd_response = self._get_with_retry(f"{self.base_url}macd", params)
sma_response = self._get_with_retry(f"{self.base_url}sma", params)
ema_response = self._get_with_retry(f"{self.base_url}ema", params)
bbands_response = self._get_with_retry(f"{self.base_url}bbands", params)
return {
"rsi": rsi_response.get("value"),
"macd": macd_response,
"sma": sma_response.get("value"),
"ema": ema_response.get("value"),
"bbands": bbands_response
}
def get_historical_indicator(self, indicator, symbol, interval, results=10, params=None):
"""Fetch historical indicator data with optional overrides."""
base_params = {
"secret": self.api_key,
"exchange": "binance",
"symbol": symbol,
"interval": interval,
"results": results
}
if params:
base_params.update(params)
response = self._get_with_retry(f"{self.base_url}{indicator}", base_params)
return response
def fetch_series(self, indicator: str, symbol: str, interval: str, results: int = 10, params: dict | None = None, value_key: str = "value") -> list:
"""Fetch and normalize a historical indicator series.
Args:
indicator: TAAPI indicator slug (e.g. ``"ema"``).
symbol: Market pair identifier (e.g. ``"BTC/USDT"``).
interval: Candle interval requested from TAAPI.
results: Number of datapoints to request.
params: Additional TAAPI query parameters.
value_key: Key to extract from the TAAPI response payload.
Returns:
List of floats rounded to 4 decimals, or an empty list on error.
"""
try:
data = self.get_historical_indicator(indicator, symbol, interval, results=results, params=params)
if isinstance(data, dict):
# Simple indicators: {"value": [1,2,3]}
if value_key in data and isinstance(data[value_key], list):
return [round(v, 4) if isinstance(v, (int, float)) else v for v in data[value_key]]
# Error response
if "error" in data:
logging.error(f"TAAPI error for {indicator} {symbol} {interval}: {data.get('error')}")
return []
return []
except Exception as e:
logging.error(f"TAAPI fetch_series exception for {indicator}: {e}")
return []
def fetch_value(self, indicator: str, symbol: str, interval: str, params: dict | None = None, key: str = "value"):
"""Fetch a single indicator value for the latest candle."""
try:
base_params = {
"secret": self.api_key,
"exchange": "binance",
"symbol": symbol,
"interval": interval
}
if params:
base_params.update(params)
data = self._get_with_retry(f"{self.base_url}{indicator}", base_params)
if isinstance(data, dict):
val = data.get(key)
return round(val, 4) if isinstance(val, (int, float)) else val
return None
except Exception:
return None
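The retry helper above sleeps for `backoff * 2**attempt` between failed attempts. A minimal standalone sketch (a hypothetical helper, not part of the client) makes the resulting schedule explicit:

```python
# Standalone sketch: the wait schedule used by TAAPIClient._get_with_retry is
# backoff * 2**attempt, and a run of `retries` attempts sleeps at most
# retries - 1 times (no sleep after the final failure).
def backoff_schedule(retries: int = 3, backoff: float = 0.5) -> list[float]:
    """Return the sleep durations a full run of failed retries would use."""
    return [backoff * (2 ** attempt) for attempt in range(retries - 1)]

assert backoff_schedule() == [0.5, 1.0]
assert backoff_schedule(4, 1.0) == [1.0, 2.0, 4.0]
```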
================================================
FILE: src/main.py
================================================
"""Entry-point script that wires together the trading agent, data feeds, and API."""
import sys
import argparse
import pathlib
sys.path.append(str(pathlib.Path(__file__).parent.parent))
from src.agent.decision_maker import TradingAgent
from src.indicators.taapi_client import TAAPIClient
from src.trading.hyperliquid_api import HyperliquidAPI
import asyncio
import logging
from collections import deque, OrderedDict
from datetime import datetime, timezone
import math # For Sharpe
from dotenv import load_dotenv
import os
import json
from aiohttp import web
from src.utils.formatting import format_number as fmt, format_size as fmt_sz
from src.utils.prompt_utils import json_default, round_or_none, round_series
load_dotenv()
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")
def clear_terminal():
"""Clear the terminal screen on Windows or POSIX systems."""
os.system('cls' if os.name == 'nt' else 'clear')
def get_interval_seconds(interval_str):
"""Convert interval strings like '5m' or '1h' to seconds."""
if interval_str.endswith('m'):
return int(interval_str[:-1]) * 60
elif interval_str.endswith('h'):
return int(interval_str[:-1]) * 3600
elif interval_str.endswith('d'):
return int(interval_str[:-1]) * 86400
else:
raise ValueError(f"Unsupported interval: {interval_str}")
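The conversion above maps a unit suffix to a multiplier. A small self-contained sketch (a standalone copy of the same logic, table-driven for brevity) with the expected values:

```python
# Standalone copy of get_interval_seconds' logic, written table-driven.
def interval_seconds(interval: str) -> int:
    """Convert '5m' / '1h' / '1d' style strings to seconds."""
    units = {"m": 60, "h": 3600, "d": 86400}
    if interval and interval[-1] in units:
        return int(interval[:-1]) * units[interval[-1]]
    raise ValueError(f"Unsupported interval: {interval}")

assert interval_seconds("5m") == 300
assert interval_seconds("1h") == 3600
assert interval_seconds("1d") == 86400
```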
def main():
"""Parse CLI args, bootstrap dependencies, and launch the trading loop."""
clear_terminal()
parser = argparse.ArgumentParser(description="LLM-based Trading Agent on Hyperliquid")
parser.add_argument("--assets", type=str, nargs="+", required=False, help="Assets to trade, e.g., BTC ETH")
parser.add_argument("--interval", type=str, required=False, help="Interval period, e.g., 1h")
args = parser.parse_args()
# Allow assets/interval via .env (CONFIG) if CLI not provided
from src.config_loader import CONFIG
assets_env = CONFIG.get("assets")
interval_env = CONFIG.get("interval")
if (not args.assets or len(args.assets) == 0) and assets_env:
# Support space or comma separated
if "," in assets_env:
args.assets = [a.strip() for a in assets_env.split(",") if a.strip()]
else:
args.assets = [a.strip() for a in assets_env.split(" ") if a.strip()]
if not args.interval and interval_env:
args.interval = interval_env
if not args.assets or not args.interval:
parser.error("Please provide --assets and --interval, or set ASSETS and INTERVAL in .env")
taapi = TAAPIClient()
hyperliquid = HyperliquidAPI()
agent = TradingAgent()
start_time = datetime.now(timezone.utc)
invocation_count = 0
trade_log = [] # For Sharpe: list of returns
active_trades = [] # {'asset','is_long','amount','entry_price','tp_oid','sl_oid','exit_plan'}
recent_events = deque(maxlen=200)
diary_path = "diary.jsonl"
initial_account_value = None
# Perp mid-price history sampled each loop (authoritative, avoids spot/perp basis mismatch)
price_history = {}
print(f"Starting trading agent for assets: {args.assets} at interval: {args.interval}")
def add_event(msg: str):
"""Log an informational event and push it into the recent events deque."""
logging.info(msg)
recent_events.append(msg)
async def run_loop():
"""Main trading loop that gathers data, calls the agent, and executes trades."""
nonlocal invocation_count, initial_account_value
while True:
invocation_count += 1
minutes_since_start = (datetime.now(timezone.utc) - start_time).total_seconds() / 60
# Global account state
state = await hyperliquid.get_user_state()
total_value = state.get('total_value') or (state['balance'] + sum(p.get('pnl', 0) for p in state['positions']))
sharpe = calculate_sharpe(trade_log)
account_value = total_value
if initial_account_value is None:
initial_account_value = account_value
total_return_pct = ((account_value - initial_account_value) / initial_account_value * 100.0) if initial_account_value else 0.0
positions = []
for pos in state['positions']:
coin = pos.get('coin')
current_px = await hyperliquid.get_current_price(coin) if coin else None
positions.append({
"symbol": coin,
"quantity": round_or_none(pos.get('szi'), 6),
"entry_price": round_or_none(pos.get('entryPx'), 2),
"current_price": round_or_none(current_px, 2),
"liquidation_price": round_or_none(pos.get('liquidationPx') or pos.get('liqPx'), 2),
"unrealized_pnl": round_or_none(pos.get('pnl'), 4),
"leverage": pos.get('leverage')
})
recent_diary = []
try:
with open(diary_path, "r") as f:
lines = f.readlines()
for line in lines[-10:]:
entry = json.loads(line)
recent_diary.append(entry)
except Exception:
pass
open_orders_struct = []
try:
open_orders = await hyperliquid.get_open_orders()
for o in open_orders[:50]:
open_orders_struct.append({
"coin": o.get('coin'),
"oid": o.get('oid'),
"is_buy": o.get('isBuy'),
"size": round_or_none(o.get('sz'), 6),
"price": round_or_none(o.get('px'), 2),
"trigger_price": round_or_none(o.get('triggerPx'), 2),
"order_type": o.get('orderType')
})
except Exception:
open_orders = []
# Reconcile active trades
try:
assets_with_positions = set()
for pos in state['positions']:
try:
if abs(float(pos.get('szi') or 0)) > 0:
assets_with_positions.add(pos.get('coin'))
except Exception:
continue
assets_with_orders = {o.get('coin') for o in (open_orders or []) if o.get('coin')}
for tr in active_trades[:]:
asset = tr.get('asset')
if asset not in assets_with_positions and asset not in assets_with_orders:
add_event(f"Reconciling stale active trade for {asset} (no position, no orders)")
active_trades.remove(tr)
with open(diary_path, "a") as f:
f.write(json.dumps({
"timestamp": datetime.now(timezone.utc).isoformat(),
"asset": asset,
"action": "reconcile_close",
"reason": "no_position_no_orders",
"opened_at": tr.get('opened_at')
}) + "\n")
except Exception:
pass
recent_fills_struct = []
try:
fills = await hyperliquid.get_recent_fills(limit=50)
for f_entry in fills[-20:]:
try:
t_raw = f_entry.get('time') or f_entry.get('timestamp')
timestamp = None
if t_raw is not None:
try:
t_int = int(t_raw)
if t_int > 1e12:
timestamp = datetime.fromtimestamp(t_int / 1000, tz=timezone.utc).isoformat()
else:
timestamp = datetime.fromtimestamp(t_int, tz=timezone.utc).isoformat()
except Exception:
timestamp = str(t_raw)
recent_fills_struct.append({
"timestamp": timestamp,
"coin": f_entry.get('coin') or f_entry.get('asset'),
"is_buy": f_entry.get('isBuy'),
"size": round_or_none(f_entry.get('sz') or f_entry.get('size'), 6),
"price": round_or_none(f_entry.get('px') or f_entry.get('price'), 2)
})
except Exception:
continue
except Exception:
pass
dashboard = {
"total_return_pct": round(total_return_pct, 2),
"balance": round_or_none(state['balance'], 2),
"account_value": round_or_none(account_value, 2),
"sharpe_ratio": round_or_none(sharpe, 3),
"positions": positions,
"active_trades": [
{
"asset": tr.get('asset'),
"is_long": tr.get('is_long'),
"amount": round_or_none(tr.get('amount'), 6),
"entry_price": round_or_none(tr.get('entry_price'), 2),
"tp_oid": tr.get('tp_oid'),
"sl_oid": tr.get('sl_oid'),
"exit_plan": tr.get('exit_plan'),
"opened_at": tr.get('opened_at')
}
for tr in active_trades
],
"open_orders": open_orders_struct,
"recent_diary": recent_diary,
"recent_fills": recent_fills_struct,
}
# Gather data for ALL assets first
market_sections = []
asset_prices = {}
for asset in args.assets:
try:
current_price = await hyperliquid.get_current_price(asset)
asset_prices[asset] = current_price
if asset not in price_history:
price_history[asset] = deque(maxlen=60)
price_history[asset].append({"t": datetime.now(timezone.utc).isoformat(), "mid": round_or_none(current_price, 2)})
oi = await hyperliquid.get_open_interest(asset)
funding = await hyperliquid.get_funding_rate(asset)
intraday_tf = "5m"
ema_series = taapi.fetch_series("ema", f"{asset}/USDT", intraday_tf, results=10, params={"period": 20}, value_key="value")
macd_series = taapi.fetch_series("macd", f"{asset}/USDT", intraday_tf, results=10, value_key="valueMACD")
rsi7_series = taapi.fetch_series("rsi", f"{asset}/USDT", intraday_tf, results=10, params={"period": 7}, value_key="value")
rsi14_series = taapi.fetch_series("rsi", f"{asset}/USDT", intraday_tf, results=10, params={"period": 14}, value_key="value")
lt_ema20 = taapi.fetch_value("ema", f"{asset}/USDT", "4h", params={"period": 20}, key="value")
lt_ema50 = taapi.fetch_value("ema", f"{asset}/USDT", "4h", params={"period": 50}, key="value")
lt_atr3 = taapi.fetch_value("atr", f"{asset}/USDT", "4h", params={"period": 3}, key="value")
lt_atr14 = taapi.fetch_value("atr", f"{asset}/USDT", "4h", params={"period": 14}, key="value")
lt_macd_series = taapi.fetch_series("macd", f"{asset}/USDT", "4h", results=10, value_key="valueMACD")
lt_rsi_series = taapi.fetch_series("rsi", f"{asset}/USDT", "4h", results=10, params={"period": 14}, value_key="value")
recent_mids = [entry["mid"] for entry in list(price_history.get(asset, []))[-10:]]
funding_annualized = round(funding * 24 * 365 * 100, 2) if funding is not None else None
market_sections.append({
"asset": asset,
"current_price": round_or_none(current_price, 2),
"intraday": {
"ema20": round_or_none(ema_series[-1], 2) if ema_series else None,
"macd": round_or_none(macd_series[-1], 2) if macd_series else None,
"rsi7": round_or_none(rsi7_series[-1], 2) if rsi7_series else None,
"rsi14": round_or_none(rsi14_series[-1], 2) if rsi14_series else None,
"series": {
"ema20": round_series(ema_series, 2),
"macd": round_series(macd_series, 2),
"rsi7": round_series(rsi7_series, 2),
"rsi14": round_series(rsi14_series, 2)
}
},
"long_term": {
"ema20": round_or_none(lt_ema20, 2),
"ema50": round_or_none(lt_ema50, 2),
"atr3": round_or_none(lt_atr3, 2),
"atr14": round_or_none(lt_atr14, 2),
"macd_series": round_series(lt_macd_series, 2),
"rsi_series": round_series(lt_rsi_series, 2)
},
"open_interest": round_or_none(oi, 2),
"funding_rate": round_or_none(funding, 8),
"funding_annualized_pct": funding_annualized,
"recent_mid_prices": recent_mids
})
except Exception as e:
add_event(f"Data gather error {asset}: {e}")
continue
# Single LLM call with all assets
context_payload = OrderedDict([
("invocation", {
"minutes_since_start": round(minutes_since_start, 2),
"current_time": datetime.now(timezone.utc).isoformat(),
"invocation_count": invocation_count
}),
("account", dashboard),
("market_data", market_sections),
("instructions", {
"assets": args.assets,
"requirement": "Decide actions for all assets and return a strict JSON array matching the schema."
})
])
context = json.dumps(context_payload, default=json_default)
add_event(f"Combined prompt length: {len(context)} chars for {len(args.assets)} assets")
with open("prompts.log", "a") as f:
f.write(f"\n\n--- {datetime.now()} - ALL ASSETS ---\n{json.dumps(context_payload, indent=2, default=json_default)}\n")
def _is_failed_outputs(outs):
"""Return True when outputs are missing or clearly invalid."""
if not isinstance(outs, dict):
return True
decisions = outs.get("trade_decisions")
if not isinstance(decisions, list) or not decisions:
return True
try:
return all(
isinstance(o, dict)
and (o.get('action') == 'hold')
and ('parse error' in (o.get('rationale') or '').lower())
for o in decisions
)
except Exception:
return True
try:
outputs = agent.decide_trade(args.assets, context)
if not isinstance(outputs, dict):
add_event(f"Invalid output format (expected dict): {outputs}")
outputs = {}
except Exception as e:
import traceback
add_event(f"Agent error: {e}")
add_event(f"Traceback: {traceback.format_exc()}")
outputs = {}
# Retry once on failure/parse error with a stricter instruction prefix
if _is_failed_outputs(outputs):
add_event("Retrying LLM once due to invalid/parse-error output")
context_retry_payload = OrderedDict([
("retry_instruction", "Return ONLY the JSON array per schema with no prose."),
("original_context", context_payload)
])
context_retry = json.dumps(context_retry_payload, default=json_default)
try:
outputs = agent.decide_trade(args.assets, context_retry)
if not isinstance(outputs, dict):
add_event(f"Retry invalid format: {outputs}")
outputs = {}
except Exception as e:
import traceback
add_event(f"Retry agent error: {e}")
add_event(f"Retry traceback: {traceback.format_exc()}")
outputs = {}
reasoning_text = outputs.get("reasoning", "") if isinstance(outputs, dict) else ""
if reasoning_text:
add_event(f"LLM reasoning summary: {reasoning_text}")
# Execute trades for each asset
for output in outputs.get("trade_decisions", []) if isinstance(outputs, dict) else []:
try:
asset = output.get("asset")
if not asset or asset not in args.assets:
continue
action = output.get("action")
current_price = asset_prices.get(asset, 0)
rationale = output.get("rationale", "")
if rationale:
add_event(f"Decision rationale for {asset}: {rationale}")
if action in ("buy", "sell"):
is_buy = action == "buy"
alloc_usd = float(output.get("allocation_usd", 0.0))
if alloc_usd <= 0:
add_event(f"Holding {asset}: zero/negative allocation")
continue
if not current_price:
add_event(f"Skipping {asset}: no current price available")
continue
amount = alloc_usd / current_price
order = await hyperliquid.place_buy_order(asset, amount) if is_buy else await hyperliquid.place_sell_order(asset, amount)
# Confirm by checking recent fills for this asset shortly after placing
await asyncio.sleep(1)
fills_check = await hyperliquid.get_recent_fills(limit=10)
filled = False
for fc in reversed(fills_check):
try:
if (fc.get('coin') == asset or fc.get('asset') == asset):
filled = True
break
except Exception:
continue
trade_log.append({"type": action, "price": current_price, "amount": amount, "exit_plan": output.get("exit_plan", ""), "filled": filled})
tp_oid = None
sl_oid = None
if output.get("tp_price"):
tp_order = await hyperliquid.place_take_profit(asset, is_buy, amount, output["tp_price"])
tp_oids = hyperliquid.extract_oids(tp_order)
tp_oid = tp_oids[0] if tp_oids else None
add_event(f"TP placed {asset} at {output['tp_price']}")
if output.get("sl_price"):
sl_order = await hyperliquid.place_stop_loss(asset, is_buy, amount, output["sl_price"])
sl_oids = hyperliquid.extract_oids(sl_order)
sl_oid = sl_oids[0] if sl_oids else None
add_event(f"SL placed {asset} at {output['sl_price']}")
# Reconcile: if opposite-side position exists or TP/SL just filled, clear stale active_trades for this asset
for existing in active_trades[:]:
if existing.get('asset') == asset:
try:
active_trades.remove(existing)
except ValueError:
pass
active_trades.append({
"asset": asset,
"is_long": is_buy,
"amount": amount,
"entry_price": current_price,
"tp_oid": tp_oid,
"sl_oid": sl_oid,
"exit_plan": output.get("exit_plan", ""),
"opened_at": datetime.now(timezone.utc).isoformat()
})
add_event(f"{action.upper()} {asset} amount {amount:.4f} at ~{current_price}")
if rationale:
add_event(f"Post-trade rationale for {asset}: {rationale}")
# Write to diary after confirming fills status
with open(diary_path, "a") as f:
diary_entry = {
"timestamp": datetime.now(timezone.utc).isoformat(),
"asset": asset,
"action": action,
"allocation_usd": alloc_usd,
"amount": amount,
"entry_price": current_price,
"tp_price": output.get("tp_price"),
"tp_oid": tp_oid,
"sl_price": output.get("sl_price"),
"sl_oid": sl_oid,
"exit_plan": output.get("exit_plan", ""),
"rationale": output.get("rationale", ""),
"order_result": str(order),
"opened_at": datetime.now(timezone.utc).isoformat(),
"filled": filled
}
f.write(json.dumps(diary_entry) + "\n")
else:
add_event(f"Hold {asset}: {output.get('rationale', '')}")
# Write hold to diary
with open(diary_path, "a") as f:
diary_entry = {
"timestamp": datetime.now(timezone.utc).isoformat(),
"asset": asset,
"action": "hold",
"rationale": output.get("rationale", "")
}
f.write(json.dumps(diary_entry) + "\n")
except Exception as e:
import traceback
add_event(f"Execution error {output.get('asset', '?') if isinstance(output, dict) else '?'}: {e}")
add_event(f"Traceback: {traceback.format_exc()}")
await asyncio.sleep(get_interval_seconds(args.interval))
async def handle_diary(request):
"""Return diary entries as JSON or newline-delimited text."""
try:
raw = request.query.get('raw')
download = request.query.get('download')
if raw or download:
if not os.path.exists(diary_path):
return web.Response(text="", content_type="text/plain")
with open(diary_path, "r") as f:
data = f.read()
headers = {}
if download:
headers["Content-Disposition"] = "attachment; filename=diary.jsonl"
return web.Response(text=data, content_type="text/plain", headers=headers)
limit = int(request.query.get('limit', '200'))
with open(diary_path, "r") as f:
lines = f.readlines()
start = max(0, len(lines) - limit)
entries = [json.loads(l) for l in lines[start:]]
return web.json_response({"entries": entries})
except FileNotFoundError:
return web.json_response({"entries": []})
except Exception as e:
return web.json_response({"error": str(e)}, status=500)
async def handle_logs(request):
"""Stream log files with optional download or tailing behaviour."""
try:
path = request.query.get('path', 'llm_requests.log')
download = request.query.get('download')
limit_param = request.query.get('limit')
if not os.path.exists(path):
return web.Response(text="", content_type="text/plain")
with open(path, "r") as f:
data = f.read()
if download or (limit_param and (limit_param.lower() == 'all' or limit_param == '-1')):
headers = {}
if download:
headers["Content-Disposition"] = f"attachment; filename={os.path.basename(path)}"
return web.Response(text=data, content_type="text/plain", headers=headers)
limit = int(limit_param) if limit_param else 2000
# Note: tails by characters, not lines
return web.Response(text=data[-limit:], content_type="text/plain")
except Exception as e:
return web.json_response({"error": str(e)}, status=500)
async def start_api(app):
"""Register HTTP endpoints for observing diary entries and logs."""
app.router.add_get('/diary', handle_diary)
app.router.add_get('/logs', handle_logs)
async def main_async():
"""Start the aiohttp server and kick off the trading loop."""
app = web.Application()
await start_api(app)
from src.config_loader import CONFIG as CFG
runner = web.AppRunner(app)
await runner.setup()
site = web.TCPSite(runner, CFG.get("api_host"), int(CFG.get("api_port")))
await site.start()
await run_loop()
def calculate_total_return(state, trade_log):
"""Compute percent return relative to an assumed initial balance."""
initial = 10000
current = state['balance'] + sum(p.get('pnl', 0) for p in state.get('positions', []))
return ((current - initial) / initial) * 100 if initial else 0
def calculate_sharpe(returns):
"""Compute a naive Sharpe-like ratio from the trade log."""
if not returns:
return 0
vals = [r.get('pnl', 0) for r in returns]  # entries without a recorded pnl count as 0
if not vals:
return 0
mean = sum(vals) / len(vals)
var = sum((v - mean) ** 2 for v in vals) / len(vals)
std = math.sqrt(var) if var > 0 else 0
return mean / std if std > 0 else 0
async def check_exit_condition(trade, taapi, hyperliquid):
"""Evaluate whether a given trade's exit plan triggers a close."""
plan = (trade.get("exit_plan") or "").lower()
if not plan:
return False
try:
if "macd" in plan and "below" in plan:
macd = taapi.get_indicators(trade["asset"], "4h")["macd"]["valueMACD"]
threshold = float(plan.split("below")[-1].strip())
return macd < threshold
if "close above ema50" in plan:
ema50 = taapi.get_historical_indicator("ema", f"{trade['asset']}/USDT", "4h", results=1, params={"period": 50})[0]["value"]
current = await hyperliquid.get_current_price(trade["asset"])
return current > ema50
except Exception:
return False
return False
asyncio.run(main_async())
if __name__ == "__main__":
main()
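The `calculate_sharpe` helper above divides the mean per-trade pnl by its population standard deviation, returning 0 when there is no dispersion. A self-contained sketch (a standalone reimplementation for illustration, not repo code) with worked values:

```python
import math

# Standalone sketch of calculate_sharpe's arithmetic: mean pnl over the
# population standard deviation, 0.0 when the inputs have no dispersion.
def naive_sharpe(pnls: list[float]) -> float:
    if not pnls:
        return 0.0
    mean = sum(pnls) / len(pnls)
    var = sum((v - mean) ** 2 for v in pnls) / len(pnls)
    std = math.sqrt(var)
    return mean / std if std > 0 else 0.0

assert naive_sharpe([]) == 0.0
assert naive_sharpe([1.0, 1.0]) == 0.0                 # zero dispersion
assert round(naive_sharpe([1.0, 2.0, 3.0]), 4) == 2.4495
```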
================================================
FILE: src/trading/__init__.py
================================================
================================================
FILE: src/trading/hyperliquid_api.py
================================================
"""High-level Hyperliquid exchange client with async retry helpers.
This module wraps the Hyperliquid `Exchange` and `Info` SDK classes to provide a
single entry point for submitting trades, managing orders, and retrieving market
state. It normalizes retry behaviour, adds logging, and caches metadata so that
the trading agent can depend on predictable, non-blocking IO.
"""
import asyncio
import logging
import aiohttp
from typing import TYPE_CHECKING
from src.config_loader import CONFIG
from hyperliquid.exchange import Exchange
from hyperliquid.info import Info
from hyperliquid.utils import constants # For MAINNET/TESTNET
from eth_account import Account as _Account
from eth_account.signers.local import LocalAccount
from websocket._exceptions import WebSocketConnectionClosedException
import socket
if TYPE_CHECKING:
# Type stubs for linter - eth_account's type stubs are incorrect
class Account:
@staticmethod
def from_key(_private_key: str) -> LocalAccount: ...
@staticmethod
def from_mnemonic(_mnemonic: str) -> LocalAccount: ...
@staticmethod
def enable_unaudited_hdwallet_features() -> None: ...
else:
Account = _Account
class HyperliquidAPI:
"""Facade around Hyperliquid SDK clients with async convenience methods.
The class owns wallet credentials, connection configuration, and provides
coroutine helpers that keep retry semantics and logging consistent across
the trading agent.
"""
def __init__(self):
"""Initialize wallet credentials and instantiate exchange clients.
Raises:
ValueError: If neither a private key nor mnemonic is present in the
configuration.
"""
self._meta_cache = None
if "hyperliquid_private_key" in CONFIG and CONFIG["hyperliquid_private_key"]:
self.wallet = Account.from_key(CONFIG["hyperliquid_private_key"])
elif "mnemonic" in CONFIG and CONFIG["mnemonic"]:
Account.enable_unaudited_hdwallet_features()
self.wallet = Account.from_mnemonic(CONFIG["mnemonic"])
else:
raise ValueError("Either HYPERLIQUID_PRIVATE_KEY or MNEMONIC must be provided")
# Choose base URL: allow override via env-config; fallback to network selection
network = (CONFIG.get("hyperliquid_network") or "mainnet").lower()
base_url = CONFIG.get("hyperliquid_base_url")
if not base_url:
if network == "testnet":
base_url = getattr(constants, "TESTNET_API_URL", constants.MAINNET_API_URL)
else:
base_url = constants.MAINNET_API_URL
self.base_url = base_url
self._build_clients()
def _build_clients(self):
"""Instantiate exchange and info client instances for the active base URL."""
self.info = Info(self.base_url)
self.exchange = Exchange(self.wallet, self.base_url)
def _reset_clients(self):
"""Recreate SDK clients after connection failures while logging failures."""
try:
self._build_clients()
logging.warning("Hyperliquid clients re-instantiated after connection issue")
except (ValueError, AttributeError, RuntimeError) as e:
logging.error("Failed to reset Hyperliquid clients: %s", e)
async def _retry(self, fn, *args, max_attempts: int = 3, backoff_base: float = 0.5, reset_on_fail: bool = True, to_thread: bool = True, **kwargs):
"""Retry helper with exponential backoff and optional thread offloading.
Args:
fn: Callable to invoke, either sync (supports `asyncio.to_thread`) or
async depending on ``to_thread``. The callable should raise
exceptions rather than returning sentinel values.
*args: Positional arguments forwarded to ``fn``.
max_attempts: Maximum number of attempts before surfacing the last
exception.
backoff_base: Initial delay in seconds, doubled after each failure.
reset_on_fail: Whether to rebuild Hyperliquid clients after a
failure.
to_thread: If ``True`` the callable is executed in a worker thread.
**kwargs: Keyword arguments forwarded to ``fn``.
Returns:
Result produced by ``fn``.
Raises:
Exception: Propagates any exception raised by ``fn`` after retries.
"""
last_err = None
for attempt in range(max_attempts):
try:
if to_thread:
return await asyncio.to_thread(fn, *args, **kwargs)
return await fn(*args, **kwargs)
except (WebSocketConnectionClosedException, aiohttp.ClientError, ConnectionError, TimeoutError, socket.timeout) as e:
last_err = e
logging.warning("HL call failed (attempt %s/%s): %s", attempt + 1, max_attempts, e)
if reset_on_fail:
self._reset_clients()
await asyncio.sleep(backoff_base * (2 ** attempt))
continue
except (RuntimeError, ValueError, KeyError, AttributeError) as e:
# Unknown errors: don't spin forever, but allow a quick reset once
last_err = e
logging.warning("HL call unexpected error (attempt %s/%s): %s", attempt + 1, max_attempts, e)
if reset_on_fail and attempt == 0:
self._reset_clients()
await asyncio.sleep(backoff_base)
continue
break
raise last_err if last_err else RuntimeError("Hyperliquid retry: unknown error")
def round_size(self, asset, amount):
"""Round order size to the asset precision defined by market metadata.
Args:
asset: Symbol of the market whose contract size we are rounding to.
amount: Desired contract size before rounding.
Returns:
The input ``amount`` rounded to the market's ``szDecimals`` precision.
"""
meta = self._meta_cache[0] if self._meta_cache else None
if meta:
universe = meta.get("universe", [])
asset_info = next((u for u in universe if u.get("name") == asset), None)
if asset_info:
decimals = asset_info.get("szDecimals", 8)
return round(amount, decimals)
return round(amount, 8)
async def place_buy_order(self, asset, amount, slippage=0.01):
"""Submit a market buy order with exchange-side rounding and retry logic.
Args:
asset: Market symbol to open.
amount: Contract size to open before rounding.
slippage: Maximum acceptable slippage expressed as a decimal.
Returns:
Raw SDK response from :meth:`Exchange.market_open`.
"""
amount = self.round_size(asset, amount)
return await self._retry(lambda: self.exchange.market_open(asset, True, amount, None, slippage))
async def place_sell_order(self, asset, amount, slippage=0.01):
"""Submit a market sell order with exchange-side rounding and retry logic.
Args:
asset: Market symbol to open.
amount: Contract size to open before rounding.
slippage: Maximum acceptable slippage expressed as a decimal.
Returns:
Raw SDK response from :meth:`Exchange.market_open`.
"""
amount = self.round_size(asset, amount)
return await self._retry(lambda: self.exchange.market_open(asset, False, amount, None, slippage))
async def place_take_profit(self, asset, is_buy, amount, tp_price):
"""Create a reduce-only trigger order that executes a take-profit exit.
Args:
asset: Market symbol to trade.
is_buy: ``True`` if the original position is long; dictates close
direction.
amount: Contract size to close.
tp_price: Trigger price for the take-profit order.
Returns:
Raw SDK response from `Exchange.order`.
"""
amount = self.round_size(asset, amount)
order_type = {"trigger": {"triggerPx": tp_price, "isMarket": True, "tpsl": "tp"}}
return await self._retry(lambda: self.exchange.order(asset, not is_buy, amount, tp_price, order_type, True))
async def place_stop_loss(self, asset, is_buy, amount, sl_price):
"""Create a reduce-only trigger order that executes a stop-loss exit.
Args:
asset: Market symbol to trade.
is_buy: ``True`` if the original position is long; dictates close
direction.
amount: Contract size to close.
sl_price: Trigger price for the stop-loss order.
Returns:
Raw SDK response from `Exchange.order`.
"""
amount = self.round_size(asset, amount)
order_type = {"trigger": {"triggerPx": sl_price, "isMarket": True, "tpsl": "sl"}}
return await self._retry(lambda: self.exchange.order(asset, not is_buy, amount, sl_price, order_type, True))
async def cancel_order(self, asset, oid):
"""Cancel a single order by identifier for a given asset.
Args:
asset: Market symbol associated with the order.
oid: Hyperliquid order identifier to cancel.
Returns:
Raw SDK response from :meth:`Exchange.cancel`.
"""
return await self._retry(lambda: self.exchange.cancel(asset, oid))
async def cancel_all_orders(self, asset):
"""Cancel every open order for ``asset`` owned by the configured wallet."""
try:
open_orders = await self._retry(lambda: self.info.frontend_open_orders(self.wallet.address))
for order in open_orders:
if order.get("coin") == asset:
oid = order.get("oid")
if oid:
await self.cancel_order(asset, oid)
return {"status": "ok", "cancelled_count": len([o for o in open_orders if o.get("coin") == asset])}
except (RuntimeError, ValueError, KeyError, ConnectionError) as e:
logging.error("Cancel all orders error for %s: %s", asset, e)
return {"status": "error", "message": str(e)}
async def get_open_orders(self):
"""Fetch and normalize open orders associated with the wallet.
Returns:
List of order dictionaries augmented with ``triggerPx`` when present.
"""
try:
orders = await self._retry(lambda: self.info.frontend_open_orders(self.wallet.address))
# Normalize trigger price if present in orderType
for o in orders:
try:
ot = o.get("orderType")
if isinstance(ot, dict) and "trigger" in ot:
trig = ot.get("trigger") or {}
if "triggerPx" in trig:
o["triggerPx"] = float(trig["triggerPx"])
except (ValueError, KeyError, TypeError):
continue
return orders
except (RuntimeError, ValueError, KeyError, ConnectionError) as e:
logging.error("Get open orders error: %s", e)
return []

    async def get_recent_fills(self, limit: int = 50):
        """Return the most recent fills when supported by the SDK variant.

        Args:
            limit: Maximum number of fills to return.

        Returns:
            List of fill dictionaries or an empty list if unsupported.
        """
        try:
            # Some SDK versions expose user_fills; fall back gracefully if absent
            if hasattr(self.info, 'user_fills'):
                fills = await self._retry(lambda: self.info.user_fills(self.wallet.address))
            elif hasattr(self.info, 'fills'):
                fills = await self._retry(lambda: self.info.fills(self.wallet.address))
            else:
                return []
            if isinstance(fills, list):
                return fills[-limit:]
            return []
        except (RuntimeError, ValueError, KeyError, ConnectionError, AttributeError) as e:
            logging.error("Get recent fills error: %s", e)
            return []

    def extract_oids(self, order_result):
        """Extract resting or filled order identifiers from an exchange response.

        Args:
            order_result: Raw order response payload returned by the exchange.

        Returns:
            List of order identifiers present in resting or filled status entries.
        """
        oids = []
        try:
            statuses = order_result["response"]["data"]["statuses"]
            for st in statuses:
                if "resting" in st and "oid" in st["resting"]:
                    oids.append(st["resting"]["oid"])
                if "filled" in st and "oid" in st["filled"]:
                    oids.append(st["filled"]["oid"])
        except (KeyError, TypeError, ValueError):
            pass
        return oids

    async def get_user_state(self):
        """Retrieve wallet state with enriched position PnL calculations.

        Returns:
            Dictionary with ``balance``, ``total_value``, and ``positions``.
        """
        state = await self._retry(lambda: self.info.user_state(self.wallet.address))
        positions = state.get("assetPositions", [])
        total_value = float(state.get("accountValue", 0.0))
        enriched_positions = []
        for pos_wrap in positions:
            pos = pos_wrap["position"]
            entry_px = float(pos.get("entryPx", 0) or 0)
            size = float(pos.get("szi", 0) or 0)
            side = "long" if size > 0 else "short"
            current_px = await self.get_current_price(pos["coin"]) if entry_px and size else 0.0
            pnl = (current_px - entry_px) * abs(size) if side == "long" else (entry_px - current_px) * abs(size)
            pos["pnl"] = pnl
            pos["notional_entry"] = abs(size) * entry_px
            enriched_positions.append(pos)
        balance = float(state.get("withdrawable", 0.0))
        if not total_value:
            total_value = balance + sum(max(p.get("pnl", 0.0), 0.0) for p in enriched_positions)
        return {"balance": balance, "total_value": total_value, "positions": enriched_positions}

    async def get_current_price(self, asset):
        """Return the latest mid-price for ``asset``.

        Args:
            asset: Market symbol to query.

        Returns:
            Mid-price as a float, or ``0.0`` when unavailable.
        """
        mids = await self._retry(self.info.all_mids)
        return float(mids.get(asset, 0.0))

    async def get_meta_and_ctxs(self):
        """Return cached meta/context information, fetching once per lifecycle.

        Returns:
            Cached metadata response as returned by
            :meth:`Info.meta_and_asset_ctxs`.
        """
        if not self._meta_cache:
            response = await self._retry(self.info.meta_and_asset_ctxs)
            self._meta_cache = response
        return self._meta_cache

    async def get_open_interest(self, asset):
        """Return open interest for ``asset`` if it exists in cached metadata.

        Args:
            asset: Market symbol to query.

        Returns:
            Rounded open interest or ``None`` if unavailable.
        """
        try:
            data = await self.get_meta_and_ctxs()
            if isinstance(data, list) and len(data) >= 2:
                meta, asset_ctxs = data[0], data[1]
                universe = meta.get("universe", [])
                asset_idx = next((i for i, u in enumerate(universe) if u.get("name") == asset), None)
                if asset_idx is not None and asset_idx < len(asset_ctxs):
                    oi = asset_ctxs[asset_idx].get("openInterest")
                    return round(float(oi), 2) if oi else None
            return None
        except (RuntimeError, ValueError, KeyError, ConnectionError, TypeError) as e:
            logging.error("OI fetch error for %s: %s", asset, e)
            return None

    async def get_funding_rate(self, asset):
        """Return the most recent funding rate for ``asset`` if available.

        Args:
            asset: Market symbol to query.

        Returns:
            Funding rate as a float or ``None`` when not present.
        """
        try:
            data = await self.get_meta_and_ctxs()
            if isinstance(data, list) and len(data) >= 2:
                meta, asset_ctxs = data[0], data[1]
                universe = meta.get("universe", [])
                asset_idx = next((i for i, u in enumerate(universe) if u.get("name") == asset), None)
                if asset_idx is not None and asset_idx < len(asset_ctxs):
                    funding = asset_ctxs[asset_idx].get("funding")
                    return round(float(funding), 8) if funding else None
            return None
        except (RuntimeError, ValueError, KeyError, ConnectionError, TypeError) as e:
            logging.error("Funding fetch error for %s: %s", asset, e)
            return None
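Editor's note: `extract_oids` is a pure dictionary traversal, so its behavior is easy to see in isolation. The sketch below restates the same logic as a standalone function and runs it against a hypothetical response payload (the payload shape mirrors what the method expects; the sample oids and error text are invented for illustration and are not from the repository):

```python
def extract_oids(order_result):
    """Collect oids from resting/filled status entries (mirrors HyperliquidAPI.extract_oids)."""
    oids = []
    try:
        statuses = order_result["response"]["data"]["statuses"]
        for st in statuses:
            if "resting" in st and "oid" in st["resting"]:
                oids.append(st["resting"]["oid"])
            if "filled" in st and "oid" in st["filled"]:
                oids.append(st["filled"]["oid"])
    except (KeyError, TypeError, ValueError):
        pass
    return oids


# Hypothetical exchange response: one resting order, one fill, one rejected status
mock_result = {
    "response": {
        "data": {
            "statuses": [
                {"resting": {"oid": 101}},
                {"filled": {"oid": 102, "totalSz": "0.5"}},
                {"error": "insufficient margin"},
            ]
        }
    }
}

print(extract_oids(mock_result))       # [101, 102]
print(extract_oids({"response": {}}))  # [] — malformed payloads fail soft
```

Note that statuses without a recognized key (such as errors) are silently skipped, and a payload missing any level of nesting yields an empty list rather than an exception.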
================================================
FILE: src/utils/__init__.py
================================================
"""Utility modules for the trading agent."""
================================================
FILE: src/utils/formatting.py
================================================
"""Utility helpers for consistently formatting numeric values."""
def format_number(value, decimals=2):
"""Round ``value`` to ``decimals`` digits when possible, otherwise return raw."""
try:
return round(float(value), decimals)
except Exception:
return value
def format_size(value):
"""Format position sizes with 6 decimal place precision."""
return format_number(value, 6)
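Editor's note: the behavior of these two helpers is easiest to see with a few inputs. The sketch below redefines them inline so it runs standalone (it mirrors, but is not, the repository module):

```python
def format_number(value, decimals=2):
    """Round value when it is numeric; hand back anything unconvertible untouched."""
    try:
        return round(float(value), decimals)
    except (TypeError, ValueError, OverflowError):
        return value


def format_size(value):
    """Sizes get 6 decimal places so small perp quantities survive rounding."""
    return format_number(value, 6)


print(format_number("3.14159"))  # 3.14  (numeric strings are coerced)
print(format_number(None))       # None  (unconvertible input is returned as-is)
print(format_size(0.123456789))  # 0.123457
```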
================================================
FILE: src/utils/prompt_utils.py
================================================
"""Prompt serialization helpers shared across agent entry points."""
from __future__ import annotations
from datetime import datetime
from typing import Iterable, Any
def json_default(obj: Any) -> Any:
"""Serialize datetime and set objects for JSON dumps."""
if isinstance(obj, datetime):
return obj.isoformat()
if isinstance(obj, set):
return list(obj)
return str(obj)
def safe_float(value: Any) -> float | None:
"""Cast ``value`` to float when possible, otherwise return ``None``."""
try:
return float(value)
except (TypeError, ValueError):
return None
def round_or_none(value: Any, decimals: int = 2) -> float | None:
"""Round numeric values to ``decimals`` places, preserving ``None``."""
numeric = safe_float(value)
if numeric is None:
return None
return round(numeric, decimals)
def round_series(series: Iterable[Any] | None, decimals: int = 2) -> list[float | None]:
"""Round each entry in ``series`` to ``decimals`` places when numeric."""
if not series:
return []
rounded: list[float | None] = []
for val in series:
numeric = safe_float(val)
rounded.append(round(numeric, decimals) if numeric is not None else None)
return rounded
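Editor's note: these helpers exist so arbitrary indicator data can be serialized into an LLM prompt without raising mid-run. The sketch below exercises them with inline copies so it runs standalone (sample values are invented for illustration):

```python
import json
from datetime import datetime


# Inline copies of the helpers above so the sketch is self-contained
def json_default(obj):
    if isinstance(obj, datetime):
        return obj.isoformat()
    if isinstance(obj, set):
        return list(obj)
    return str(obj)


def safe_float(value):
    try:
        return float(value)
    except (TypeError, ValueError):
        return None


def round_series(series, decimals=2):
    if not series:
        return []
    out = []
    for val in series:
        numeric = safe_float(val)
        out.append(round(numeric, decimals) if numeric is not None else None)
    return out


# Non-numeric entries become None instead of raising mid-prompt
print(round_series(["2.7182", 3.14159, None, "n/a"]))  # [2.72, 3.14, None, None]

# json_default lets datetimes and sets survive json.dumps
payload = {"ts": datetime(2024, 1, 2, 3, 4, 5), "assets": {"BTC"}}
print(json.dumps(payload, default=json_default, sort_keys=True))
# {"assets": ["BTC"], "ts": "2024-01-02T03:04:05"}
```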
================================================
SYMBOL INDEX (49 symbols across 7 files)
================================================
FILE: src/agent/decision_maker.py
class TradingAgent (line 10) | class TradingAgent:
method __init__ (line 13) | def __init__(self):
method decide_trade (line 25) | def decide_trade(self, assets, context):
method _decide (line 37) | def _decide(self, context, assets):
FILE: src/config_loader.py
function _get_env (line 10) | def _get_env(name: str, default: str | None = None, required: bool = Fal...
function _get_bool (line 18) | def _get_bool(name: str, default: bool = False) -> bool:
function _get_int (line 25) | def _get_int(name: str, default: int | None = None) -> int | None:
function _get_json (line 35) | def _get_json(name: str, default: dict | None = None) -> dict | None:
function _get_list (line 48) | def _get_list(name: str, default: list[str] | None = None) -> list[str] ...
FILE: src/indicators/taapi_client.py
class TAAPIClient (line 10) | class TAAPIClient:
method __init__ (line 13) | def __init__(self):
method _get_with_retry (line 18) | def _get_with_retry(self, url, params, retries=3, backoff=0.5):
method get_indicators (line 41) | def get_indicators(self, asset, interval):
method get_historical_indicator (line 62) | def get_historical_indicator(self, indicator, symbol, interval, result...
method fetch_series (line 76) | def fetch_series(self, indicator: str, symbol: str, interval: str, res...
method fetch_value (line 107) | def fetch_value(self, indicator: str, symbol: str, interval: str, para...
FILE: src/main.py
function clear_terminal (line 27) | def clear_terminal():
function get_interval_seconds (line 32) | def get_interval_seconds(interval_str):
function main (line 43) | def main():
FILE: src/trading/hyperliquid_api.py
class Account (line 24) | class Account:
method from_key (line 26) | def from_key(_private_key: str) -> LocalAccount: ...
method from_mnemonic (line 28) | def from_mnemonic(_mnemonic: str) -> LocalAccount: ...
method enable_unaudited_hdwallet_features (line 30) | def enable_unaudited_hdwallet_features() -> None: ...
class HyperliquidAPI (line 34) | class HyperliquidAPI:
method __init__ (line 42) | def __init__(self):
method _build_clients (line 68) | def _build_clients(self):
method _reset_clients (line 73) | def _reset_clients(self):
method _retry (line 81) | async def _retry(self, fn, *args, max_attempts: int = 3, backoff_base:...
method round_size (line 127) | def round_size(self, asset, amount):
method place_buy_order (line 146) | async def place_buy_order(self, asset, amount, slippage=0.01):
method place_sell_order (line 160) | async def place_sell_order(self, asset, amount, slippage=0.01):
method place_take_profit (line 174) | async def place_take_profit(self, asset, is_buy, amount, tp_price):
method place_stop_loss (line 191) | async def place_stop_loss(self, asset, is_buy, amount, sl_price):
method cancel_order (line 208) | async def cancel_order(self, asset, oid):
method cancel_all_orders (line 220) | async def cancel_all_orders(self, asset):
method get_open_orders (line 234) | async def get_open_orders(self):
method get_recent_fills (line 257) | async def get_recent_fills(self, limit: int = 50):
method extract_oids (line 281) | def extract_oids(self, order_result):
method get_user_state (line 302) | async def get_user_state(self):
method get_current_price (line 327) | async def get_current_price(self, asset):
method get_meta_and_ctxs (line 339) | async def get_meta_and_ctxs(self):
method get_open_interest (line 351) | async def get_open_interest(self, asset):
method get_funding_rate (line 374) | async def get_funding_rate(self, asset):
FILE: src/utils/formatting.py
function format_number (line 4) | def format_number(value, decimals=2):
function format_size (line 12) | def format_size(value):
FILE: src/utils/prompt_utils.py
function json_default (line 9) | def json_default(obj: Any) -> Any:
function safe_float (line 18) | def safe_float(value: Any) -> float | None:
function round_or_none (line 26) | def round_or_none(value: Any, decimals: int = 2) -> float | None:
function round_series (line 34) | def round_series(series: Iterable[Any] | None, decimals: int = 2) -> lis...
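Editor's note: the index lists `_retry(self, fn, *args, max_attempts: int = 3, backoff_base: ...)` in hyperliquid_api.py, but its body is not included in this chunk. The sketch below shows what such an async exponential-backoff wrapper typically looks like; the retried exception set, the thread off-loading, and the delay schedule are assumptions, not the repository's actual implementation:

```python
import asyncio
import logging


async def retry_async(fn, *args, max_attempts=3, backoff_base=0.5):
    """Run fn(*args) with exponential backoff between failed attempts.

    Sketch only: HyperliquidAPI._retry may differ in which exceptions
    it retries and in how it schedules the blocking SDK call.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            # Hyperliquid SDK calls are synchronous, so off-load to a thread
            return await asyncio.to_thread(fn, *args)
        except (RuntimeError, ConnectionError, ValueError) as e:
            if attempt == max_attempts:
                raise
            delay = backoff_base * (2 ** (attempt - 1))  # 0.5s, 1s, 2s, ...
            logging.warning("Attempt %d failed (%s); retrying in %.1fs", attempt, e, delay)
            await asyncio.sleep(delay)


# Demo: a function that fails twice, then succeeds on the third attempt
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return 42

print(asyncio.run(retry_async(flaky, backoff_base=0.01)))  # 42
```

This pattern matches how the methods above invoke it, e.g. `await self._retry(self.info.all_mids)` or `await self._retry(lambda: self.info.user_state(addr))`.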
Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.