[
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2023 Janlay Wu\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# openai-cli\nA universal cli for OpenAI, written in BASH.\n\n# Features\n- [x] Scalable architecture allows for continuous support of new APIs.\n- [x] Custom API name, version, and all relevant properties.\n- [x] Dry-run mode (without actually initiating API calls) to facilitate debugging of APIs and save costs.\n- [x] New in v3: Supports any AI services that provide OpenAI-compatible APIs.\n\nImportant changes in version 3:\n- `-v api_version` in previous versions is now removed. If you have a custom `OPENAI_API_ENDPOINT`, you need to append API version in it. The internal API version is remove as some services like DeepSeek don't have version prefix in `/v1/chat/completions`.\n- `OPENAI_CHAT_MODEL` no longer supported, use `OPENAI_API_MODEL` instead.\n- By default, the request no longer includes optional parameters `temperature` / `max_tokens`. You need to explictly add `+temperature` / `+max_tokens` to customize if necessary.\n\nAvailable APIs:\n- [x] `chat/completions` (default API)\n- [x] `models`\n- [x] `images/generations`\n- [x] `embeddings`\n- [x] `moderations`\n\nThe default API `chat/completions` provides:\n- [x] Complete pipelining to interoperate with other applications\n- [x] Allow prompts to be read from command line arguments, file, and stdin\n- [x] Support streaming\n- [x] Support multiple topics\n- [x] Support continuous conversations.\n- [ ] Token usage\n\n# Installation\n- [jq](https://stedolan.github.io/jq/) is required.\n  - Linux: `sudo apt install jq`\n  - macOS: `brew install jq`\n- Download script and mark it executable:\n  ```bash\n  curl -fsSLOJ https://go.janlay.com/openai\n  chmod +x openai\n  ```\n  You may want to add this file to a directory in `$PATH`. 
\n\n  Also install the manual page, e.g.:\n  ```bash\n  pandoc -s -f markdown -t man README.md > /usr/local/man/man1/openai.1\n  ```\n  <details>\n  <summary>Further reading: curl's killer feature</summary>\n  <a href=\"https://daniel.haxx.se/blog/2020/09/10/store-the-curl-output-over-there/\"><code>-OJ</code> is a killer feature</a>\n  </details>\n\nNow you can try it out!\n\n# Tips\n## Getting started\nTo begin, type `openai -h` to access the help manual.\n\n⚠️ If you run `openai` directly, it may appear to be stuck because it expects prompt content from stdin, which is not yet available. To exit, simply press Ctrl+C to interrupt the process.\n\n<details>\n  <summary>Why are you so serious?</summary>\n  \nWhat happens when the `openai` command is executed without any parameters? It means that:\n- The default API used will be `chat/completions`.\n- The prompt will be read from stdin.\n- The program will wait for input while stdin remains empty.\n</details>\n\n## Quick Examples\nThe best way to understand how to use `openai` is to look at various use cases.\n- Debug API data for testing purposes  \n  `openai -n foo bar`\n- Say hello to OpenAI  \n  `openai Hello`\n- Use another model  \n  `openai +model=gpt-3.5-turbo-0301 Hello`\n- Disable streaming, and allow for more variation in answers  \n  `openai +stream=false +temperature=1.1 Hello`\n- Call another available API  \n  `openai -a models`\n- Create a topic named `en2fr` with an initial prompt  \n  `openai @en2fr Translate to French`\n- Use an existing topic  \n  `openai @en2fr Hello, world!`\n- Read the prompt from the clipboard, then send the result to another topic  \n  `pbpaste | openai | openai @en2fr`\n\n## Providing a prompt\nThere are multiple ways to provide a prompt to `openai`:\n- Enclose the prompt in single quotes `'` or double quotes `\"`  \n  `openai \"Please help me translate '你好' into English\"`\n- Use any argument that does not begin with a minus sign `-`  \n  `openai Hello, world!`\n- 
Place any arguments after `--`  \n  `openai -n -- What is the purpose of the -- argument in Linux commands`\n- Input from stdin  \n  `echo 'Hello, world!' | openai`\n- Specify a file path with `-f /path/to/file`  \n  `openai -f question.txt`\n- Use `-f-` for input from stdin  \n  `cat question.txt | openai -f-`\n\nChoose any one you like :-)\n\n## OpenAI key\n`$OPENAI_API_KEY` must be available to use this tool. Prepare your OpenAI key in your `~/.profile` file by adding this line:\n```bash\nexport OPENAI_API_KEY=sk-****\n```\nOr you may want to run with a temporary key for one-time use:\n```bash\nOPENAI_API_KEY=sk-**** openai hello\n```\n\nEnvironment variables can also be set in `$HOME/.openai/config`.\n\n## Working with compatible AI services\nYou can set the environment variable `OPENAI_COMPATIBLE_PROVIDER` to another service name in uppercase, such as `DEEPSEEK`. Then set the three environment variables `DEEPSEEK_API_ENDPOINT`, `DEEPSEEK_API_KEY`, and `DEEPSEEK_API_MODEL` respectively:\n```bash\nexport OPENAI_COMPATIBLE_PROVIDER=DEEPSEEK\nexport DEEPSEEK_API_ENDPOINT=https://api.deepseek.com\nexport DEEPSEEK_API_KEY=sk-***\nexport DEEPSEEK_API_MODEL=deepseek-chat\n```\n\nYou may have noticed that if you set `OPENAI_COMPATIBLE_PROVIDER` to `FOO`, you also need to set `FOO_API_ENDPOINT`, `FOO_API_KEY`, and `FOO_API_MODEL` accordingly.\n\nOnce you have configured multiple AI providers, you can switch between them by setting a temporary environment variable, `OPENAI_COMPATIBLE_PROVIDER`, as shown below:\n```bash\n# Switch to DeepSeek\nOPENAI_COMPATIBLE_PROVIDER=DEEPSEEK openai\n# Switch to Qwen\nOPENAI_COMPATIBLE_PROVIDER=QWEN openai\n# Switch to the default provider (OpenAI)\nOPENAI_COMPATIBLE_PROVIDER= openai\n```\n\n## Testing your API invocations\n`openai` offers a [dry-run mode](https://en.wikipedia.org/wiki/Dry_run) that allows you to test command composition without incurring any costs. 
Give it a try!\n\n```bash\nopenai -n hello, world!\n\n# This would be the same:\nopenai -n 'hello, world!'\n```\n\n<details>\n<summary>Command and output</summary>\n\n```\n$ openai -n hello, world!\nDry-run mode, no API calls made.\n\nRequest URL:\n--------------\nhttps://api.openai.com/v1/chat/completions\n\nAuthorization:\n--------------\nBearer sk-cfw****NYre\n\nPayload:\n--------------\n{\n  \"model\": \"gpt-4o\",\n  \"stream\": true,\n  \"messages\": [\n    {\n      \"role\": \"user\",\n      \"content\": \"hello, world!\"\n    }\n  ]\n}\n```\n</details>\nWith full pipelining support, you can achieve the same functionality using alternative methods:\n\n```bash\necho 'hello, world!' | openai -n\n```\n\n<details>\n<summary>For Bash gurus</summary>\n\nThis would be the same:\n```bash\necho 'hello, world!' >hello.txt\nopenai -n <hello.txt\n```\n\nEven this one:\n```bash\nopenai -n <<<'hello, world!'\n```\n\nAnd this:\n```bash\nopenai -n < <(echo 'hello, world!')\n```\n</details>\n\nNow that you understand the basic usage, try getting a real answer from OpenAI:\n\n```bash\nopenai hello, world!\n```\n\n<details>\n<summary>Command and output</summary>\n\n```\n$ openai hello, world!\nHello there! How can I assist you today?\n```\n\n</details>\n\n## Topics\nA topic starts with an `@` sign, so `openai @translate Hello, world!` calls the topic `translate`.\n\nTo create a new topic, such as `translate`, with an initial prompt (stored internally with the system role):\n```bash\nopenai @translate 'Translate, no other words: Chinese -> English, Non-Chinese -> Chinese'\n```\n\nThen you can use the topic:\n```bash\nopenai @translate 'Hello, world!'\n```\nYou should get an answer like `你好，世界！`.\n\nAgain, to see what happens, use the dry-run mode by adding `-n`. 
You will see the payload that would be sent:\n```json\n{\n  \"model\": \"gpt-4o\",\n  \"stream\": true,\n  \"messages\": [\n    {\n      \"role\": \"system\",\n      \"content\": \"Translate, no other words: Chinese -> English, Non-Chinese -> Chinese\"\n    },\n    {\n      \"role\": \"user\",\n      \"content\": \"Hello, world!\"\n    }\n  ]\n}\n```\n\n## Chatting\nAll the use cases above are standalone queries, not conversations. To chat with OpenAI, use `-c` and prepend `@topic` to continue that topic's conversation (a topic is required for chatting).\n\nPlease note that chat requests will quickly consume tokens, leading to increased costs.\n\n## Advanced\nTo be continued.\n\n## Manual\nTo be continued.\n\n# LICENSE\nThis project uses the MIT license. Please see [LICENSE](https://github.com/janlay/openai-cli/blob/master/LICENSE) for more information.\n"
  },
  {
    "path": "openai",
    "content": "#!/usr/bin/env bash\n#\n# OpenAI CLI v3.0\n#   Created by @janlay\n#\n\nset -eo pipefail\n\ndeclare _config_dir=\"${OPENAI_DATA_DIR:-$XDG_CONFIG_HOME}\"\nOPENAI_DATA_DIR=\"${_config_dir:-$HOME/.openai}\"\n\n# Read config file?\n[ -e \"$OPENAI_DATA_DIR\"/config ] && . \"$OPENAI_DATA_DIR\"/config\n\n# openai-cli accepts various exported environment variables:\n#   OPENAI_API_KEY\t\t: OpenAI's API key\n#   OPENAI_API_ENDPOINT\t: Custom API endpoint\n#\tOPENAI_API_MODEL\t: Which model to use\n#   OPENAI_DATA_DIR\t\t: Directory to store data\nOPENAI_API_ENDPOINT=\"${OPENAI_API_ENDPOINT:-https://api.openai.com/v1}\"\nOPENAI_API_KEY=\"${OPENAI_API_KEY:-}\"\nOPENAI_API_MODEL=\"${OPENAI_API_MODEL:-gpt-4o}\"\n\n# defaults\nreadonly _app_name=openai _app_version=3.0\nreadonly provider=\"${OPENAI_COMPATIBLE_PROVIDER:-OPENAI}\"\nreadonly default_api_name=chat/completions default_model=\"$OPENAI_API_MODEL\" default_topic=General\n\ndeclare -i chat_mode=0 dry_run=0\ndeclare tokens_file=\"$OPENAI_DATA_DIR/total_tokens\" api_name=$default_api_name topic=$default_topic\ndeclare dump_file dumped_file data_file temp_dir rest_args prompt_file prompt\n\ntrap cleanup EXIT\ncleanup() {\n\tif [ -d \"$temp_dir\" ]; then\n\t\trm -rf -- \"$temp_dir\"\n\tfi\n}\n\nget_env() {\n\tlocal env_key=\"${provider}_${1}\"\n\techo \"${!env_key}\"\n}\n\nraise_error() {\n\t[ \"$2\" = 0 ] || echo -n \"$_app_name: \" >&2\n\techo -e \"$1\" >&2\n\texit \"${2:-1}\"\n}\n\nload_conversation() {\n\t[ -f \"$data_file\" ] && cat \"$data_file\" || echo '{}'\n}\n\nupdate_conversation() {\n\tlocal entry=\"$2\" data\n\t[[ $entry == \\{* ]] || entry=$(jq -n --arg content \"$entry\" '{$content}')\n\tentry=$(jq --arg role \"$1\" '. 
+= {$role}' <<<\"$entry\")\n\tdata=$(load_conversation)\n\tjq --argjson item \"$entry\" '.messages += [$item]' <<<\"$data\" >\"$data_file\"\n}\n\nsave_tokens() {\n\tlocal data num=\"$1\"\n\t[ -f \"$data_file\" ] && {\n\t\tdata=$(load_conversation)\n\t\tjq --argjson tokens \"$num\" '.total_tokens += $tokens' <<<\"$data\" >\"$data_file\"\n\t}\n\n\tdata=0\n\t[ -f \"$tokens_file\" ] && data=$(cat \"$tokens_file\")\n\techo \"$((data + num))\" >\"$tokens_file\"\n}\n\nread_prompt() {\n\t# read prompt from args first\n\tlocal word accepts_props=1 props='{}' real_prompt\n\tif [ ${#rest_args[@]} -gt 0 ]; then\n\t\t# scan the arguments word by word, and extract words starting with '+'\n\t\tfor word in \"${rest_args[@]}\"; do\n\t\t\tif [ $accepts_props -eq 1 ] && [ \"${word:0:1}\" = '+' ]; then\n\t\t\t\tword=\"${word:1}\"\n\t\t\t\t# determine value's type for jq\n\t\t\t\tlocal options=(--arg key \"${word%%=*}\") value=\"${word#*=}\" arg=--arg\n\t\t\t\t[[ $value =~ ^[+-]?\\ ?[0-9.]+$ || $value = true || $value = false || $value == [\\[\\{]* ]] && arg=--argjson\n\t\t\t\toptions+=(\"$arg\" value \"$value\")\n\t\t\t\tprops=$(jq \"${options[@]}\" '.[$key] = $value' <<<\"$props\")\n\t\t\telse\n\t\t\t\treal_prompt=\"$real_prompt $word\"\n\t\t\t\taccepts_props=0\n\t\t\tfi\n\t\tdone\n\t\t[ -n \"$props\" ] && echo \"$props\" >\"$temp_dir/props\"\n\tfi\n\n\tif [ -n \"$real_prompt\" ]; then\n\t\t[ -n \"$prompt_file\" ] && echo \"* Prompt file \\`$prompt_file' will be ignored as the prompt parameters are provided.\" >&2\n\t\techo -n \"${real_prompt:1}\" >\"$temp_dir/prompt\"\n\telif [ -n \"$prompt_file\" ]; then\n\t\t[ -f \"$prompt_file\" ] || raise_error \"File not found: $prompt_file.\" 3\n\t\t[[ -s $prompt_file ]] || raise_error \"Empty file: $prompt_file.\" 4\n\tfi\n}\n\nopenai_models() {\n\tcall_api | jq\n}\n\nopenai_moderations() {\n\tlocal prop_file=\"$temp_dir/props\" payload=\"{\\\"model\\\": \\\"text-moderation-latest\\\"}\"\n\n\t# overwrite default properties with 
user's\n\tread_prompt\n\t[ -f \"$prop_file\" ] && payload=$(jq -n --argjson payload \"$payload\" '$payload | . += input' <\"$prop_file\")\n\n\t# set user's input in the payload\n\tlocal payload_file=\"$temp_dir/payload\" input_file=\"$temp_dir/prompt\"\n\t[ -f \"$input_file\" ] || input_file=\"${prompt_file:-/dev/stdin}\"\n\tjq -Rs -cn --argjson payload \"$payload\" '$payload | .input = input' \"$input_file\" >\"$payload_file\"\n\n\tcall_api | jq -c '.results[]'\n}\n\nopenai_images_generations() {\n\tlocal prop_file=\"$temp_dir/props\" payload=\"{\\\"n\\\": 1, \\\"size\\\": \\\"1024x1024\\\"}\"\n\n\t# overwrite default properties with user's\n\tread_prompt\n\t[ -f \"$prop_file\" ] && payload=$(jq -n --argjson payload \"$payload\" '$payload | . += input | . += {response_format: \"url\"}' <\"$prop_file\")\n\n\t# set user's prompt in the payload\n\tlocal payload_file=\"$temp_dir/payload\" input_file=\"$temp_dir/prompt\"\n\t[ -f \"$input_file\" ] || input_file=\"${prompt_file:-/dev/stdin}\"\n\tjq -Rs -cn --argjson payload \"$payload\" '$payload | .prompt = input' \"$input_file\" >\"$payload_file\"\n\n\tcall_api | jq -r '.data[].url'\n}\n\nopenai_embeddings() {\n\tlocal prop_file=\"$temp_dir/props\" payload=\"{\\\"model\\\": \\\"text-embedding-ada-002\\\"}\"\n\n\t# overwrite default properties with user's\n\tread_prompt\n\t[ -f \"$prop_file\" ] && payload=$(jq -n --argjson payload \"$payload\" '$payload | . 
+= input' <\"$prop_file\")\n\n\t# set user's input in the payload\n\tlocal payload_file=\"$temp_dir/payload\" input_file=\"$temp_dir/prompt\"\n\t[ -f \"$input_file\" ] || input_file=\"${prompt_file:-/dev/stdin}\"\n\tjq -Rs -cn --argjson payload \"$payload\" '$payload | .input = input' \"$input_file\" >\"$payload_file\"\n\n\tcall_api | jq -c\n}\n\nopenai_chat_completions() {\n\tlocal streaming=0\n\tif [ -n \"$dumped_file\" ]; then\n\t\t# only succeeds when the dumped file is not streamed\n\t\tjq -er '.choices[0].message.content' <\"$dumped_file\" 2>/dev/null && return\n\t\tstreaming=1\n\telse\n\t\tlocal prop_file=\"$temp_dir/props\" model payload\n\t\tmodel=$(get_env API_MODEL)\n\t\tpayload=\"{\\\"model\\\": \\\"$model\\\", \\\"stream\\\": true}\"\n\n\t\t# overwrite default properties with user's\n\t\tread_prompt\n\t\t[ -f \"$prop_file\" ] && {\n\t\t\tpayload=$(jq -n --argjson payload \"$payload\" '$payload | . += input | . += {messages: []}' <\"$prop_file\")\n\t\t}\n\n\t\tlocal data\n\t\tdata=$(load_conversation | jq .messages)\n\t\t[ \"$topic\" != \"$default_topic\" ] && {\n\t\t\tif [ $chat_mode -eq 1 ]; then\n\t\t\t\t# load all messages for chat mode\n\t\t\t\tpayload=$(jq --argjson messages \"$data\" 'setpath([\"messages\"]; $messages)' <<<\"$payload\")\n\t\t\telse\n\t\t\t\t# load only first message for non-chat mode\n\t\t\t\tpayload=$(jq --argjson messages \"$data\" 'setpath([\"messages\"]; [$messages[0]])' <<<\"$payload\")\n\t\t\tfi\n\t\t}\n\t\t# append user's prompt to messages\n\t\tlocal payload_file=\"$temp_dir/payload\" input_file=\"$temp_dir/prompt\"\n\t\t[ -f \"$input_file\" ] || input_file=\"${prompt_file:-/dev/stdin}\"\n\t\tjq -Rs -cn --argjson payload \"$payload\" '$payload | .messages += [{role: \"user\", content: input}]' \"$input_file\" >\"$payload_file\"\n\n\t\tstreaming=$(jq -e 'if .stream then 1 else 0 end' <\"$payload_file\")\n\n\t\t# check o1's parameters\n\t\tjq -e 'select(.model | test(\"^o\\\\d\")) and (.temperature or .top_p or 
.presence_penalty or .frequency_penalty or .logprobs or .top_logprobs or .logit_bias)' <\"$payload_file\" &>/dev/null && raise_error 'One or more unsupported API parameters used for model o1. See https://platform.openai.com/docs/guides/reasoning#limitations for more details.' 5\n\tfi\n\n\tlocal chunk reason text role fn_name\n\tif [ $streaming -eq 1 ]; then\n\t\t# read via process substitution so $role/$text set in the loop survive it (a pipeline would run the loop in a subshell)\n\t\twhile read -r chunk; do\n\t\t\t[ -z \"$chunk\" ] && continue\n\t\t\tchunk=$(cut -d: -f2- <<<\"$chunk\" | jq '.choices[0]')\n\t\t\treason=$(jq -r '.finish_reason // empty' <<<\"$chunk\")\n\t\t\t[[ $reason = stop || $reason = function_call ]] && break\n\t\t\t[ -n \"$reason\" ] && raise_error \"API error: $reason\" 10\n\n\t\t\t# get role and function info from the first chunk\n\t\t\t[ -z \"$role\" ] && {\n\t\t\t\trole=$(jq -r '.delta.role // empty' <<<\"$chunk\")\n\t\t\t\tfn_name=$(jq -r '.delta.function_call.name // empty' <<<\"$chunk\")\n\t\t\t}\n\n\t\t\t# workaround: https://stackoverflow.com/a/15184414\n\t\t\tchunk=$(\n\t\t\t\tjq -r '.delta | .function_call.arguments // .content // empty' <<<\"$chunk\"\n\t\t\t\tprintf x\n\t\t\t)\n\t\t\t# ensure chunk is not empty\n\t\t\t[ ${#chunk} -ge 2 ] || continue\n\n\t\t\tchunk=\"${chunk:0:${#chunk}-2}\"\n\t\t\ttext=\"$text$chunk\"\n\t\t\techo -n \"$chunk\"\n\t\tdone < <(call_api)\n\t\t[ \"$dry_run\" -eq 0 ] && echo\n\telse\n\t\ttext=$(call_api | jq -er '.choices[0].message.content')\n\t\techo \"$text\"\n\tfi\n\n\t# append response to topic file for chat mode\n\tif [ \"$chat_mode\" -eq 1 ] && [ \"$dry_run\" -eq 0 ]; then\n\t\t[ -n \"$fn_name\" ] && text=$(jq -n --arg name \"$fn_name\" --argjson arguments \"${text:-\\{\\}}\" '{function_call: {$name, $arguments}}')\n\n\t\t# the prompt was written into the payload, not $prompt; recover it for the log\n\t\t[ -n \"$payload_file\" ] && prompt=$(jq -r '.messages[-1].content' <\"$payload_file\")\n\t\tupdate_conversation user \"$prompt\"\n\t\tupdate_conversation \"${role:-assistant}\" \"$text\"\n\tfi\n}\n\n# shellcheck disable=SC2120\ncall_api() {\n\t# return dumped file if specified\n\t[ -n \"$dumped_file\" ] && {\n\t\tcat \"$dumped_file\"\n\t\treturn\n\t}\n\n\tlocal url=\"$(get_env API_ENDPOINT)/$api_name\" auth=\"Bearer $(get_env 
API_KEY)\"\n\n\t# dry-run mode\n\t[ \"$dry_run\" -eq 1 ] && {\n\t\techo \"Dry-run mode, no API calls made.\"\n\t\techo -e \"\\nRequest URL:\\n--------------\\n$url\"\n\t\techo -en \"\\nAuthorization:\\n--------------\\n\"\n\t\tsed -E 's/(sk-.{3}).{41}/\\1****/' <<<\"$auth\"\n\t\t[ -n \"$payload_file\" ] && {\n\t\t\techo -e \"\\nPayload:\\n--------------\"\n\t\t\tjq <\"$payload_file\"\n\t\t}\n\t\texit 0\n\t} >&2\n\n\tlocal args=(\"$url\" --no-buffer -fsSL -H 'Content-Type: application/json' -H \"Authorization: $auth\")\n\t[ -n \"$payload_file\" ] && args+=(-d @\"$payload_file\")\n\t[ $# -gt 0 ] && args+=(\"$@\")\n\n\t[ -n \"$dump_file\" ] && args+=(-o \"$dump_file\")\n\tcurl \"${args[@]}\"\n\t[ -z \"$dump_file\" ] || exit 0\n}\n\ncreate_topic() {\n\tupdate_conversation system \"${rest_args[*]}\"\n\traise_error \"Topic '$topic' created with initial prompt '${rest_args[*]}'\" 0\n}\n\nusage() {\n\traise_error \"OpenAI Client v$_app_version\n\nSYNOPSIS\n  ABSTRACT\n    $_app_name [-n] [-a api_name] [-o dump_file] [INPUT...]\n    $_app_name -i dumped_file\n\n  DEFAULT_API ($default_api_name)\n    $_app_name [-c] [+property=value...] [@TOPIC] [-f file | prompt ...]\n\tprompt\n\t\t\tPrompt string for the request to OpenAI API. This can consist of multiple\n\t\t\targuments, which are considered to be separated by spaces.\n\t-f file\n\t\t\tA file to be read as prompt. If file is - or neither this parameter nor a prompt\n\t\t\tis specified, read from standard input.\n\t-c\n\t\t\tContinues the topic, the default topic is '$default_topic'.\n\tproperty=value\n\t\t\tOverwrites default properties in payload. 
Prepend a plus sign '+' before property=value.\n\t\t\te.g.: +model=gpt-3.5-turbo-0301, +stream=false\n\n\tTOPICS\n\t\t\tA topic starts with an at sign '@'.\n\t\t\tTo create a new topic, use \\`$_app_name @new_topic initial prompt'\n\n  OTHER APIS\n    $_app_name -a models\n\nGLOBAL OPTIONS\n  Global options apply to all APIs.\n  -a name\n\t\tAPI name, default is '$default_api_name'.\n  -n\n\t\tDry-run mode, don't call API.\n  -o filename\n\t\tDumps API response to a file and exits.\n  -i filename\n\t\tUses specified dumped file instead of requesting API.\n\t\tAny request-related arguments and user input are ignored.\n\n  --\n  \t\tStops option parsing; remaining arguments are treated as the prompt, useful when an unquoted prompt contains '-'.\n\n  -h\n  \t\tShows this help\" 0\n}\n\nparse() {\n\tlocal opt\n\twhile getopts 'a:f:i:o:cnh' opt; do\n\t\tcase \"$opt\" in\n\t\tc)\n\t\t\tchat_mode=1\n\t\t\t;;\n\t\ta)\n\t\t\tapi_name=\"$OPTARG\"\n\t\t\t;;\n\t\tf)\n\t\t\tprompt_file=\"$OPTARG\"\n\t\t\t[ \"$prompt_file\" = - ] && prompt_file=\n\t\t\t;;\n\t\tn)\n\t\t\tdry_run=1\n\t\t\t;;\n\t\ti)\n\t\t\tdumped_file=\"$OPTARG\"\n\t\t\t;;\n\t\to)\n\t\t\tdump_file=\"$OPTARG\"\n\t\t\t;;\n\t\th | ?)\n\t\t\tusage\n\t\t\t;;\n\t\tesac\n\tdone\n\tshift \"$((OPTIND - 1))\"\n\n\t# extract the leading topic\n\t[[ \"$1\" =~ ^@ ]] && {\n\t\ttopic=\"${1#@}\"\n\t\tshift\n\t}\n\n\t[ $chat_mode -eq 0 ] || {\n\t\t[[ -n $topic && $topic != \"$default_topic\" ]] || raise_error 'Topic is required for chatting.' 2\n\t}\n\n\trest_args=(\"$@\")\n}\n\ncheck_bin() {\n\tcommand -v \"$1\" >/dev/null || raise_error \"$1 not found. Use a package manager (Homebrew, apt-get, etc.) 
to install it.\" \"${2:-1}\"\n}\n\nmain() {\n\t# show compatible provider if applicable\n\tif [ \"${SUPPRESS_PROVIDER_TIPS:-0}\" -eq 0 ] && [ -n \"$OPENAI_COMPATIBLE_PROVIDER\" ]; then\n\t\techo \"OpenAI compatible provider: $OPENAI_COMPATIBLE_PROVIDER\" >&2\n\tfi\n\n\t# check required config\n\tlocal key\n\tfor key in API_ENDPOINT API_KEY API_MODEL; do\n\t\t[ -z \"$(get_env \"$key\")\" ] && raise_error \"Missing environment variable: ${provider}_$key.\" 1\n\tdone\n\n\tparse \"$@\"\n\tcheck_bin jq 10\n\n\tmkdir -p \"$OPENAI_DATA_DIR\"\n\tdata_file=\"$OPENAI_DATA_DIR/$topic.json\"\n\ttemp_dir=$(mktemp -d)\n\n\tif [[ $topic == \"$default_topic\" || -f \"$data_file\" ]]; then\n\t\tlocal fn=\"openai_${api_name//\\//_}\"\n\t\t[ \"$(type -t \"$fn\")\" = function ] || raise_error \"API '$api_name' is not available.\" 12\n\t\t\"$fn\"\n\telse\n\t\t[ ${#rest_args[@]} -gt 0 ] || raise_error \"Prompt for new topic is required\" 13\n\t\tcreate_topic\n\tfi\n}\n\nmain \"$@\"\n"
  }
]