Full Code of peterdemin/openai-cli for AI

Repository: peterdemin/openai-cli
Branch: main
Commit: 458d44b9f213
Files: 26
Total size: 58.9 KB

Directory structure:
openai-cli/

├── .editorconfig
├── .github/
│   └── workflows/
│       └── main.yml
├── .gitignore
├── .pre-commit-config.yaml
├── .readthedocs.yaml
├── CHANGES.rst
├── LICENSE
├── Makefile
├── README.rst
├── pyproject.toml
├── pytest.ini
├── requirements/
│   ├── base.in
│   ├── base.txt
│   ├── ci.in
│   ├── ci.txt
│   ├── local.in
│   └── local.txt
├── setup.cfg
├── src/
│   └── openai_cli/
│       ├── __init__.py
│       ├── cli.py
│       ├── client.py
│       ├── config.py
│       ├── test_cli.py
│       ├── test_client.py
│       └── test_config.py
└── tox.ini

================================================
FILE CONTENTS
================================================

================================================
FILE: .editorconfig
================================================
root = true

[*]
charset = utf-8
indent_style = space
indent_size = 4
insert_final_newline = true
end_of_line = lf

[*.{yml,yaml}]
indent_size = 2


================================================
FILE: .github/workflows/main.yml
================================================
name: tests

on: [push, pull_request]

env:
  # Environment variables to enable color output (jaraco/skeleton#66):
  # Request colored output from CLI tools supporting it. Different tools
  # interpret the value differently. For some, just being set is sufficient.
  # For others, it must be a non-zero integer. For yet others, being set
  # to a non-empty value is sufficient.
  FORCE_COLOR: -106
  # MyPy's color enforcement (must be a non-zero number)
  MYPY_FORCE_COLOR: -42
  # Recognized by the `py` package, dependency of `pytest` (must be "1")
  PY_COLORS: 1
  # Make tox-wrapped tools see color requests
  TOX_TESTENV_PASSENV: >-
    FORCE_COLOR
    MYPY_FORCE_COLOR
    NO_COLOR
    PY_COLORS
    PYTEST_THEME
    PYTEST_THEME_MODE

  # Suppress noisy pip warnings
  PIP_DISABLE_PIP_VERSION_CHECK: 'true'
  PIP_NO_PYTHON_VERSION_WARNING: 'true'
  PIP_NO_WARN_SCRIPT_LOCATION: 'true'

  # Disable the spinner, noise in GHA; TODO(webknjaz): Fix this upstream
  # Must be "1".
  TOX_PARALLEL_NO_SPINNER: 1


jobs:
  test:
    strategy:
      matrix:
        python:
        - "3.12"
        dev:
        - -dev
        platform:
        - ubuntu-latest
        - macos-latest
        - windows-latest
        include:
        - python: "3.12"
          platform: ubuntu-latest
    runs-on: ${{ matrix.platform }}
    steps:
      - uses: actions/checkout@v3
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python }}${{ matrix.dev }}
      - name: Install dependencies
        run: |
          python -m pip install -r requirements/local.txt
      - name: Run tests
        run:
          pytest --cov=openai_cli src/openai_cli

  check:  # This job does nothing and is only used for branch protection
    if: always()

    needs:
    - test

    runs-on: ubuntu-latest

    steps:
    - name: Decide whether the needed jobs succeeded or failed
      uses: re-actors/alls-green@release/v1
      with:
        jobs: ${{ toJSON(needs) }}

  release:
    needs:
    - check
    if: github.event_name == 'push' && contains(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: 3.12
      - name: Install tox
        run: |
          python -m pip install tox
      - name: Release
        run: tox -e release
        env:
          TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


================================================
FILE: .gitignore
================================================
Session.vim
*.swp
*.egg-info/
__pycache__/
/build/
/dist/
.coverage


================================================
FILE: .pre-commit-config.yaml
================================================
repos:
  - repo: local
    hooks:
    - id: isort
      name: isort
      entry: isort
      language: system
      require_serial: true
      types_or: [cython, pyi, python]
      args: ['--filter-files']

    - id: black
      name: black
      entry: black
      language: system
      require_serial: true
      types_or: [python, pyi]


================================================
FILE: .readthedocs.yaml
================================================
version: 2
python:
  install:
  - path: .
    extra_requirements:
      - docs

# workaround for readthedocs/readthedocs.org#9623
build:
  # workaround for readthedocs/readthedocs.org#9635
  os: ubuntu-22.04
  tools:
    python: "3"


================================================
FILE: CHANGES.rst
================================================
1.0.0 - Sep 4, 2024
-------------------

* Complete rewrite by `Tevfik Kadan`_, `PR #13`_.

.. _`Tevfik Kadan`: https://github.com/ktevfik
.. _`PR #13`: https://github.com/peterdemin/openai-cli/pull/13

0.0.3 - Feb 15, 2023
--------------------

* Allow overriding API URL through ``OPENAI_API_URL`` environment variable.
  Thanks to `Stefano d'Antonio`_, `Issue #5`_, `PR #6`_.

.. _`Stefano d'Antonio`: https://github.com/UnoSD
.. _`Issue #5`: https://github.com/peterdemin/openai-cli/issues/5
.. _`PR #6`: https://github.com/peterdemin/openai-cli/pull/6

0.0.2 - Dec 29, 2022
--------------------

* Add command line option -m/--model. Thanks to `Alex Zhuang`_, `PR #1`_.

.. _`Alex Zhuang`: https://github.com/azhx
.. _`PR #1`: https://github.com/peterdemin/openai-cli/pull/1

0.0.1 - Dec 3, 2022
-------------------

* Initial release


================================================
FILE: LICENSE
================================================
Copyright Jason R. Coombs

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.


================================================
FILE: Makefile
================================================
.DEFAULT_GOAL := help

PEX := openai
PROJ := openai_cli
PROJ_ROOT := src/$(PROJ)

define PRINT_HELP_PYSCRIPT
import re, sys

for line in sys.stdin:
	match = re.match(r'^([a-zA-Z_-]+):.*?## (.*)$$', line)
	if match:
		target, help = match.groups()
		print("%-10s %s" % (target, help))
endef
export PRINT_HELP_PYSCRIPT

.PHONY: virtual_env_set
virtual_env_set:
ifndef VIRTUAL_ENV
	$(error VIRTUAL_ENV not set)
endif

.PHONY: help
help:
	@python -c "$$PRINT_HELP_PYSCRIPT" < $(MAKEFILE_LIST)

.PHONY: clean
clean: ## remove build artifacts
	rm -rf build/ \
	       dist/ \
	       .eggs/
	rm -f $(PEX)
	find . -name '.eggs' -type d -exec rm -rf {} +
	find . -name '*.egg-info' -exec rm -rf {} +
	find . -name '*.egg' -exec rm -f {} +
	find . -name '*.pyc' -exec rm -f {} +
	find . -name '*.pyo' -exec rm -f {} +
	find . -name '__pycache__' -exec rm -fr {} +

.PHONY: dist
dist: clean ## builds source and wheel package
	python -m build -n

.PHONY: release
release: dist ## package and upload a release
	twine upload dist/*

$(PEX) pex:
	pex . -e $(PROJ).cli:cli --validate-entry-point -o $(PEX)

.PHONY: lint
lint: ## check style with pylint
	pylint $(PROJ_ROOT)
	mypy $(PROJ_ROOT)

.PHONY: test
test: ## run test suite
	pytest --cov=$(PROJ) $(PROJ_ROOT)

.PHONY: install
install: ## install the package with dev dependencies
	pip install -e . -r requirements/local.txt

.PHONY: sync
sync: ## completely sync installed packages with dev dependencies
	pip-sync requirements/local.txt
	pip install -e .

.PHONY: lock
lock: ## lock versions of third-party dependencies
	pip-compile-multi \
		--allow-unsafe \
		--use-cache \
		--no-upgrade

.PHONY: upgrade
upgrade: ## upgrade versions of third-party dependencies
	pip-compile-multi \
		--allow-unsafe \
		--use-cache

.PHONY: fmt
fmt: ## Reformat all Python files
	isort $(PROJ_ROOT)
	black $(PROJ_ROOT)

## Skeleton initialization
.PHONY: init
init: virtual_env_set install
	pre-commit install

.PHONY: rename
rename:
	@python -c "$$RENAME_PROJECT_PYSCRIPT"
	$(MAKE) init
	git add -A .
	git commit -am "Initialize the project"
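
The ``help`` target above pulls its text from the ``## `` comments on target lines via the embedded ``PRINT_HELP_PYSCRIPT``. The same parsing can be reproduced standalone (a minimal sketch; the sample lines are copied from this Makefile, and the ``$$`` in the Makefile unescapes to a single ``$`` in the regex):

```python
import re

# Sample target lines as they appear in the Makefile
makefile_lines = [
    "clean: ## remove build artifacts",
    "dist: clean ## builds source and wheel package",
    ".PHONY: clean",  # leading '.' and no '## ' comment, so it is skipped
]

for line in makefile_lines:
    # Same pattern as PRINT_HELP_PYSCRIPT ('$$' in make becomes '$' here)
    match = re.match(r"^([a-zA-Z_-]+):.*?## (.*)$", line)
    if match:
        target, help_text = match.groups()
        print("%-10s %s" % (target, help_text))
```

Running it prints the two annotated targets with their descriptions, mirroring what ``make help`` shows.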


================================================
FILE: README.rst
================================================
OpenAI command-line client
==========================

Installation
------------

To install the OpenAI CLI in a Python virtual environment, run::

    pip install openai-cli

Token authentication
--------------------

The OpenAI API requires an authentication token, which can be obtained at:
https://beta.openai.com/account/api-keys

Provide the token to the CLI either through a command-line argument (``-t/--token <TOKEN>``)
or through an environment variable (``OPENAI_API_KEY``).
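
The precedence between the two sources can be sketched as follows (an illustrative snippet only, assuming the explicit flag wins over the environment variable; ``resolve_token`` is a hypothetical helper name, not part of the package):

```python
import os

def resolve_token(cli_token: str = "") -> str:
    # Hypothetical sketch (not the package's actual code): an explicit
    # -t/--token value wins; otherwise fall back to OPENAI_API_KEY.
    return cli_token or os.environ.get("OPENAI_API_KEY", "")

os.environ["OPENAI_API_KEY"] = "token-from-env"
print(resolve_token())                   # token-from-env
print(resolve_token("token-from-flag"))  # token-from-flag
```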

Usage
-----

Currently, only the text completion API is supported.

Example usage::

    $ echo "Are cats faster than dogs?" | openai complete -
    It depends on the breed of the cat and dog. Generally,
    cats are faster than dogs over short distances,
    but dogs are better at sustained running.

Interactive mode is also supported (press Ctrl+C to exit)::

    $ openai repl
    Prompt: Can generative AI replace humans?

    No, generative AI cannot replace humans.
    While generative AI can be used to automate certain tasks,
    it cannot replace the creativity, intuition, and problem-solving
    skills that humans possess.
    Generative AI can be used to supplement human efforts,
    but it cannot replace them.

    Prompt: ^C

Run without arguments to get a short help message::

    $ openai
    Usage: openai [OPTIONS] COMMAND [ARGS]...

    Options:
      --help  Show this message and exit.

    Commands:
      complete  Return OpenAI completion for a prompt from SOURCE.
      repl      Start interactive shell session for OpenAI completion API.

Build a standalone binary using pex and move it into your PATH::

    $ make openai && mv openai ~/bin/
    $ openai repl
    Prompt:

Alternative API URL
-------------------

The CLI calls https://api.openai.com/v1/completions by default.
To override the endpoint URL, set the ``OPENAI_API_URL`` environment variable.
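
The lookup can be sketched like this (an illustrative snippet, assuming a plain environment-variable override; ``resolve_api_url`` is a hypothetical name, not the package's API):

```python
import os

DEFAULT_API_URL = "https://api.openai.com/v1/completions"

def resolve_api_url() -> str:
    # Hypothetical sketch: OPENAI_API_URL, when set, replaces the default.
    return os.environ.get("OPENAI_API_URL", DEFAULT_API_URL)

os.environ.pop("OPENAI_API_URL", None)  # ensure the variable is unset
print(resolve_api_url())  # https://api.openai.com/v1/completions
os.environ["OPENAI_API_URL"] = "https://example.test/v1/completions"
print(resolve_api_url())  # https://example.test/v1/completions
```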

Example usage
-------------

Here's an example usage scenario, where we first create a Python module
with a Fibonacci function implementation, and then generate a unit test for it:

.. code:: bash

    $ mkdir examples
    $ touch examples/__init__.py
    $ echo "Write Python function to calculate Fibonacci numbers" | openai complete - | black - > examples/fib.py
    $ (echo 'Write unit tests for this Python module named "fib":\n'; cat examples/fib.py) | openai complete - | black - > examples/test_fib.py
    $ pytest -v examples/test_fib.py
    ============================== test session starts ==============================

    examples/test_fib.py::TestFibonacci::test_eighth_fibonacci_number PASSED                                 [ 10%]
    examples/test_fib.py::TestFibonacci::test_fifth_fibonacci_number PASSED                                  [ 20%]
    examples/test_fib.py::TestFibonacci::test_first_fibonacci_number PASSED                                  [ 30%]
    examples/test_fib.py::TestFibonacci::test_fourth_fibonacci_number PASSED                                 [ 40%]
    examples/test_fib.py::TestFibonacci::test_negative_input PASSED                                          [ 50%]
    examples/test_fib.py::TestFibonacci::test_ninth_fibonacci_number PASSED                                  [ 60%]
    examples/test_fib.py::TestFibonacci::test_second_fibonacci_number PASSED                                 [ 70%]
    examples/test_fib.py::TestFibonacci::test_seventh_fibonacci_number PASSED                                [ 80%]
    examples/test_fib.py::TestFibonacci::test_sixth_fibonacci_number PASSED                                  [ 90%]
    examples/test_fib.py::TestFibonacci::test_third_fibonacci_number PASSED                                  [100%]

    =============================== 10 passed in 0.02s ==============================

    $ cat examples/fib.py

.. code:: python

    def Fibonacci(n):
        if n < 0:
            print("Incorrect input")
        # First Fibonacci number is 0
        elif n == 1:
            return 0
        # Second Fibonacci number is 1
        elif n == 2:
            return 1
        else:
            return Fibonacci(n - 1) + Fibonacci(n - 2)

.. code:: bash

    $ cat examples/test_fib.py

.. code:: python

    import unittest
    from .fib import Fibonacci


    class TestFibonacci(unittest.TestCase):
        def test_negative_input(self):
            self.assertEqual(Fibonacci(-1), None)

        def test_first_fibonacci_number(self):
            self.assertEqual(Fibonacci(1), 0)

        def test_second_fibonacci_number(self):
            self.assertEqual(Fibonacci(2), 1)

        def test_third_fibonacci_number(self):
            self.assertEqual(Fibonacci(3), 1)

        def test_fourth_fibonacci_number(self):
            self.assertEqual(Fibonacci(4), 2)

        def test_fifth_fibonacci_number(self):
            self.assertEqual(Fibonacci(5), 3)

        def test_sixth_fibonacci_number(self):
            self.assertEqual(Fibonacci(6), 5)

        def test_seventh_fibonacci_number(self):
            self.assertEqual(Fibonacci(7), 8)

        def test_eighth_fibonacci_number(self):
            self.assertEqual(Fibonacci(8), 13)

        def test_ninth_fibonacci_number(self):
            self.assertEqual(Fibonacci(9), 21)


    if __name__ == "__main__":
        unittest.main()

.. code:: bash

    $ (echo "Add type annotations for this Python code"; cat examples/fib.py) | openai complete - | black - | tee tmp && mv tmp examples/fib.py

.. code:: python

    def Fibonacci(n: int) -> int:
        if n < 0:
            print("Incorrect input")
        # First Fibonacci number is 0
        elif n == 1:
            return 0
        # Second Fibonacci number is 1
        elif n == 2:
            return 1
        else:
            return Fibonacci(n - 1) + Fibonacci(n - 2)

.. code:: bash

    $ mypy examples/fib.py
    examples/fib.py:1: error: Missing return statement  [return]
    Found 1 error in 1 file (checked 1 source file)

.. code:: bash

    $ (echo "Fix mypy warnings in this Python code"; cat examples/fib.py; mypy examples/fib.py) | openai complete - | black - | tee tmp && mv tmp examples/fib.py

.. code:: python

    def Fibonacci(n: int) -> int:
        if n < 0:
            print("Incorrect input")
        # First Fibonacci number is 0
        elif n == 1:
            return 0
        # Second Fibonacci number is 1
        elif n == 2:
            return 1
        else:
            return Fibonacci(n - 1) + Fibonacci(n - 2)
        return None  # Added return statement

.. code:: bash

    $ mypy examples/fib.py
    examples/fib.py:12: error: Incompatible return value type (got "None", expected "int")  [return-value]
    Found 1 error in 1 file (checked 1 source file)

.. code:: bash

    $ (echo "Fix mypy warnings in this Python code"; cat examples/fib.py; mypy examples/fib.py) | openai complete - | black - | tee tmp && mv tmp examples/fib.py

.. code:: python

    def Fibonacci(n: int) -> int:
        if n < 0:
            print("Incorrect input")
        # First Fibonacci number is 0
        elif n == 1:
            return 0
        # Second Fibonacci number is 1
        elif n == 2:
            return 1
        else:
            return Fibonacci(n - 1) + Fibonacci(n - 2)
        return 0  # Changed return statement to return 0

.. code:: bash

    $ mypy examples/fib.py
    Success: no issues found in 1 source file

.. code:: bash

    $ (echo "Rewrite these tests to use pytest.parametrized"; cat examples/test_fib.py) | openai complete - | black - | tee tmp && mv tmp examples/test_fib.py

.. code:: python

    import pytest
    from .fib import Fibonacci


    @pytest.mark.parametrize(
        "n, expected",
        [(1, 0), (2, 1), (3, 1), (4, 2), (5, 3), (6, 5), (7, 8), (8, 13), (9, 21), (10, 34)],
    )
    def test_fibonacci(n, expected):
        assert Fibonacci(n) == expected


================================================
FILE: pyproject.toml
================================================
[build-system]
requires = ["setuptools>=56"]
build-backend = "setuptools.build_meta"

[tool.black]
line-length = 100
target-version = ['py312']

[tool.isort]
profile = 'black'
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
use_parentheses = true
line_length = 100

[tool.flake8]
max-line-length = 100
max-complexity = 10

[tool.pylint.main]
# Analyse import fallback blocks. This can be used to support both Python 2 and 3
# compatible code, which means that the block might have code that exists only in
# one or another interpreter, leading to false positives when analysed.
# analyse-fallback-blocks =

# Always return a 0 (non-error) status code, even if lint errors are found. This
# is primarily useful in continuous integration scripts.
# exit-zero =

# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loaded into the active Python interpreter and may
# run arbitrary code.
extension-pkg-allow-list = "lxml"

# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loaded into the active Python interpreter and may
# run arbitrary code. (This is an alternative name to extension-pkg-allow-list
# for backward compatibility.)
# extension-pkg-whitelist =

# Return non-zero exit code if any of these messages/categories are detected,
# even if score is above --fail-under value. Syntax same as enable. Messages
# specified are enabled, while categories only check already-enabled messages.
# fail-on =

# Specify a score threshold under which the program will exit with error.
fail-under = 10

# Interpret the stdin as a python script, whose filename needs to be passed as
# the module_or_package argument.
# from-stdin =

# Files or directories to be skipped. They should be base names, not paths.
ignore = ["CVS"]

# Add files or directories matching the regular expressions patterns to the
# ignore-list. The regex matches against paths and can be in Posix or Windows
# format. Because '\' represents the directory delimiter on Windows systems, it
# can't be used as an escape character.
# ignore-paths =

# Files or directories matching the regular expression patterns are skipped. The
# regex matches against base names, not paths. The default value ignores Emacs
# file locks
ignore-patterns = ["^\\.#"]

# List of module names for which member attributes should not be checked (useful
# for modules/projects where namespaces are manipulated during runtime and thus
# existing member attributes cannot be deduced by static analysis). It supports
# qualified module names, as well as Unix pattern matching.
# ignored-modules =

# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
# init-hook =

# Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the
# number of processors available to use, and will cap the count on Windows to
# avoid hangs.
jobs = 1

# Control the amount of potential inferred values when inferring a single object.
# This can help the performance when dealing with large functions or complex,
# nested conditions.
limit-inference-results = 100

# List of plugins (as comma separated values of python module names) to load,
# usually to register additional checkers.
# load-plugins =

# Pickle collected data for later comparisons.
persistent = true

# Minimum Python version to use for version dependent checks. Will default to the
# version used to run pylint.
py-version = "3.8"

# Discover python modules and packages in the file system subtree.
# recursive =

# When enabled, pylint would attempt to guess common misconfiguration and emit
# user-friendly hints instead of false-positive error messages.
suggestion-mode = true

# Allow loading of arbitrary C extensions. Extensions are imported into the
# active Python interpreter and may run arbitrary code.
# unsafe-load-any-extension =

[tool.pylint.basic]
# Naming style matching correct argument names.
argument-naming-style = "snake_case"

# Regular expression matching correct argument names. Overrides argument-naming-
# style. If left empty, argument names will be checked with the set naming style.
# argument-rgx =

# Naming style matching correct attribute names.
attr-naming-style = "snake_case"

# Regular expression matching correct attribute names. Overrides attr-naming-
# style. If left empty, attribute names will be checked with the set naming
# style.
# attr-rgx =

# Bad variable names which should always be refused, separated by a comma.
bad-names = ["foo", "bar", "baz", "toto", "tutu", "tata"]

# Bad variable names regexes, separated by a comma. If names match any regex,
# they will always be refused
# bad-names-rgxs =

# Naming style matching correct class attribute names.
class-attribute-naming-style = "any"

# Regular expression matching correct class attribute names. Overrides class-
# attribute-naming-style. If left empty, class attribute names will be checked
# with the set naming style.
# class-attribute-rgx =

# Naming style matching correct class constant names.
class-const-naming-style = "UPPER_CASE"

# Regular expression matching correct class constant names. Overrides class-
# const-naming-style. If left empty, class constant names will be checked with
# the set naming style.
# class-const-rgx =

# Naming style matching correct class names.
class-naming-style = "PascalCase"

# Regular expression matching correct class names. Overrides class-naming-style.
# If left empty, class names will be checked with the set naming style.
# class-rgx =

# Naming style matching correct constant names.
const-naming-style = "UPPER_CASE"

# Regular expression matching correct constant names. Overrides const-naming-
# style. If left empty, constant names will be checked with the set naming style.
# const-rgx =

# Minimum line length for functions/classes that require docstrings, shorter ones
# are exempt.
docstring-min-length = -1

# Naming style matching correct function names.
function-naming-style = "snake_case"

# Regular expression matching correct function names. Overrides function-naming-
# style. If left empty, function names will be checked with the set naming style.
# function-rgx =

# Good variable names which should always be accepted, separated by a comma.
good-names = ["i", "j", "k", "ex", "Run", "_"]

# Good variable names regexes, separated by a comma. If names match any regex,
# they will always be accepted
# good-names-rgxs =

# Include a hint for the correct naming format with invalid-name.
# include-naming-hint =

# Naming style matching correct inline iteration names.
inlinevar-naming-style = "any"

# Regular expression matching correct inline iteration names. Overrides
# inlinevar-naming-style. If left empty, inline iteration names will be checked
# with the set naming style.
# inlinevar-rgx =

# Naming style matching correct method names.
method-naming-style = "snake_case"

# Regular expression matching correct method names. Overrides method-naming-
# style. If left empty, method names will be checked with the set naming style.
# method-rgx =

# Naming style matching correct module names.
module-naming-style = "snake_case"

# Regular expression matching correct module names. Overrides module-naming-
# style. If left empty, module names will be checked with the set naming style.
# module-rgx =

# Colon-delimited sets of names that determine each other's naming style when the
# name regexes allow several styles.
# name-group =

# Regular expression which should only match function or class names that do not
# require a docstring.
no-docstring-rgx = "^_"

# List of decorators that produce properties, such as abc.abstractproperty. Add
# to this list to register other decorators that produce valid properties. These
# decorators are taken into consideration only for invalid-name.
property-classes = ["abc.abstractproperty"]

# Regular expression matching correct type variable names. If left empty, type
# variable names will be checked with the set naming style.
# typevar-rgx =

# Naming style matching correct variable names.
variable-naming-style = "snake_case"

# Regular expression matching correct variable names. Overrides variable-naming-
# style. If left empty, variable names will be checked with the set naming style.
# variable-rgx =

[tool.pylint.classes]
# Warn about protected attribute access inside special methods
# check-protected-access-in-special-methods =

# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods = ["__init__", "__new__", "setUp", "__post_init__"]

# List of member names, which should be excluded from the protected access
# warning.
exclude-protected = ["_asdict", "_fields", "_replace", "_source", "_make"]

# List of valid names for the first argument in a class method.
valid-classmethod-first-arg = ["cls"]

# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg = ["cls"]

[tool.pylint.design]
# List of regular expressions of class ancestor names to ignore when counting
# public methods (see R0903)
# exclude-too-few-public-methods =

# List of qualified class names to ignore when counting class parents (see R0901)
# ignored-parents =

# Maximum number of arguments for function / method.
max-args = 5

# Maximum number of attributes for a class (see R0902).
max-attributes = 7

# Maximum number of boolean expressions in an if statement (see R0916).
max-bool-expr = 5

# Maximum number of branch for function / method body.
max-branches = 12

# Maximum number of locals for function / method body.
max-locals = 15

# Maximum number of parents for a class (see R0901).
max-parents = 7

# Maximum number of public methods for a class (see R0904).
max-public-methods = 20

# Maximum number of return / yield for function / method body.
max-returns = 6

# Maximum number of statements in function / method body.
max-statements = 50

# Minimum number of public methods for a class (see R0903).
min-public-methods = 2

[tool.pylint.exceptions]
# Exceptions that will emit a warning when caught.
overgeneral-exceptions = ["BaseException", "Exception"]

[tool.pylint.format]
# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
# expected-line-ending-format =

# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines = "^\\s*(# )?<?https?://\\S+>?$"

# Number of spaces of indent required inside a hanging or continued line.
indent-after-paren = 4

# String used as indentation unit. This is usually "    " (4 spaces) or "\t" (1
# tab).
indent-string = "    "

# Maximum number of characters on a single line.
max-line-length = 100

# Maximum number of lines in a module.
max-module-lines = 1000

# Allow the body of a class to be on the same line as the declaration if body
# contains single statement.
# single-line-class-stmt =

# Allow the body of an if to be on the same line as the test if there is no else.
# single-line-if-stmt =

[tool.pylint.imports]
# List of modules that can be imported at any level, not just the top level one.
# allow-any-import-level =

# Allow wildcard imports from modules that define __all__.
# allow-wildcard-with-all =

# Deprecated modules which should not be used, separated by a comma.
# deprecated-modules =

# Output a graph (.gv or any supported image format) of external dependencies to
# the given file (report RP0402 must not be disabled).
# ext-import-graph =

# Output a graph (.gv or any supported image format) of all (i.e. internal and
# external) dependencies to the given file (report RP0402 must not be disabled).
# import-graph =

# Output a graph (.gv or any supported image format) of internal dependencies to
# the given file (report RP0402 must not be disabled).
# int-import-graph =

# Force import order to recognize a module as part of the standard compatibility
# libraries.
# known-standard-library =

# Force import order to recognize a module as part of a third party library.
known-third-party = ["enchant"]

# Couples of modules and preferred modules, separated by a comma.
# preferred-modules =

[tool.pylint.logging]
# The type of string formatting that logging methods do. `old` means using %
# formatting, `new` is for `{}` formatting.
logging-format-style = "old"

# Logging modules to check that the string format arguments are in logging
# function parameter format.
logging-modules = ["logging"]

[tool.pylint."messages control"]
# Only show warnings with the listed confidence levels. Leave empty to show all.
# Valid levels: HIGH, CONTROL_FLOW, INFERENCE, INFERENCE_FAILURE, UNDEFINED.
confidence = ["HIGH", "CONTROL_FLOW", "INFERENCE", "INFERENCE_FAILURE", "UNDEFINED"]

# Disable the message, report, category or checker with the given id(s). You can
# either give multiple identifiers separated by comma (,) or put this option
# multiple times (only on the command line, not in the configuration file where
# it should appear only once). You can also use "--disable=all" to disable
# everything first and then re-enable specific checks. For example, if you want
# to run only the similarities checker, you can use "--disable=all
# --enable=similarities". If you want to run only the classes checker, but have
# no Warning level messages displayed, use "--disable=all --enable=classes
# --disable=W".
disable = ["raw-checker-failed", "bad-inline-option", "locally-disabled", "file-ignored", "suppressed-message", "useless-suppression", "deprecated-pragma", "use-symbolic-message-instead", "missing-function-docstring", "missing-module-docstring", "missing-class-docstring", "too-few-public-methods", "fixme", "duplicate-code"]

# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifiers separated by comma (,) or put this option
# multiple times (only on the command line, not in the configuration file where it
# should appear only once). See also the "--disable" option for examples.
enable = ["c-extension-no-member"]

[tool.pylint.method_args]
# List of qualified names (i.e., library.method) which require a timeout
# parameter e.g. 'requests.api.get,requests.api.post'
timeout-methods = ["requests.api.delete", "requests.api.get", "requests.api.head", "requests.api.options", "requests.api.patch", "requests.api.post", "requests.api.put", "requests.api.request"]

[tool.pylint.miscellaneous]
# List of note tags to take into consideration, separated by a comma.
notes = ["FIXME", "XXX", "TODO"]

# Regular expression of note tags to take in consideration.
# notes-rgx =

[tool.pylint.refactoring]
# Maximum number of nested blocks for function / method body
max-nested-blocks = 5

# Complete name of functions that never return. When checking for inconsistent-
# return-statements if a never returning function is called then it will be
# considered as an explicit return statement and no message will be printed.
never-returning-functions = ["sys.exit", "argparse.parse_error"]

[tool.pylint.reports]
# Python expression which should return a score less than or equal to 10. You
# have access to the variables 'fatal', 'error', 'warning', 'refactor',
# 'convention', and 'info' which contain the number of messages in each category,
# as well as 'statement' which is the total number of statements analyzed. This
# score is used by the global evaluation report (RP0004).
evaluation = "max(0, 0 if fatal else 10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10))"

# Template used to display messages. This is a python new-style format string
# used to format the message information. See doc for all details.
# msg-template =

# Set the output format. Available formats are text, parseable, colorized, json
# and msvs (visual studio). You can also give a reporter class, e.g.
# mypackage.mymodule.MyReporterClass.
# output-format =

# Tells whether to display a full report or only the messages.
# reports =

# Activate the evaluation score.
score = true

[tool.pylint.similarities]
# Comments are removed from the similarity computation
ignore-comments = true

# Docstrings are removed from the similarity computation
ignore-docstrings = true

# Imports are removed from the similarity computation
ignore-imports = true

# Signatures are removed from the similarity computation
ignore-signatures = true

# Minimum lines number of a similarity.
min-similarity-lines = 4

[tool.pylint.spelling]
# Limits count of emitted suggestions for spelling mistakes.
max-spelling-suggestions = 4

# Spelling dictionary name. Available dictionaries: none. To make it work,
# install the 'python-enchant' package.
# spelling-dict =

# List of comma separated words that should be considered directives if they
# appear at the beginning of a comment and should not be checked.
spelling-ignore-comment-directives = "fmt: on,fmt: off,noqa:,noqa,nosec,isort:skip,mypy:"

# List of comma separated words that should not be checked.
# spelling-ignore-words =

# A path to a file that contains the private dictionary; one word per line.
# spelling-private-dict-file =

# Tells whether to store unknown words to the private dictionary (see the
# --spelling-private-dict-file option) instead of raising a message.
# spelling-store-unknown-words =

[tool.pylint.string]
# This flag controls whether inconsistent-quotes generates a warning when the
# character used as a quote delimiter is used inconsistently within a module.
# check-quote-consistency =

# This flag controls whether the implicit-str-concat should generate a warning on
# implicit string concatenation in sequences defined over several lines.
# check-str-concat-over-line-jumps =

[tool.pylint.typecheck]
# List of decorators that produce context managers, such as
# contextlib.contextmanager. Add to this list to register other decorators that
# produce valid context managers.
contextmanager-decorators = ["contextlib.contextmanager"]

# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E1101 when accessed. Python regular
# expressions are accepted.
# generated-members =

# Tells whether to warn about missing members when the owner of the attribute is
# inferred to be None.
ignore-none = true

# This flag controls whether pylint should warn about no-member and similar
# checks whenever an opaque object is returned when inferring. The inference can
# return multiple potential results while evaluating a Python object, but some
# branches might not be evaluated, which results in partial inference. In that
# case, it might be useful to still emit no-member and other checks for the rest
# of the inferred objects.
ignore-on-opaque-inference = true

# List of symbolic message names to ignore for Mixin members.
ignored-checks-for-mixins = ["no-member", "not-async-context-manager", "not-context-manager", "attribute-defined-outside-init"]

# List of class names for which member attributes should not be checked (useful
# for classes with dynamically set attributes). This supports the use of
# qualified names.
ignored-classes = ["optparse.Values", "thread._local", "_thread._local", "argparse.Namespace"]

# Show a hint with possible names when a member name was not found. Hints are
# ranked by edit distance.
missing-member-hint = true

# The minimum edit distance a name should have in order to be considered a
# similar match for a missing member name.
missing-member-hint-distance = 1

# The total number of similar names that should be taken in consideration when
# showing a hint for a missing member.
missing-member-max-choices = 1

# Regex pattern to define which classes are considered mixins.
mixin-class-rgx = ".*[Mm]ixin"

# List of decorators that change the signature of a decorated function.
# signature-mutators =

[tool.pylint.variables]
# List of additional names supposed to be defined in builtins. Remember that you
# should avoid defining new builtins when possible.
# additional-builtins =

# Tells whether unused global variables should be treated as a violation.
allow-global-unused-variables = true

# List of names allowed to shadow builtins
# allowed-redefined-builtins =

# List of strings which can identify a callback function by name. A callback name
# must start or end with one of those strings.
callbacks = ["cb_", "_cb"]

# A regular expression matching the name of dummy variables (i.e. expected to not
# be used).
dummy-variables-rgx = "_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_"

# Argument names that match this expression will be ignored.
ignored-argument-names = "_.*|^ignored_|^unused_"

# Tells whether we should check for unused import in __init__ files.
# init-import =

# List of qualified module names which can have objects that can redefine
# builtins.
redefining-builtins-modules = ["six.moves", "past.builtins", "future.builtins", "builtins", "io"]
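
The `evaluation` expression in `[tool.pylint.reports]` above can be sanity-checked with a small Python sketch (`pylint_score` and the message counts below are hypothetical, for illustration only):

```python
def pylint_score(fatal, error, warning, refactor, convention, statement):
    # Mirrors the `evaluation` expression: a fatal message forces the score to
    # the 0 branch; otherwise errors are weighted 5x against total statements.
    return max(
        0,
        0
        if fatal
        else 10.0 - (float(5 * error + warning + refactor + convention) / statement) * 10,
    )

# A clean run over 100 statements scores a full 10.0.
print(pylint_score(0, 0, 0, 0, 0, 100))  # 10.0
# One error (weighted 5x) plus two warnings costs 0.7 points (~9.3).
print(pylint_score(0, 1, 2, 0, 0, 100))
```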




================================================
FILE: pytest.ini
================================================
[pytest]
norecursedirs=dist build .tox .eggs
addopts=--doctest-modules
doctest_optionflags=ALLOW_UNICODE ELLIPSIS
filterwarnings=
	# Suppress deprecation warning in flake8
	ignore:SelectableGroups dict interface is deprecated::flake8

	# shopkeep/pytest-black#55
	ignore:<class 'pytest_black.BlackItem'> is not using a cooperative constructor:pytest.PytestDeprecationWarning
	ignore:The \(fspath. py.path.local\) argument to BlackItem is deprecated.:pytest.PytestDeprecationWarning
	ignore:BlackItem is an Item subclass and should not be a collector:pytest.PytestWarning

	# tholo/pytest-flake8#83
	ignore:<class 'pytest_flake8.Flake8Item'> is not using a cooperative constructor:pytest.PytestDeprecationWarning
	ignore:The \(fspath. py.path.local\) argument to Flake8Item is deprecated.:pytest.PytestDeprecationWarning
	ignore:Flake8Item is an Item subclass and should not be a collector:pytest.PytestWarning


================================================
FILE: requirements/base.in
================================================
requests


================================================
FILE: requirements/base.txt
================================================
# SHA1:54faec366d11efdac0f9d2da560e273f92288c2a
#
# This file is autogenerated by pip-compile-multi
# To update, run:
#
#    pip-compile-multi
#
certifi==2025.1.31
    # via requests
charset-normalizer==3.4.1
    # via requests
idna==3.10
    # via requests
requests==2.32.3
    # via -r requirements/base.in
urllib3==2.3.0
    # via requests


================================================
FILE: requirements/ci.in
================================================
-r base.in

pytest
pytest-checkdocs
pytest-cov

flake8
Flake8-pyproject

mypy
pylint
types-requests


================================================
FILE: requirements/ci.txt
================================================
# SHA1:c9e539596bc2fbcffc9073aa78e4a7d1bb185e34
#
# This file is autogenerated by pip-compile-multi
# To update, run:
#
#    pip-compile-multi
#
-r base.txt
alabaster==1.0.0
    # via sphinx
astroid==3.3.8
    # via pylint
babel==2.17.0
    # via sphinx
build[virtualenv]==1.2.2.post1
    # via jaraco-packaging
coverage[toml]==7.6.12
    # via pytest-cov
dill==0.3.9
    # via pylint
distlib==0.3.9
    # via virtualenv
docutils==0.21.2
    # via
    #   pytest-checkdocs
    #   sphinx
domdf-python-tools==3.9.0
    # via jaraco-packaging
filelock==3.17.0
    # via virtualenv
flake8==7.1.1
    # via
    #   -r requirements/ci.in
    #   flake8-pyproject
flake8-pyproject==1.2.3
    # via -r requirements/ci.in
imagesize==1.4.1
    # via sphinx
iniconfig==2.0.0
    # via pytest
isort==6.0.0
    # via pylint
jaraco-context==6.0.1
    # via jaraco-packaging
jaraco-packaging==10.2.3
    # via pytest-checkdocs
jinja2==3.1.5
    # via sphinx
markupsafe==3.0.2
    # via jinja2
mccabe==0.7.0
    # via
    #   flake8
    #   pylint
mypy==1.15.0
    # via -r requirements/ci.in
mypy-extensions==1.0.0
    # via mypy
natsort==8.4.0
    # via domdf-python-tools
packaging==24.2
    # via
    #   build
    #   pytest
    #   sphinx
platformdirs==4.3.6
    # via
    #   pylint
    #   virtualenv
pluggy==1.5.0
    # via pytest
pycodestyle==2.12.1
    # via flake8
pyflakes==3.2.0
    # via flake8
pygments==2.19.1
    # via sphinx
pylint==3.3.4
    # via -r requirements/ci.in
pyproject-hooks==1.2.0
    # via build
pytest==8.3.4
    # via
    #   -r requirements/ci.in
    #   pytest-cov
pytest-checkdocs==2.13.0
    # via -r requirements/ci.in
pytest-cov==6.0.0
    # via -r requirements/ci.in
snowballstemmer==2.2.0
    # via sphinx
sphinx==8.1.3
    # via jaraco-packaging
sphinxcontrib-applehelp==2.0.0
    # via sphinx
sphinxcontrib-devhelp==2.0.0
    # via sphinx
sphinxcontrib-htmlhelp==2.1.0
    # via sphinx
sphinxcontrib-jsmath==1.0.1
    # via sphinx
sphinxcontrib-qthelp==2.0.0
    # via sphinx
sphinxcontrib-serializinghtml==2.0.0
    # via sphinx
tomlkit==0.13.2
    # via pylint
types-requests==2.32.0.20241016
    # via -r requirements/ci.in
typing-extensions==4.12.2
    # via
    #   domdf-python-tools
    #   mypy
virtualenv==20.29.2
    # via build


================================================
FILE: requirements/local.in
================================================
-r ci.in

# fmt
isort
black
pre-commit

# packages
pip-compile-multi
# twine
pex

# debugging
ipython
ipdb


================================================
FILE: requirements/local.txt
================================================
# SHA1:344cade73658f99aab1dd0104334b3d1061922ab
#
# This file is autogenerated by pip-compile-multi
# To update, run:
#
#    pip-compile-multi
#
-r ci.txt
asttokens==3.0.0
    # via stack-data
black==25.1.0
    # via -r requirements/local.in
cfgv==3.4.0
    # via pre-commit
click==8.1.8
    # via
    #   black
    #   pip-compile-multi
    #   pip-tools
decorator==5.1.1
    # via
    #   ipdb
    #   ipython
executing==2.2.0
    # via stack-data
identify==2.6.7
    # via pre-commit
ipdb==0.13.13
    # via -r requirements/local.in
ipython==8.32.0
    # via
    #   -r requirements/local.in
    #   ipdb
jedi==0.19.2
    # via ipython
matplotlib-inline==0.1.7
    # via ipython
nodeenv==1.9.1
    # via pre-commit
parso==0.8.4
    # via jedi
pathspec==0.12.1
    # via black
pex==2.33.0
    # via -r requirements/local.in
pexpect==4.9.0
    # via ipython
pip-compile-multi==2.7.1
    # via -r requirements/local.in
pip-tools==7.4.1
    # via pip-compile-multi
pre-commit==4.1.0
    # via -r requirements/local.in
prompt-toolkit==3.0.50
    # via ipython
ptyprocess==0.7.0
    # via pexpect
pure-eval==0.2.3
    # via stack-data
pyyaml==6.0.2
    # via pre-commit
stack-data==0.6.3
    # via ipython
toposort==1.10
    # via pip-compile-multi
traitlets==5.14.3
    # via
    #   ipython
    #   matplotlib-inline
wcwidth==0.2.13
    # via prompt-toolkit
wheel==0.45.1
    # via pip-tools

# The following packages are considered to be unsafe in a requirements file:
pip==25.0.1
    # via pip-tools
setuptools==75.8.0
    # via pip-tools


================================================
FILE: setup.cfg
================================================
[metadata]
name = openai-cli
version = 1.0.1
author = Peter Demin
author_email = peterdemin@gmail.com
description = Command-line client for OpenAI API
long_description = file:README.rst
url = https://github.com/peterdemin/openai-cli
classifiers =
    Development Status :: 4 - Beta
    Intended Audience :: Developers
    License :: OSI Approved :: MIT License
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3 :: Only

[options]
packages = find:
include_package_data = true
python_requires = >=3.12
install_requires =
    requests
    click

[options.packages.find]
where=src

[options.entry_points]
console_scripts =
    openai = openai_cli.cli:cli

[bdist_wheel]
universal = 1


================================================
FILE: src/openai_cli/__init__.py
================================================


================================================
FILE: src/openai_cli/cli.py
================================================
import io
from typing import Optional

import click

from openai_cli.client import OpenAIError, generate_response
from openai_cli.config import DEFAULT_MODEL, MAX_TOKENS, TEMPERATURE, set_openai_api_key


@click.group()
@click.option(
    "-m", "--model", default=DEFAULT_MODEL, help=f"OpenAI model option. (default: {DEFAULT_MODEL})"
)
@click.option(
    "-k",
    "--max-tokens",
    type=int,
    default=MAX_TOKENS,
    help=f"Maximum number of tokens in the response. (default: {MAX_TOKENS})",
)
@click.option(
    "-p",
    "--temperature",
    type=float,
    default=TEMPERATURE,
    help=f"Temperature for response generation. (default: {TEMPERATURE})",
)
@click.option("-t", "--token", help="OpenAI API token")
@click.pass_context
def cli(ctx, model: str, max_tokens: int, temperature: float, token: Optional[str]):
    """CLI for interacting with OpenAI's completion API."""
    ctx.ensure_object(dict)
    ctx.obj["model"] = model
    ctx.obj["max_tokens"] = max_tokens
    ctx.obj["temperature"] = temperature
    ctx.obj["conversation_history"] = []
    if token:
        set_openai_api_key(token)


@cli.command()
@click.argument("source", type=click.File("rt", encoding="utf-8"))
@click.pass_context
def complete(ctx, source: io.TextIOWrapper) -> None:
    """Return OpenAI completion for a prompt from SOURCE."""
    prompt = source.read()
    try:
        result = generate_response(
            prompt,
            conversation_history=ctx.obj["conversation_history"],
            model=ctx.obj["model"],
            max_tokens=ctx.obj["max_tokens"],
            temperature=ctx.obj["temperature"],
        )
        click.echo(result)
        ctx.obj["conversation_history"].extend(
            [{"role": "user", "content": prompt}, {"role": "assistant", "content": result}]
        )
    except OpenAIError as e:
        click.echo(f"An error occurred: {str(e)}", err=True)


@cli.command()
@click.pass_context
def repl(ctx) -> None:
    """Start interactive shell session for OpenAI completion API."""
    click.echo(f"Interactive shell started. Using model: {ctx.obj['model']}")
    click.echo(f"Max tokens: {ctx.obj['max_tokens']}, Temperature: {ctx.obj['temperature']}")
    click.echo("Type 'exit' or use Ctrl-D to exit.")

    while True:
        try:
            prompt = click.prompt("Prompt", type=str)
            if prompt.lower() == "exit":
                break
            result = generate_response(
                prompt,
                conversation_history=ctx.obj["conversation_history"],
                model=ctx.obj["model"],
                max_tokens=ctx.obj["max_tokens"],
                temperature=ctx.obj["temperature"],
            )
            click.echo(f"\nResponse:\n{result}\n")
            ctx.obj["conversation_history"].extend(
                [{"role": "user", "content": prompt}, {"role": "assistant", "content": result}]
            )
        except click.exceptions.Abort:
            break
        except OpenAIError as e:
            click.echo(f"An error occurred: {str(e)}", err=True)

    click.echo("Interactive shell ended.")


if __name__ == "__main__":
    cli()
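
The conversation-history bookkeeping shared by `complete` and `repl` can be exercised without click or the network; `fake_generate` below is a hypothetical stand-in for `generate_response`:

```python
from typing import Dict, List

def fake_generate(prompt: str, conversation_history: List[Dict[str, str]]) -> str:
    # Hypothetical stand-in: echoes the prompt so the loop runs offline.
    return f"echo: {prompt}"

conversation_history: List[Dict[str, str]] = []
for prompt in ["first", "second"]:
    result = fake_generate(prompt, conversation_history)
    # Same bookkeeping as the repl loop: record both turns after each exchange.
    conversation_history.extend(
        [{"role": "user", "content": prompt}, {"role": "assistant", "content": result}]
    )

print(len(conversation_history))  # 4: two user/assistant pairs
```

Because the same list object is passed back in on each call, later calls see the full history without any copying.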


================================================
FILE: src/openai_cli/client.py
================================================
import json
from typing import Any, Dict, List, Optional

import requests

from .config import (
    DEFAULT_MODEL,
    MAX_TOKENS,
    SYSTEM_MESSAGE,
    TEMPERATURE,
    get_openai_api_key,
    get_openai_api_url,
)


class OpenAIError(Exception):
    pass


def initialize_session() -> requests.Session:
    """
    Initialize a requests Session with the API key from the environment.

    Returns:
        requests.Session: Initialized session with API key in headers.

    Raises:
        OpenAIError: If no API key is found in the environment.
    """
    api_key = get_openai_api_key()
    if not api_key:
        raise OpenAIError("The API key must be set in the OPENAI_API_KEY environment variable")

    session = requests.Session()
    session.headers.update(
        {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
    )
    return session


def generate_response(
    prompt: str,
    conversation_history: Optional[List[Dict[str, str]]] = None,
    model: str = DEFAULT_MODEL,
    max_tokens: int = MAX_TOKENS,
    temperature: float = TEMPERATURE,
    system_message: str = SYSTEM_MESSAGE,
) -> str:
    """
    Generates a response from a given prompt using a specified model.

    Args:
        prompt (str): The prompt to generate a response for.
        conversation_history (Optional[List[Dict[str, str]]]): Previous conversation messages.
        model (str): The model to use for generating the response.
        max_tokens (int): The maximum number of tokens in the response.
        temperature (float): Controls randomness in the response.
        system_message (str): The system message to set the context.

    Returns:
        str: The generated response.

    Raises:
        OpenAIError: If there's an error with the OpenAI API call.
    """
    session = initialize_session()

    messages = [{"role": "system", "content": system_message}]
    if conversation_history:
        messages.extend(conversation_history)
    messages.append({"role": "user", "content": prompt})

    payload = {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

    try:
        response = session.post(get_openai_api_url(), data=json.dumps(payload))
        response.raise_for_status()
        return _extract_content(response.json())
    except requests.RequestException as e:
        raise OpenAIError(f"Error generating response: {str(e)}") from e


def _extract_content(response: Dict[str, Any]) -> str:
    """
    Extracts the content from the API response.

    Args:
        response (Dict[str, Any]): The API response object.

    Returns:
        str: The extracted content.

    Raises:
        ValueError: If the response format is unexpected.
    """
    try:
        return response["choices"][0]["message"]["content"].strip()
    except (KeyError, IndexError) as e:
        raise ValueError(f"Unexpected response format: {str(e)}") from e
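
The response shape `_extract_content` expects can be illustrated stand-alone; `extract_content` and `sample` below are a hypothetical sketch mirroring the same access path, not a live API response:

```python
from typing import Any, Dict

def extract_content(response: Dict[str, Any]) -> str:
    # Same access path as _extract_content: first choice's message content.
    try:
        return response["choices"][0]["message"]["content"].strip()
    except (KeyError, IndexError) as exc:
        raise ValueError(f"Unexpected response format: {exc}") from exc

# Minimal chat-completions-style body (sample data, not a real API reply).
sample = {"choices": [{"message": {"role": "assistant", "content": "  Hello!  "}}]}
print(extract_content(sample))  # Hello!
```

An empty `choices` list raises `ValueError` via the `IndexError` branch.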


================================================
FILE: src/openai_cli/config.py
================================================
import os

DEFAULT_MODEL = "gpt-4o-mini"
MAX_TOKENS = 500
TEMPERATURE = 0.23
SYSTEM_MESSAGE = "You are a helpful assistant."
DEFAULT_API_BASE_URL = "https://api.openai.com/v1/chat/completions"


def get_openai_api_key() -> str:
    """
    Retrieves the OpenAI API key from the environment.

    Returns:
        str: The OpenAI API key, or an empty string if not set.
    """
    return os.getenv("OPENAI_API_KEY", "")


def set_openai_api_key(api_key: str) -> None:
    """
    Sets the OpenAI API key in the environment.

    Args:
        api_key (str): The API key to set.
    """
    os.environ["OPENAI_API_KEY"] = api_key


def get_openai_api_url() -> str:
    """
    Retrieves the OpenAI API URL from the environment.

    Returns:
        str: The OpenAI API URL, or the default URL if not set.
    """
    return os.getenv("OPENAI_API_URL") or DEFAULT_API_BASE_URL


def set_openai_api_url(api_url: str) -> None:
    """
    Sets the OpenAI API URL in the environment.

    Args:
        api_url (str): The API URL to set.
    """
    os.environ["OPENAI_API_URL"] = api_url
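
`get_openai_api_url` uses `or` rather than `os.getenv`'s default argument, so an empty `OPENAI_API_URL` also falls back to the default. A stand-alone sketch of that precedence (`get_api_url` is a hypothetical re-implementation):

```python
import os

DEFAULT_API_BASE_URL = "https://api.openai.com/v1/chat/completions"

def get_api_url() -> str:
    # `or` treats an empty string as unset, unlike os.getenv("...", default).
    return os.getenv("OPENAI_API_URL") or DEFAULT_API_BASE_URL

os.environ.pop("OPENAI_API_URL", None)
print(get_api_url())  # the default URL (variable unset)
os.environ["OPENAI_API_URL"] = ""
print(get_api_url())  # still the default: empty string is falsy
os.environ["OPENAI_API_URL"] = "https://custom.api/v1"
print(get_api_url())  # the custom URL
```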


================================================
FILE: src/openai_cli/test_cli.py
================================================
import unittest
from unittest.mock import patch

from click.testing import CliRunner

from openai_cli.cli import cli
from openai_cli.config import DEFAULT_MODEL, MAX_TOKENS, TEMPERATURE


@patch("openai_cli.client.get_openai_api_url", return_value="http://mock-api-url")
@patch("openai_cli.client.requests.Session", autospec=True)
class TestCLI(unittest.TestCase):
    def setUp(self):
        self.runner = CliRunner()

    @patch("openai_cli.cli.generate_response")
    def test_complete_command(self, mock_generate, mock_session, mock_url):
        mock_generate.return_value = "Mocked response"
        result = self.runner.invoke(cli, ["complete", "-"], input="Test prompt")
        self.assertEqual(result.exit_code, 0)
        self.assertIn("Mocked response", result.output)
        mock_generate.assert_called_once_with(
            "Test prompt",
            conversation_history=[
                {"role": "user", "content": "Test prompt"},
                {"role": "assistant", "content": "Mocked response"},
            ],
            model=DEFAULT_MODEL,
            max_tokens=MAX_TOKENS,
            temperature=TEMPERATURE,
        )

    @patch("openai_cli.cli.generate_response")
    def test_repl_command(self, mock_generate, mock_session, mock_url):
        mock_generate.return_value = "Mocked response"
        result = self.runner.invoke(cli, ["repl"], input="Test prompt\nexit\n")
        self.assertEqual(result.exit_code, 0)
        self.assertIn("Mocked response", result.output)
        self.assertIn("Interactive shell ended.", result.output)
        mock_generate.assert_called_once_with(
            "Test prompt",
            conversation_history=[
                {"role": "user", "content": "Test prompt"},
                {"role": "assistant", "content": "Mocked response"},
            ],
            model=DEFAULT_MODEL,
            max_tokens=MAX_TOKENS,
            temperature=TEMPERATURE,
        )

    @patch("openai_cli.cli.generate_response")
    def test_model_option(self, mock_generate, mock_session, mock_url):
        mock_generate.return_value = "Mocked response"
        result = self.runner.invoke(
            cli, ["-m", "gpt-3.5-turbo", "complete", "-"], input="Test prompt"
        )
        self.assertEqual(result.exit_code, 0)
        mock_generate.assert_called_once_with(
            "Test prompt",
            conversation_history=[
                {"role": "user", "content": "Test prompt"},
                {"role": "assistant", "content": "Mocked response"},
            ],
            model="gpt-3.5-turbo",
            max_tokens=MAX_TOKENS,
            temperature=TEMPERATURE,
        )

    @patch("openai_cli.cli.generate_response")
    def test_max_tokens_option(self, mock_generate, mock_session, mock_url):
        mock_generate.return_value = "Mocked response"
        result = self.runner.invoke(cli, ["-k", "100", "complete", "-"], input="Test prompt")
        self.assertEqual(result.exit_code, 0)
        mock_generate.assert_called_once_with(
            "Test prompt",
            conversation_history=[
                {"role": "user", "content": "Test prompt"},
                {"role": "assistant", "content": "Mocked response"},
            ],
            model=DEFAULT_MODEL,
            max_tokens=100,
            temperature=TEMPERATURE,
        )

    @patch("openai_cli.cli.generate_response")
    def test_temperature_option(self, mock_generate, mock_session, mock_url):
        mock_generate.return_value = "Mocked response"
        result = self.runner.invoke(cli, ["-p", "0.8", "complete", "-"], input="Test prompt")
        self.assertEqual(result.exit_code, 0)
        mock_generate.assert_called_once_with(
            "Test prompt",
            conversation_history=[
                {"role": "user", "content": "Test prompt"},
                {"role": "assistant", "content": "Mocked response"},
            ],
            model=DEFAULT_MODEL,
            max_tokens=MAX_TOKENS,
            temperature=0.8,
        )

    @patch("openai_cli.cli.set_openai_api_key")
    @patch("openai_cli.cli.generate_response")
    def test_token_option(self, mock_generate, mock_set_key, mock_session, mock_url):
        mock_generate.return_value = "Mocked response"
        result = self.runner.invoke(cli, ["-t", "test_token", "complete", "-"], input="Test prompt")
        self.assertEqual(result.exit_code, 0)
        mock_set_key.assert_called_once_with("test_token")
        mock_generate.assert_called_once_with(
            "Test prompt",
            conversation_history=[
                {"role": "user", "content": "Test prompt"},
                {"role": "assistant", "content": "Mocked response"},
            ],
            model=DEFAULT_MODEL,
            max_tokens=MAX_TOKENS,
            temperature=TEMPERATURE,
        )
        self.assertIn("Mocked response", result.output)


if __name__ == "__main__":
    unittest.main()


================================================
FILE: src/openai_cli/test_client.py
================================================
import unittest
from unittest.mock import MagicMock, PropertyMock, patch

import requests

from openai_cli.client import OpenAIError, generate_response, initialize_session
from openai_cli.config import DEFAULT_API_BASE_URL


class TestClient(unittest.TestCase):

    @patch("openai_cli.client.requests.Session.post")
    @patch("openai_cli.client.get_openai_api_url", return_value=DEFAULT_API_BASE_URL)
    @patch("openai_cli.client.get_openai_api_key", return_value="test_api_key")
    @patch(
        "openai_cli.client.requests.Session", autospec=True
    )  # Mocking the Session itself to prevent any real network interaction
    def test_generate_response_success(self, mock_session_cls, mock_get_key, mock_get_url, mock_post):
        mock_response = MagicMock()
        mock_response.json.return_value = {"choices": [{"message": {"content": "Mocked response"}}]}
        mock_response.status_code = 200
        mock_post.return_value = mock_response

        mock_session = mock_session_cls.return_value
        mock_session.post = mock_post

        type(mock_session).headers = PropertyMock(return_value={})

        response = generate_response("Test prompt")
        self.assertEqual(response, "Mocked response")
        mock_post.assert_called_once_with(
            DEFAULT_API_BASE_URL,
            data=unittest.mock.ANY
        )

    @patch("openai_cli.client.requests.Session.post")
    @patch("openai_cli.client.get_openai_api_url", return_value=DEFAULT_API_BASE_URL)
    @patch("openai_cli.client.get_openai_api_key", return_value="test_api_key")
    @patch("openai_cli.client.requests.Session", autospec=True)
    def test_generate_response_error(self, mock_session_cls, mock_get_key, mock_get_url, mock_post):
        mock_post.side_effect = requests.RequestException("API Error")

        mock_session = mock_session_cls.return_value
        mock_session.post = mock_post

        type(mock_session).headers = PropertyMock(return_value={})

        with self.assertRaises(OpenAIError):
            generate_response("Test prompt")

    @patch("openai_cli.client.get_openai_api_url", return_value="https://custom.api/v1")
    @patch("openai_cli.client.requests.Session.post")
    @patch("openai_cli.client.get_openai_api_key", return_value="test_api_key")
    @patch("openai_cli.client.requests.Session", autospec=True)
    def test_generate_response_custom_url(self, mock_session_cls, mock_get_key, mock_post, mock_get_url):
        mock_response = MagicMock()
        mock_response.json.return_value = {"choices": [{"message": {"content": "Mocked response"}}]}
        mock_response.status_code = 200
        mock_post.return_value = mock_response

        mock_session = mock_session_cls.return_value
        mock_session.post = mock_post

        type(mock_session).headers = PropertyMock(return_value={})

        response = generate_response("Test prompt")
        self.assertEqual(response, "Mocked response")
        mock_post.assert_called_once_with(
            "https://custom.api/v1",
            data=unittest.mock.ANY
        )

    @patch("openai_cli.client.get_openai_api_key")
    @patch("openai_cli.client.requests.Session", autospec=True)
    def test_initialize_session_success(self, mock_session_cls, mock_get_key):
        mock_get_key.return_value = "test_api_key"

        mock_session = mock_session_cls.return_value
        mock_session.headers = {}

        session = initialize_session()
        mock_session_cls.assert_called_once()
        self.assertEqual(session.headers["Authorization"], "Bearer test_api_key")

    @patch("openai_cli.client.get_openai_api_key")
    def test_initialize_session_no_key(self, mock_get_key):
        mock_get_key.return_value = ""

        with self.assertRaises(OpenAIError):
            initialize_session()


if __name__ == "__main__":
    unittest.main()


================================================
FILE: src/openai_cli/test_config.py
================================================
import unittest
from unittest.mock import patch

from openai_cli.config import (
    DEFAULT_API_BASE_URL,
    get_openai_api_key,
    get_openai_api_url,
    set_openai_api_key,
    set_openai_api_url,
)


class TestConfig(unittest.TestCase):
    @patch("os.getenv")
    def test_get_openai_api_key_set(self, mock_getenv):
        mock_getenv.return_value = "test_api_key"
        self.assertEqual(get_openai_api_key(), "test_api_key")

    @patch("os.getenv")
    def test_get_openai_api_key_not_set(self, mock_getenv):
        mock_getenv.return_value = ""
        self.assertEqual(get_openai_api_key(), "")

    @patch("os.environ")
    def test_set_openai_api_key(self, mock_environ):
        set_openai_api_key("new_api_key")
        mock_environ.__setitem__.assert_called_once_with("OPENAI_API_KEY", "new_api_key")

    @patch("os.getenv")
    def test_get_openai_api_url_set(self, mock_getenv):
        custom_url = "https://custom.openai.api/v1"
        mock_getenv.return_value = custom_url
        self.assertEqual(get_openai_api_url(), custom_url)

    @patch("os.getenv")
    def test_get_openai_api_url_not_set(self, mock_getenv):
        mock_getenv.return_value = None
        self.assertEqual(get_openai_api_url(), DEFAULT_API_BASE_URL)

    @patch("os.environ")
    def test_set_openai_api_url(self, mock_environ):
        custom_url = "https://custom.openai.api/v1"
        set_openai_api_url(custom_url)
        mock_environ.__setitem__.assert_called_once_with("OPENAI_API_URL", custom_url)


if __name__ == "__main__":
    unittest.main()


================================================
FILE: tox.ini
================================================
[tox]
envlist = python
minversion = 3.12
tox_pip_extensions_ext_venv_update = true
toxworkdir={env:TOX_WORK_DIR:.tox}

[testenv]
deps =
commands =
	pytest {posargs}
usedevelop = True
extras = testing
allowlist_externals =
    pytest
setenv =
    OPENAI_API_KEY='<openai-api-key>'

[testenv:docs]
extras =
	docs
	testing
changedir = docs
commands =
	python -m sphinx -W --keep-going . {toxinidir}/build/html

[testenv:release]
skip_install = True
deps =
	build
	twine>=3
passenv =
	TWINE_PASSWORD
	GITHUB_TOKEN
setenv =
	TWINE_USERNAME = {env:TWINE_USERNAME:__token__}
commands =
	python -c "import shutil; shutil.rmtree('dist', ignore_errors=True)"
	python -m build
	python -m twine upload dist/*
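
Since tox.ini is plain INI, its structure can be inspected with Python's standard configparser; a small self-contained sanity check (the ini content is inlined and trimmed here for illustration):

```python
import configparser

# Trimmed copy of the tox.ini above, inlined so the snippet is self-contained.
TOX_INI = """\
[tox]
envlist = python
minversion = 3.12

[testenv]
usedevelop = True
extras = testing

[testenv:docs]
changedir = docs

[testenv:release]
skip_install = True
"""

parser = configparser.ConfigParser()
parser.read_string(TOX_INI)
print(parser.sections())  # sections appear in file order
print(parser.get("tox", "minversion"))
```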
SYMBOL INDEX (32 symbols across 6 files)

FILE: src/openai_cli/cli.py
  function cli (line 30) | def cli(ctx, model: str, max_tokens: int, temperature: float, token: Opt...
  function complete (line 44) | def complete(ctx, source: io.TextIOWrapper) -> None:
  function repl (line 65) | def repl(ctx) -> None:

FILE: src/openai_cli/client.py
  class OpenAIError (line 16) | class OpenAIError(Exception):
  function initialize_session (line 20) | def initialize_session() -> requests.Session:
  function generate_response (line 41) | def generate_response(
  function _extract_content (line 88) | def _extract_content(response: Dict[str, Any]) -> str:

FILE: src/openai_cli/config.py
  function get_openai_api_key (line 10) | def get_openai_api_key() -> str:
  function set_openai_api_key (line 20) | def set_openai_api_key(api_key: str) -> None:
  function get_openai_api_url (line 30) | def get_openai_api_url() -> str:
  function set_openai_api_url (line 40) | def set_openai_api_url(api_url: str) -> None:

FILE: src/openai_cli/test_cli.py
  class TestCLI (line 12) | class TestCLI(unittest.TestCase):
    method setUp (line 13) | def setUp(self):
    method test_complete_command (line 17) | def test_complete_command(self, mock_generate, mock_session, mock_url):
    method test_repl_command (line 34) | def test_repl_command(self, mock_generate, mock_session, mock_url):
    method test_model_option (line 52) | def test_model_option(self, mock_generate, mock_session, mock_url):
    method test_max_tokens_option (line 70) | def test_max_tokens_option(self, mock_generate, mock_session, mock_url):
    method test_temperature_option (line 86) | def test_temperature_option(self, mock_generate, mock_session, mock_url):
    method test_token_option (line 103) | def test_token_option(self, mock_generate, mock_set_key, mock_session,...

FILE: src/openai_cli/test_client.py
  class TestClient (line 10) | class TestClient(unittest.TestCase):
    method test_generate_response_success (line 18) | def test_generate_response_success(self, mock_session_cls, mock_get_ke...
    method test_generate_response_error (line 40) | def test_generate_response_error(self, mock_session_cls, mock_get_key,...
    method test_generate_response_custom_url (line 55) | def test_generate_response_custom_url(self, mock_session_cls, mock_get...
    method test_initialize_session_success (line 75) | def test_initialize_session_success(self, mock_session_cls, mock_get_k...
    method test_initialize_session_no_key (line 86) | def test_initialize_session_no_key(self, mock_get_key):

FILE: src/openai_cli/test_config.py
  class TestConfig (line 13) | class TestConfig(unittest.TestCase):
    method test_get_openai_api_key_set (line 15) | def test_get_openai_api_key_set(self, mock_getenv):
    method test_get_openai_api_key_not_set (line 20) | def test_get_openai_api_key_not_set(self, mock_getenv):
    method test_set_openai_api_key (line 25) | def test_set_openai_api_key(self, mock_environ):
    method test_get_openai_api_url_set (line 30) | def test_get_openai_api_url_set(self, mock_getenv):
    method test_get_openai_api_url_not_set (line 36) | def test_get_openai_api_url_not_set(self, mock_getenv):
    method test_set_openai_api_url (line 41) | def test_set_openai_api_url(self, mock_environ):
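
All three test modules indexed above lean on the same unittest.mock.patch idiom, replacing os.getenv or os.environ so the tests never touch real environment state. A minimal self-contained illustration of that pattern, here using patch as a context manager rather than the decorator form the tests use:

```python
import os
from unittest.mock import patch

# Inside the context, os.getenv is replaced by a MagicMock;
# the real function is restored automatically on exit.
with patch("os.getenv") as mock_getenv:
    mock_getenv.return_value = "stub-value"
    value = os.getenv("OPENAI_API_KEY")
    mock_getenv.assert_called_once_with("OPENAI_API_KEY")

# Outside the with-block, the real os.getenv is back in place.
restored = os.getenv("SOME_UNSET_VARIABLE_XYZ")
```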

About this extraction

This page contains the full source code of the peterdemin/openai-cli GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 26 files (58.9 KB), approximately 15.7k tokens, and a symbol index with 32 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
