Full Code of mquan/api2ai for AI

Repository: mquan/api2ai
Branch: main
Commit: a91eb74b50e4
Files: 49
Total size: 399.7 KB

Directory structure:
gitextract_6dvov9b0/

├── .changeset/
│   ├── README.md
│   └── config.json
├── .eslintrc.cjs
├── .gitignore
├── .husky/
│   └── pre-commit
├── .npmignore
├── .npmrc
├── LICENSE
├── README.md
├── core/
│   ├── CHANGELOG.md
│   ├── README.md
│   ├── fix-ono.sh
│   ├── fixtures/
│   │   └── oases/
│   │       ├── invalid-securities.yaml
│   │       ├── no-security.yaml
│   │       └── petstore.yaml
│   ├── index.ts
│   ├── jest.config.js
│   ├── package.json
│   ├── src/
│   │   ├── ai/
│   │   │   ├── __tests__/
│   │   │   │   └── api-agent.test.ts
│   │   │   ├── api-agent.ts
│   │   │   └── tools/
│   │   │       ├── __tests__/
│   │   │       │   ├── parse-arguments.test.ts
│   │   │       │   └── select-operation.test.ts
│   │   │       ├── parse-arguments.ts
│   │   │       └── select-operation.ts
│   │   └── api/
│   │       ├── __tests__/
│   │       │   ├── oas-loader.test.ts
│   │       │   ├── operation.test.ts
│   │       │   └── security.test.ts
│   │       ├── oas-loader.ts
│   │       ├── operation.ts
│   │       └── security.ts
│   └── tsconfig.json
├── package.json
├── packages/
│   ├── eslint-config-custom/
│   │   ├── index.js
│   │   └── package.json
│   └── tsconfig/
│       ├── base.json
│       ├── nextjs.json
│       ├── package.json
│       └── react-library.json
├── server/
│   ├── README.md
│   ├── app/
│   │   ├── layout.tsx
│   │   └── page.tsx
│   ├── next-env.d.ts
│   ├── next.config.js
│   ├── oases/
│   │   └── open-ai.yaml
│   ├── package.json
│   ├── pages/
│   │   └── api/
│   │       ├── api2ai.config.ts
│   │       └── run.ts
│   └── tsconfig.json
└── turbo.json

================================================
FILE CONTENTS
================================================

================================================
FILE: .changeset/README.md
================================================
# Changesets

Hello and welcome! This folder has been automatically generated by `@changesets/cli`, a build tool that works
with multi-package repos, or single-package repos to help you version and publish your code. You can
find the full documentation for it [in our repository](https://github.com/changesets/changesets)

We have a quick list of common questions to get you started engaging with this project in
[our documentation](https://github.com/changesets/changesets/blob/main/docs/common-questions.md)

`yarn changeset`

`yarn changeset version`

`yarn changeset publish`


================================================
FILE: .changeset/config.json
================================================
{
  "$schema": "https://unpkg.com/@changesets/config@2.3.1/schema.json",
  "changelog": "@changesets/cli/changelog",
  "commit": false,
  "fixed": [],
  "linked": [],
  "access": "public",
  "baseBranch": "main",
  "updateInternalDependencies": "patch",
  "ignore": []
}


================================================
FILE: .eslintrc.cjs
================================================
module.exports = {
  root: true,
  // This tells ESLint to load the config from the package `eslint-config-custom`
  extends: ["custom"],
  settings: {
    next: {
      rootDir: ["apps/*/"],
    },
  },
};


================================================
FILE: .gitignore
================================================
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# dependencies
node_modules
.pnp
.pnp.js

# testing
coverage

# next.js
.next/
out/
build
dist/

# misc
.DS_Store
*.pem

# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# local env files
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

# turbo
.turbo

# vercel
.vercel


================================================
FILE: .husky/pre-commit
================================================
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npx lint-staged


================================================
FILE: .npmignore
================================================
.env

.DS_Store
*.pem


================================================
FILE: .npmrc
================================================
auto-install-peers = true


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) [2023] [Quan Nguyen]

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

================================================
FILE: README.md
================================================
# ☁️⇨🤖🧠 api2ai

⚡ Create an API assistant from any OpenAPI Spec ⚡

<img width="680" alt="api2ai demo with multiple APIs" src="https://github.com/mquan/api2ai/assets/138784/6719fdb2-6687-4768-a599-d61d7ab454a6">

## Features

**api2ai** lets you interface with any API in plain English or any other natural language.

- Automatically parses OpenAPI specs and auth schemes
- Selects an endpoint and parses arguments from the user prompt into query and body params
- Invokes the API call and returns the response
- Comes with a local API

<img width="901" alt="api2ai demo with multiple languages" src="https://github.com/mquan/api2ai/assets/138784/aead4548-7d61-4ec6-8228-7c999e182cf0">

## Installation

`npm install --save @api2ai/core`

`yarn add @api2ai/core`

## Quickstart

The following example uses the OpenAI API, essentially creating a single interface for all OpenAI endpoints. Please check out the [API code](https://github.com/mquan/api2ai/blob/main/server/pages/api/run.ts) for more details.

```typescript
import { ApiAgent } from "@api2ai/core";

const OPEN_AI_KEY = "sk-...";

const agent = new ApiAgent({
  apiKey: OPEN_AI_KEY,
  model: "gpt-3.5-turbo-1106", // "gpt-4-1106-preview" also works
  apis: [
    {
      filename: "path/to/open-api-spec.yaml",
      auth: { token: "sk-...." },
    },
    {
      filename: "url/to/another-open-api-spec.yaml",
      auth: { username: "u$er", password: "pa$$word" },
    },
  ],
});

const result = await agent.execute({
  userPrompt: "Create an image of Waikiki beach",
  verbose: true, // default: false
});

// Sanitized output of result
{
  "userPrompt": "Create an image of Waikiki beach",
  "selectedOperation": "createImage",
  "request": {
    "url": "https://api.openai.com/v1/images/generations",
    "method": "post",
    "headers": {
      "Content-Type": "application/json",
      "Authorization": "Bearer sk-..."
    },
    "body": "{\"prompt\":\"Waikiki beach\"}"
  },
  "response": {
    "headers": {},
    "status": 200,
    "body": {
      "created": 1691253354,
      "data": [
        {
          "url": "https://oaidalleapiprodscus.blob.core.windows.net/private/org-mSgbuBJYTxIWjjopcJpDnkwh/user-.../img-ZsEtynyCxFIYTlDfWor0mTJP.png?st=2023-08-05T15%3A35%3A54Z&se=2023-08-05T17%3A35%3A54Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-08-04T18%3A09%3A06Z&ske=2023-08-05T18%3A09%3A06Z&sks=b&skv=2021-08-06&sig=ZYKOP%2BGlz60di2sCiHMWL5ssruXyGMlAUFmQx/aXmqA%3D"
        }
      ]
    }
  }
}
```

## Using the agent via the API

To run the server on your machine, please clone the repo and follow the instructions in the [Development & Contributing](#development--contributing) section.

We use `dotenv` to store environment variables. Please create a `.env` file in the project's root directory and add your OpenAI key:

`OPEN_AI_KEY=sk-...`

Start the server

`yarn dev`

Make an API call

```typescript
fetch("http://localhost:5555/api/run", {
  headers: { "Content-Type": "application/json" },
  method: "POST",
  body: JSON.stringify({
    userPrompt:
      "Create an image of an astronaut swimming with dolphins in clear water ocean",
  }),
});
```

Configure the `server/pages/api/api2ai.config.ts` file to add your own APIs. Follow the existing template in this file. You may add as many spec files as you want.

## OpenAPI Spec

**api2ai** parses valid OAS files to determine which endpoint and parameters to use. Please ensure your OAS contains descriptive parameters and requestBody schema definitions. We currently support OAS version 3.0.0 and above.

Tip: We leverage the `summary` field to determine which endpoint to use. You can tweak your prompt according to the summary text for better results.
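
For illustration, a concise, action-oriented `summary` gives the agent a clear target to match against the user prompt. The fragment below is a hypothetical example, not one of this repo's fixtures:

```yaml
# Illustrative OAS fragment: a descriptive summary helps the agent match
# prompts like "translate 'hello' into French" to this endpoint.
paths:
  /translations:
    post:
      operationId: createTranslation
      summary: Translate text into a target language
      requestBody:
        content:
          application/json:
            schema:
              type: object
              required:
                - text
                - targetLanguage
              properties:
                text:
                  type: string
                  description: The text to translate
                targetLanguage:
                  type: string
                  description: ISO 639-1 code of the target language
```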

### Authentication

Configure your API auth credentials under the `auth` key for applicable APIs:

```typescript
// server/pages/api/api2ai.config.ts
export const configs = {
  model: "gpt-3.5-turbo-1106",
  token: process.env["OPEN_AI_KEY"],
  apis: [
    {
      filename: "path/to/your-open-api-spec.yaml",
      auth: { token: process.env["MY_API_KEY"] },
    },
  ],
};
```

Currently, we support the following auth schemes:

- [Bearer authentication](https://swagger.io/docs/specification/authentication/bearer-authentication/)
- [API keys](https://swagger.io/docs/specification/authentication/api-keys/)
- [Basic auth](https://swagger.io/docs/specification/authentication/basic-authentication/)

Please ensure `securitySchemes` fields are properly defined. Refer to the [Swagger doc](https://swagger.io/docs/specification/authentication/) for more details.
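
For orientation, here is a sketch of how credentials under `auth` typically map onto HTTP request headers for each supported scheme. The `toAuthHeader` helper and the `Auth` type below are hypothetical, written only to illustrate the mapping; they are not part of `@api2ai/core`:

```typescript
// Hypothetical illustration (not part of @api2ai/core) of how the three
// supported auth schemes map to HTTP headers. The shapes mirror the `auth`
// objects shown in the Quickstart.
type Auth =
  | { token: string } // bearer authentication
  | { username: string; password: string } // basic auth
  | { apiKey: string; header?: string }; // API key (header name from securitySchemes)

function toAuthHeader(auth: Auth): Record<string, string> {
  if ("token" in auth) {
    // bearerAuth => Authorization: Bearer <token>
    return { Authorization: `Bearer ${auth.token}` };
  }
  if ("username" in auth) {
    // basicAuth => Authorization: Basic base64(username:password)
    const encoded = Buffer.from(`${auth.username}:${auth.password}`).toString(
      "base64"
    );
    return { Authorization: `Basic ${encoded}` };
  }
  // apiKeyAuth => custom header, e.g. X-Api-Key, defined by the scheme
  return { [auth.header ?? "X-Api-Key"]: auth.apiKey };
}
```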

## Development & Contributing

We use yarn and [turbo](https://turbo.build/). Please clone the repo and install both in order to run the demo and build the packages on your machine.

```
yarn install
yarn build
```

To run the server

`yarn dev`

Access the app at `http://localhost:5555/`

To run all tests

`yarn test`

Run a single test file

`turbo run test -- core/src/api/__tests__/operation.test.ts`


================================================
FILE: core/CHANGELOG.md
================================================
# @api2ai/core

## 0.6.1

### Patch Changes

- Upgrade to gpt-3.5-turbo-1106 model

## 0.6.0

### Patch Changes

- Upgrade openai version 4

## 0.5.0

### Minor Changes

- Support path parameters

## 0.4.0

### Minor Changes

- Accept multiple OAS files and auth on init

## 0.3.1

### Patch Changes

- Support multiple OAS files

## 0.3.0

### Minor Changes

- Fix ono commonjs hack problem in script

## 0.2.1

### Patch Changes

- Patch ono commonjs problem

## 0.2.0

### Minor Changes

- Exclude cjs build

## 0.1.2

### Patch Changes

- Publish with ESM option

## 0.1.1

### Patch Changes

- Create conversational AI from Open API Spec


================================================
FILE: core/README.md
================================================
# ☁️⇨🤖🧠 api2ai

⚡ Create an API assistant from any OpenAPI Spec ⚡

<img width="680" alt="api2ai demo with multiple APIs" src="https://github.com/mquan/api2ai/assets/138784/6719fdb2-6687-4768-a599-d61d7ab454a6">

## Features

**api2ai** lets you interface with any API in plain English or any other natural language.

- Automatically parses OpenAPI specs and auth schemes
- Selects an endpoint and parses arguments from the user prompt into query and body params
- Invokes the API call and returns the response
- Comes with a local API

<img width="901" alt="api2ai demo with multiple languages" src="https://github.com/mquan/api2ai/assets/138784/aead4548-7d61-4ec6-8228-7c999e182cf0">

## Installation

`npm install --save @api2ai/core`

`yarn add @api2ai/core`

## Quickstart

The following example uses the OpenAI API, essentially creating a single interface for all OpenAI endpoints. Please check out the [API code](https://github.com/mquan/api2ai/blob/main/server/pages/api/run.ts) for more details.

```typescript
import { ApiAgent } from "@api2ai/core";

const OPEN_AI_KEY = "sk-...";

const agent = new ApiAgent({
  apiKey: OPEN_AI_KEY,
  model: "gpt-3.5-turbo-1106", // "gpt-4-1106-preview" also works
  apis: [
    {
      filename: "path/to/open-api-spec.yaml",
      auth: { token: "sk-...." },
    },
    {
      filename: "url/to/another-open-api-spec.yaml",
      auth: { username: "u$er", password: "pa$$word" },
    },
  ],
});

const result = await agent.execute({
  userPrompt: "Create an image of Waikiki beach",
  verbose: true, // default: false
});

// Sanitized output of result
{
  "userPrompt": "Create an image of Waikiki beach",
  "selectedOperation": "createImage",
  "request": {
    "url": "https://api.openai.com/v1/images/generations",
    "method": "post",
    "headers": {
      "Content-Type": "application/json",
      "Authorization": "Bearer sk-..."
    },
    "body": "{\"prompt\":\"Waikiki beach\"}"
  },
  "response": {
    "headers": {},
    "status": 200,
    "body": {
      "created": 1691253354,
      "data": [
        {
          "url": "https://oaidalleapiprodscus.blob.core.windows.net/private/org-mSgbuBJYTxIWjjopcJpDnkwh/user-.../img-ZsEtynyCxFIYTlDfWor0mTJP.png?st=2023-08-05T15%3A35%3A54Z&se=2023-08-05T17%3A35%3A54Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-08-04T18%3A09%3A06Z&ske=2023-08-05T18%3A09%3A06Z&sks=b&skv=2021-08-06&sig=ZYKOP%2BGlz60di2sCiHMWL5ssruXyGMlAUFmQx/aXmqA%3D"
        }
      ]
    }
  }
}
```

## Using the agent via the API

To run the server on your machine, please clone the repo and follow the instructions in the [Development & Contributing](#development--contributing) section.

We use `dotenv` to store environment variables. Please create a `.env` file in the project's root directory and add your OpenAI key:

`OPEN_AI_KEY=sk-...`

Start the server

`yarn dev`

Make an API call

```typescript
fetch("http://localhost:5555/api/run", {
  headers: { "Content-Type": "application/json" },
  method: "POST",
  body: JSON.stringify({
    userPrompt:
      "Create an image of an astronaut swimming with dolphins in clear water ocean",
  }),
});
```

Configure the `server/pages/api/api2ai.config.ts` file to add your own APIs. Follow the existing template in this file. You may add as many spec files as you want.

## OpenAPI Spec

**api2ai** parses valid OAS files to determine which endpoint and parameters to use. Please ensure your OAS contains descriptive parameters and requestBody schema definitions. We currently support OAS version 3.0.0 and above.

Tip: We leverage the `summary` field to determine which endpoint to use. You can tweak your prompt according to the summary text for better results.

### Authentication

Configure your API auth credentials under the `auth` key for applicable APIs:

```typescript
// server/pages/api/api2ai.config.ts
export const configs = {
  model: "gpt-3.5-turbo-1106",
  token: process.env["OPEN_AI_KEY"],
  apis: [
    {
      filename: "path/to/your-open-api-spec.yaml",
      auth: { token: process.env["MY_API_KEY"] },
    },
  ],
};
```

Currently, we support the following auth schemes:

- [Bearer authentication](https://swagger.io/docs/specification/authentication/bearer-authentication/)
- [API keys](https://swagger.io/docs/specification/authentication/api-keys/)
- [Basic auth](https://swagger.io/docs/specification/authentication/basic-authentication/)

Please ensure `securitySchemes` fields are properly defined. Refer to the [Swagger doc](https://swagger.io/docs/specification/authentication/) for more details.

## Development & Contributing

We use yarn and [turbo](https://turbo.build/). Please clone the repo and install both in order to run the demo and build the packages on your machine.

```
yarn install
yarn build
```

To run the server

`yarn dev`

Access the app at `http://localhost:5555/`

To run all tests

`yarn test`

Run a single test file

`turbo run test -- core/src/api/__tests__/operation.test.ts`


================================================
FILE: core/fix-ono.sh
================================================
# Tighten the ono package's CommonJS detection in the built output
# (BSD/macOS sed: -i '' edits in place without creating a backup file).
sed -i '' -e 's,typeof module === "object" && typeof module.exports === "object",typeof module === "object" \&\& typeof module.exports === "object" \&\& typeof module.exports.default === "object",g' dist/*

================================================
FILE: core/fixtures/oases/invalid-securities.yaml
================================================
openapi: "3.0.0"
info:
  version: 1.0.0
  title: Swagger Petstore
  license:
    name: MIT
servers:
  - url: http://petstore.swagger.io/v1
paths:
  /pets:
    get:
      summary: List all pets
      operationId: listPets
      security:
        - basicAuth: []
      tags:
        - pets
      parameters:
        - name: limit
          in: query
          description: How many items to return at one time (max 100)
          required: false
          schema:
            type: integer
            maximum: 100
            format: int32
      responses:
        "200":
          description: A paged array of pets
          headers:
            x-next:
              description: A link to the next page of responses
              schema:
                type: string
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Pets"
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
components:
  schemas:
    Pet:
      type: object
      required:
        - id
        - name
      properties:
        id:
          type: integer
          format: int64
        name:
          type: string
        tag:
          type: string
    Pets:
      type: array
      maxItems: 100
      items:
        $ref: "#/components/schemas/Pet"
    Error:
      type: object
      required:
        - code
        - message
      properties:
        code:
          type: integer
          format: int32
        message:
          type: string


================================================
FILE: core/fixtures/oases/no-security.yaml
================================================
openapi: "3.0.0"
info:
  version: 1.0.0
  title: Swagger Petstore
  license:
    name: MIT
servers:
  - url: http://petstore.swagger.io/v1
paths:
  /pets:
    get:
      summary: List all pets
      operationId: listPets
      tags:
        - pets
      parameters:
        - name: limit
          in: query
          description: How many items to return at one time (max 100)
          required: false
          schema:
            type: integer
            maximum: 100
            format: int32
      responses:
        "200":
          description: A paged array of pets
          headers:
            x-next:
              description: A link to the next page of responses
              schema:
                type: string
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Pets"
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
components:
  schemas:
    Pet:
      type: object
      required:
        - id
        - name
      properties:
        id:
          type: integer
          format: int64
        name:
          type: string
        tag:
          type: string
    Pets:
      type: array
      maxItems: 100
      items:
        $ref: "#/components/schemas/Pet"
    Error:
      type: object
      required:
        - code
        - message
      properties:
        code:
          type: integer
          format: int32
        message:
          type: string


================================================
FILE: core/fixtures/oases/petstore.yaml
================================================
openapi: "3.0.0"
info:
  version: 1.0.0
  title: Swagger Petstore
  license:
    name: MIT
servers:
  - url: http://petstore.swagger.io/v1
paths:
  /pets:
    get:
      summary: List all pets
      operationId: listPets
      security:
        - basicAuth: []
      tags:
        - pets
      parameters:
        - name: limit
          in: query
          description: How many items to return at one time (max 100)
          required: false
          schema:
            type: integer
            maximum: 100
            format: int32
      responses:
        "200":
          description: A paged array of pets
          headers:
            x-next:
              description: A link to the next page of responses
              schema:
                type: string
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Pets"
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
    post:
      summary: Create a pet
      description: Create a pet from a pet name.
      operationId: createPets
      requestBody:
        content:
          application/json:
            schema:
              type: object
              required:
                - name
              properties:
                name:
                  type: string
                  description: Name of the pet
      tags:
        - pets
      responses:
        "201":
          description: Null response
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
  /pets/{petId}:
    get:
      summary: Info for a specific pet
      operationId: showPetById
      security:
        - apiKeyAuth: []
        - bearerAuth: []
      tags:
        - pets
      parameters:
        - name: petId
          in: path
          required: true
          description: The id of the pet to retrieve
          schema:
            type: string
      responses:
        "200":
          description: Expected response to a valid request
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Pet"
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
components:
  schemas:
    Pet:
      type: object
      required:
        - id
        - name
      properties:
        id:
          type: integer
          format: int64
        name:
          type: string
        tag:
          type: string
    Pets:
      type: array
      maxItems: 100
      items:
        $ref: "#/components/schemas/Pet"
    Error:
      type: object
      required:
        - code
        - message
      properties:
        code:
          type: integer
          format: int32
        message:
          type: string

  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      description: Bearer Authentication
    basicAuth:
      type: http
      scheme: basic
      description: Basic Authentication
    apiKeyAuth:
      type: apiKey
      in: header
      name: X-Api-Key
security:
  - bearerAuth: []


================================================
FILE: core/index.ts
================================================
export { default as ApiAgent } from "./src/ai/api-agent";


================================================
FILE: core/jest.config.js
================================================
/** @type {import('ts-jest').JestConfigWithTsJest} */

module.exports = {
  roots: ["<rootDir>"],
  transform: {
    "^.+\\.tsx?$": "ts-jest",
  },
  testPathIgnorePatterns: ["/node_modules/"],
  moduleFileExtensions: ["ts", "tsx", "js", "jsx", "json", "node"],
  modulePathIgnorePatterns: [
    "<rootDir>/test/__fixtures__",
    "<rootDir>/node_modules",
    "<rootDir>/dist",
  ],
  testMatch: ["<rootDir>/**/__tests__/**/*.test.ts"],
  preset: "ts-jest",
  testEnvironment: "node",
};


================================================
FILE: core/package.json
================================================
{
  "name": "@api2ai/core",
  "version": "0.6.1",
  "main": "./dist/index.js",
  "types": "./dist/index.d.ts",
  "license": "MIT",
  "keywords": [
    "Open API Spec",
    "OpenAPI",
    "ai",
    "api",
    "openai"
  ],
  "scripts": {
    "lint": "eslint \"**/*.ts*\"",
    "test": "jest",
    "build": "tsup ./index.ts --format cjs,esm --dts --clean"
  },
  "devDependencies": {
    "@changesets/cli": "^2.26.2",
    "eslint": "^7.32.0",
    "eslint-config-custom": "*",
    "@types/jest": "^26.0.22",
    "jest": "^29.6.1",
    "ts-jest": "^29.1.1",
    "tsconfig": "*",
    "tsup": "^7.1.0",
    "typescript": "^4.5.2"
  },
  "dependencies": {
    "openai": "^4.0.1",
    "swagger-parser": "^10.0.3"
  }
}


================================================
FILE: core/src/ai/__tests__/api-agent.test.ts
================================================
import path from "path";

import ApiAgent from "../api-agent";

let selectOperationResponse: any;
let parseArgsResponse: any;
let errorData: any;

let petResponse: Object;
let responseHeaders = { "X-Response-Status": "Complete" };
let responseStatus = 201;

global.fetch = jest.fn(() =>
  Promise.resolve({
    headers: responseHeaders,
    status: responseStatus,
    json: () => Promise.resolve(petResponse),
  })
) as jest.Mock;

jest.mock("openai", () => {
  return class MockedOpenAI {
    apiKey: string;

    chat: any = {
      completions: {
        create: ({ model, messages, functions }: any) => {
          if (errorData) {
            throw errorData;
          }

          if (
            messages[0].role === "system" &&
            messages[0].content.includes("Parse user input into arguments")
          ) {
            return Promise.resolve(parseArgsResponse);
          } else if (
            messages[0].content.includes("items in the following list")
          ) {
            return Promise.resolve(selectOperationResponse);
          }
        },
      },
    };

    constructor({ apiKey }: any) {
      this.apiKey = apiKey;
    }
  };
});

describe("ApiAgent", () => {
  const filename: string = path.join(
    __dirname,
    "../../../fixtures/oases/petstore.yaml"
  );
  let context: Object;
  let userPrompt: string;

  describe("#execute", () => {
    let agent: ApiAgent;

    beforeEach(async () => {
      userPrompt = "add new pet named Skip";

      context = { token: "my-token" };

      const openAIKey = "openai-api-key";
      agent = new ApiAgent({
        apiKey: openAIKey,
        model: "gpt-3.5-turbo-1106",
        apis: [{ filename }],
      });

      // Mocked data
      parseArgsResponse = {
        choices: [
          {
            message: { function_call: { arguments: '{ "name": "Sticky" }' } },
          },
        ],
      };

      selectOperationResponse = {
        choices: [{ message: { content: "Create a pet." } }],
      };

      petResponse = { id: 1, name: "Sticky" };
    });

    describe("when not verbose", () => {
      test("using a prompt that matches one of the operations", async () => {
        const result = await agent.execute({
          userPrompt,
          context,
        });

        expect(result).toEqual({
          userPrompt,
          selectedOperation: "createPets",
          response: {
            headers: responseHeaders,
            status: responseStatus,
            body: petResponse,
          },
        });
      });
    });

    describe("when verbose", () => {
      test("using a prompt that matches one of the operations", async () => {
        const result = await agent.execute({
          userPrompt,
          context,
          verbose: true,
        });

        expect(result).toEqual({
          userPrompt,
          selectedOperation: "createPets",
          request: {
            url: "http://petstore.swagger.io/v1/pets",
            method: "post",
            headers: {
              Authorization: "Bearer my-token",
              "Content-Type": "application/json",
            },
            body: JSON.stringify({ name: "Sticky" }),
          },
          response: {
            headers: responseHeaders,
            status: responseStatus,
            body: petResponse,
          },
        });
      });
    });
  });
});


================================================
FILE: core/src/ai/api-agent.ts
================================================
import Operation from "../api/operation";
import { parse } from "../api/oas-loader";
import { selectOperation } from "./tools/select-operation";
import { parseArguments } from "./tools/parse-arguments";

const DEFAULT_CHAT_MODEL = "gpt-3.5-turbo-1106";

interface ApiInput {
  filename: string;
  auth?: any;
}

interface AgentInput {
  apiKey: string;
  model?: string;
  apis: ApiInput[];
}

export default class ApiAgent {
  apiKey: string;
  model: string = DEFAULT_CHAT_MODEL;
  apis: ApiInput[] = [];
  operations: Operation[] = [];

  constructor({ apiKey, model, apis }: AgentInput) {
    this.apiKey = apiKey;
    this.model = model || DEFAULT_CHAT_MODEL;
    this.apis = apis;
  }

  /*
    Perform the command in two AI calls because:
      1. OpenAI currently supports only 64 functions, which doesn't work for large OAS files.
      2. Cost saving: including all functions + args definitions uses up a lot of tokens.
    Strategy:
    Step 1: Select an operation based on the user prompt text.
    Step 2: Invoke function calling for the matched operation, leveraging the AI to parse user input into args in the same call.
    Step 3: Make the API call.
  */
  async execute({
    userPrompt,
    context,
    verbose = false,
  }: {
    userPrompt: string;
    context?: any;
    verbose?: boolean;
  }) {
    await this._loadOperations();

    const operation = await selectOperation({
      userPrompt,
      operations: this.operations,
      model: this.model,
      openaiApiKey: this.apiKey,
    });

    if (operation) {
      const parsedParams = await parseArguments({
        userPrompt,
        model: this.model,
        openaiApiKey: this.apiKey,
        functionSpec: operation.toFunction(),
      });

      const apiResult = await operation.sendRequest({
        parsedParams,
        headers: context?.headers || {},
        authData: context,
      });

      return {
        userPrompt,
        selectedOperation: operation.operationId(),
        ...(verbose ? apiResult : { response: apiResult.response }),
      };
    } else {
      throw new Error(`Cannot find API for '${userPrompt}'`);
    }
  }

  async _loadOperations() {
    if (this.operations.length) {
      return;
    }

    const apiCollections = await Promise.all(
      this.apis.map((api) => parse(api))
    );
    this.operations = apiCollections.flat();
  }
}


================================================
FILE: core/src/ai/tools/__tests__/parse-arguments.test.ts
================================================
import path from "path";

import { parseArguments } from "../parse-arguments";

import Operation from "../../../api/operation";
import { parse } from "../../../api/oas-loader";

let parseArgsResponse: any;
let errorData: any;

jest.mock("openai", () => {
  return class MockedOpenAI {
    apiKey: string;

    chat: any = {
      completions: {
        create: ({ model, messages, functions }: any) => {
          if (errorData) {
            throw errorData;
          }

          if (
            messages[0].role === "system" &&
            messages[0].content.includes("Parse user input into arguments")
          ) {
            return Promise.resolve(parseArgsResponse);
          }
        },
      },
    };

    constructor({ apiKey }: any) {
      this.apiKey = apiKey;
    }
  };
});

describe("parseArguments", () => {
  const filename: string = path.join(
    __dirname,
    "../../../../fixtures/oases/petstore.yaml"
  );
  let operations: Operation[];
  let operation: Operation;
  let functionSpec: any;

  beforeEach(async () => {
    operations = await parse({ filename });
    operation = operations[1];
    functionSpec = operation.toFunction();
    errorData = null;
  });

  test("when arguments can be parsed successfully", async () => {
    parseArgsResponse = {
      choices: [
        { message: { function_call: { arguments: '{ "name": "Sticky" }' } } },
      ],
    };

    const result = await parseArguments({
      userPrompt: "Add a new pet named Sticky",
      openaiApiKey: "secretKey",
      model: "gpt-3.5-turbo-1106",
      functionSpec,
    });

    expect(result).toEqual({ name: "Sticky" });
  });

  test("when there are no arguments", async () => {
    parseArgsResponse = {
      choices: [{ message: { function_call: { arguments: "{}" } } }],
    };

    const result = await parseArguments({
      userPrompt: "Add a new pet named Sticky",
      openaiApiKey: "secretKey",
      model: "gpt-3.5-turbo-1106",
      functionSpec,
    });

    expect(result).toEqual({});
  });

  describe("when operation does not have any parameters", () => {
    beforeEach(() => {
      operation = operations[0];
      functionSpec = operation.toFunction();
    });

    test("does not hit AI and return empty object", async () => {
      const result = await parseArguments({
        userPrompt: "Add a new pet named Sticky",
        openaiApiKey: "secretKey",
        model: "gpt-3.5-turbo-1106",
        functionSpec,
      });

      expect(result).toEqual({});
    });
  });

  test("When there is an error with the request", async () => {
    errorData = new Error("The model `gpt-3.5-turbo-06139` does not exist");

    await expect(
      parseArguments({
        userPrompt: "Add a new pet named Sticky",
        openaiApiKey: "secretKey",
        model: "gpt-3.5-turbo-1106",
        functionSpec,
      })
    ).rejects.toThrow(
      "There's an error parsing arguments: The model `gpt-3.5-turbo-06139` does not exist"
    );
  });
});


================================================
FILE: core/src/ai/tools/__tests__/select-operation.test.ts
================================================
import path from "path";

import { selectOperation } from "../select-operation";
import { parse } from "../../../api/oas-loader";

let selectOperationResponse: any;
let errorData: any;

jest.mock("openai", () => {
  return class MockedOpenAI {
    apiKey: string;

    chat: any = {
      completions: {
        create: ({ model, messages, functions }: any) => {
          if (errorData) {
            throw errorData;
          }

          if (messages[0].content.includes("items in the following list")) {
            return Promise.resolve(selectOperationResponse);
          }
        },
      },
    };

    constructor({ apiKey }: any) {
      this.apiKey = apiKey;
    }
  };
});

describe("selectOperation", () => {
  const filename: string = path.join(
    __dirname,
    "../../../../fixtures/oases/petstore.yaml"
  );
  let operations: any;

  beforeEach(async () => {
    selectOperationResponse = {};
    errorData = null;
    operations = await parse({ filename });
  });

  test("When an operation is found", async () => {
    selectOperationResponse = {
      choices: [{ message: { content: "Create a pet." } }],
    };
    const result = await selectOperation({
      userPrompt: "Add a new pet named Sticky",
      openaiApiKey: "secretKey",
      model: "gpt-3.5-turbo-1106",
      operations,
    });

    expect(result?.summary()).toEqual("Create a pet");
  });

  test("When the operation cannot be found", async () => {
    selectOperationResponse = { choices: [] };
    const result = await selectOperation({
      userPrompt: "Add a new pet named Sticky",
      openaiApiKey: "secretKey",
      model: "gpt-3.5-turbo-1106",
      operations,
    });

    expect(result).toEqual(null);
  });

  test("When AI hallucinates", async () => {
    selectOperationResponse = {
      choices: [{ message: { content: "Visit a zoo" } }],
    };

    const result = await selectOperation({
      userPrompt: "Add a new pet named Sticky",
      openaiApiKey: "secretKey",
      model: "gpt-3.5-turbo-1106",
      operations,
    });

    expect(result).toEqual(null);
  });

  test("When there is an error with the request", async () => {
    errorData = new Error("The model `gpt-3.5-turbo-06139` does not exist");

    await expect(
      selectOperation({
        userPrompt: "Add a new pet named Sticky",
        openaiApiKey: "secretKey",
        model: "gpt-3.5-turbo-1106",
        operations,
      })
    ).rejects.toThrow(
      "There's an error selecting operation: The model `gpt-3.5-turbo-06139` does not exist"
    );
  });
});


================================================
FILE: core/src/ai/tools/parse-arguments.ts
================================================
import OpenAI from "openai";

const SYSTEM_PROMPT =
  "Parse user input into arguments. Leave missing parameters blank. Do not make up any information not in user input.";

interface ParseArgumentsInput {
  userPrompt: string;
  openaiApiKey: string;
  model: string;
  functionSpec: any;
}

export const parseArguments = async ({
  userPrompt,
  model,
  openaiApiKey,
  functionSpec,
}: ParseArgumentsInput) => {
  // Skip parsing when the function takes no parameters.
  if (Object.keys(functionSpec.parameters).length === 0) {
    return {};
  }

  try {
    const openai = new OpenAI({ apiKey: openaiApiKey });

    const chatCompletion: any = await openai.chat.completions.create({
      model,
      messages: [
        { role: "system", content: SYSTEM_PROMPT },
        { role: "user", content: userPrompt },
      ],
      functions: [functionSpec],
    });

    const args = chatCompletion.choices[0]?.message?.function_call?.arguments;
    // Fall back to an empty object when the model returns no arguments,
    // matching the early-exit case above.
    return args ? JSON.parse(args) : {};
  } catch (error: any) {
    throw new Error(`There's an error parsing arguments: ${error.message}`);
  }
};
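For orientation, the `functionSpec` consumed above is the OpenAI function-calling shape that `Operation#toFunction()` produces. The example below mirrors the petstore `createPets` fixture used in the tests elsewhere in this repo and shows when the early-exit guard fires; it is illustrative data, not an export of this module.

```typescript
// Example functionSpec, mirroring the petstore "createPets" fixture from the
// test suite.
const functionSpec = {
  name: "createPets",
  description: "Create a pet",
  parameters: {
    type: "object",
    properties: {
      name: { type: "string", description: "Name of the pet" },
    },
    required: ["name"],
  },
};

// The guard in parseArguments skips the AI call only when `parameters` is
// completely empty ({}), as toFunction() emits for body-less operations.
const skipsAi = Object.keys(functionSpec.parameters).length === 0;
// skipsAi === false here; a spec with `parameters: {}` would return {}
// without hitting OpenAI.
```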


================================================
FILE: core/src/ai/tools/select-operation.ts
================================================
import OpenAI from "openai";

import Operation from "../../api/operation";

const selectOperationPrompt = ({
  operations,
  userPrompt,
}: {
  operations: Operation[];
  userPrompt: string;
}) => {
  const list = operations.map((op: Operation) => op.summary());
  return `You must respond with one of the items in the following list: ${JSON.stringify(
    list
  )}. Do not return anything if there's no match. Do not make up any information not provided in the list. Which item is described by '${userPrompt}'?`;
};

interface SelectOperationInput {
  userPrompt: string;
  openaiApiKey: string;
  model: string;
  operations: Operation[];
}

export const selectOperation = async ({
  userPrompt,
  operations,
  openaiApiKey,
  model,
}: SelectOperationInput) => {
  const openai = new OpenAI({ apiKey: openaiApiKey });

  const prompt = selectOperationPrompt({ operations, userPrompt });

  try {
    const chatCompletion: any = await openai.chat.completions.create({
      model,
      messages: [{ role: "user", content: prompt }],
    });

    if (chatCompletion.choices?.length) {
      const matchedSummary = chatCompletion.choices[0].message.content.replace(
        /\.$/,
        ""
      );
      return (
        operations.find((op: Operation) => op.summary() === matchedSummary) ||
        null
      );
    } else {
      return null;
    }
  } catch (error: any) {
    throw new Error(`There's an error selecting operation: ${error.message}`);
  }
};
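The constrained-choice prompt built by `selectOperationPrompt` can be reproduced standalone. The `buildPrompt` helper below is a hypothetical rewrite for illustration, not an export of this module; it only shows how the operation summaries are serialized into the instruction.

```typescript
// Illustrative rebuild of the constrained-choice prompt: the model is told to
// answer with an exact item from the serialized summary list, or nothing.
const buildPrompt = (summaries: string[], userPrompt: string): string =>
  `You must respond with one of the items in the following list: ${JSON.stringify(
    summaries
  )}. Do not return anything if there's no match. Do not make up any information not provided in the list. Which item is described by '${userPrompt}'?`;

const prompt = buildPrompt(
  ["List all pets", "Create a pet", "Info for a specific pet"],
  "Add a new pet named Sticky"
);
// The prompt embeds the JSON-serialized summaries and the user's text
// verbatim, which is what makes the summary-equality match above workable.
```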


================================================
FILE: core/src/api/__tests__/oas-loader.test.ts
================================================
import path from "path";
import { parse } from "../oas-loader";

describe("#parse", () => {
  const filename: string = path.join(
    __dirname,
    "../../../fixtures/oases/petstore.yaml"
  );

  test("parsing open api spec into operations", async () => {
    const operations = await parse({ filename });

    expect(operations.map((o) => o.summary())).toEqual([
      "List all pets",
      "Create a pet",
      "Info for a specific pet",
    ]);
  });

  describe("having auth data", () => {
    test("adding auth to every operation", async () => {
      const auth = { token: "foobar" };
      const operations = await parse({ filename, auth });

      operations.forEach((op) => {
        expect(op.auth).toEqual(auth);
      });
    });
  });

  describe("securities", () => {
    test("parses security specifications", async () => {
      const operations = await parse({ filename });

      // When operation specifies security
      expect(operations[0].securities.length).toEqual(1);
      expect(operations[0].securities[0].type).toEqual("http");
      expect(operations[0].securities[0].scheme).toEqual("basic");

      // When operation does not specify security
      expect(operations[1].securities.length).toEqual(1);
      expect(operations[1].securities[0].type).toEqual("http");
      expect(operations[1].securities[0].scheme).toEqual("bearer");

      // When operation supports multiple security schemes
      const multipleSecurities = operations[2].securities;
      expect(multipleSecurities.length).toEqual(2);
      expect(multipleSecurities[0].type).toEqual("apiKey");
      expect(multipleSecurities[0].inKey).toEqual("header");
      expect(multipleSecurities[0].name).toEqual("X-Api-Key");
      expect(multipleSecurities[1].type).toEqual("http");
      expect(multipleSecurities[1].scheme).toEqual("bearer");
    });

    describe("when no security schemes defined", () => {
      test("parses yaml without security", async () => {
        console.warn = jest.fn();

        const noSecurityFile: string = path.join(
          __dirname,
          "../../../fixtures/oases/no-security.yaml"
        );
        const operations = await parse({ filename: noSecurityFile });

        expect(console.warn).toHaveBeenCalledWith(
          "No `securitySchemes` found in this API spec."
        );
        expect(operations.map((op) => op.securities)).toEqual([[]]);
      });
    });

    describe("when reference an invalid security", () => {
      test("throws an error", async () => {
        const invalidSecurityFile: string = path.join(
          __dirname,
          "../../../fixtures/oases/invalid-securities.yaml"
        );

        await expect(parse({ filename: invalidSecurityFile })).rejects.toThrow(
          "Invalid security 'basicAuth' reference."
        );
      });
    });
  });
});


================================================
FILE: core/src/api/__tests__/operation.test.ts
================================================
import Operation from "../operation";
import Security from "../security";

const group = "petstore";
const httpMethod = "post";
const baseUrl = "http://petstore.swagger.io/v1";
const path = "/pets";
const details = {
  summary: "Create a pet",
  description: "Create a pet from a pet name.",
  operationId: "createPets",
  requestBody: {
    content: {
      "application/json": {
        schema: {
          type: "object",
          properties: {
            name: {
              type: "string",
              description: "Name of the pet",
              required: true,
            },
          },
        },
      },
    },
  },
  tags: ["pets"],
  responses: {
    "201": { description: "Null response" },
    default: {
      description: "unexpected error",
      content: {
        "application/json": {
          schema: {
            type: "object",
            required: ["code", "message"],
            properties: {
              code: { type: "integer", format: "int32" },
              message: { type: "string" },
            },
          },
        },
      },
    },
  },
};
let securities = [new Security({ type: "http", scheme: "basic" })];

let responseHeaders = { "X-Response-Status": "Complete" };
let responseStatus = 201;

const createOperation = () => {
  return new Operation({
    group,
    httpMethod,
    baseUrl,
    path,
    details,
    securities,
  });
};

let petResponse: Object;
global.fetch = jest.fn(() =>
  Promise.resolve({
    headers: responseHeaders,
    status: responseStatus,
    json: () => Promise.resolve(petResponse),
  })
) as jest.Mock;

describe("Operation", () => {
  let operation: Operation;

  beforeEach(() => {
    securities = [new Security({ type: "http", scheme: "basic" })];
    operation = createOperation();
  });

  describe("#url", () => {
    test("returns operation full URL", () => {
      expect(operation.url()).toEqual("http://petstore.swagger.io/v1/pets");
    });

    describe("when url contains path params", () => {
      beforeEach(() => {
        operation = new Operation({
          group,
          securities,
          httpMethod: "get",
          baseUrl,
          path: "/pets/{petId}",
          details: {
            summary: "Info for a specific pet",
            operationId: "showPetById",
            parameters: [
              {
                name: "petId",
                in: "path",
                required: true,
                description: "The id of the pet to retrieve",
                schema: {
                  type: "string",
                },
              },
            ],
          },
        });
      });

      test("returns URL with replaced path params", () => {
        expect(operation.url({ petId: "sticky" })).toEqual(
          "http://petstore.swagger.io/v1/pets/sticky"
        );
      });
    });
  });

  describe("#group", () => {
    test("returns API group name", () => {
      expect(operation.group).toEqual("petstore");
    });
  });

  describe("#summary", () => {
    test("returns operation summary", () => {
      expect(operation.summary()).toEqual("Create a pet");
    });

    describe("when operation has a period at the end", () => {
      beforeEach(() => {
        const details2 = {
          summary: "Create a pet.",
          description: "Create a pet from a pet name.",
          operationId: "createPets",
          requestBody: {
            content: {
              "application/json": {
                schema: {
                  type: "object",
                  properties: {
                    name: {
                      type: "string",
                      description: "Name of the pet",
                      required: true,
                    },
                  },
                },
              },
            },
          },
        };

        operation = new Operation({
          group,
          httpMethod,
          baseUrl,
          path,
          details: details2,
          securities,
        });
      });

      test("returns summary without the period", () => {
        expect(operation.summary()).toEqual("Create a pet");
      });
    });
  });

  describe("#description", () => {
    test("returns description", () => {
      expect(operation.description()).toEqual("Create a pet from a pet name.");
    });
  });

  describe("#toFunction", () => {
    describe("when requestBody is present", () => {
      test("returns function with parameters", () => {
        expect(operation.toFunction()).toEqual({
          name: "createPets",
          description: "Create a pet",
          parameters: {
            type: "object",
            properties: {
              name: {
                type: "string",
                description: "Name of the pet",
              },
            },
            required: ["name"],
          },
        });
      });
    });

    describe("when parameter contains `required` field", () => {
      beforeEach(() => {
        const details2 = {
          summary: "Create a pet.",
          description: "Create a pet from a pet name.",
          operationId: "createPets",
          requestBody: {
            content: {
              "application/json": {
                schema: {
                  type: "object",
                  properties: {
                    name: {
                      type: "string",
                      description: "Name of the pet",
                      required: true,
                    },
                  },
                },
              },
            },
          },
        };

        operation = new Operation({
          group,
          httpMethod,
          baseUrl,
          path,
          details: details2,
          securities,
        });
      });

      test("returns function with parameters", () => {
        expect(operation.toFunction()).toEqual({
          name: "createPets",
          description: "Create a pet",
          parameters: {
            type: "object",
            properties: {
              name: {
                type: "string",
                description: "Name of the pet",
              },
            },
            required: ["name"],
          },
        });
      });
    });

    describe("when parameters contain an array", () => {
      beforeEach(() => {
        const details2 = {
          summary: "Create a chat completion",
          description: "Create a chat completion given user prompt",
          operationId: "createChatCompletion",
          requestBody: {
            content: {
              "application/json": {
                schema: {
                  type: "object",
                  properties: {
                    messages: {
                      type: "array",
                      minItems: 1,
                      description: "List of chat messages",
                      items: {
                        type: "object",
                        properties: {
                          role: {
                            type: "string",
                            enum: ["system", "user", "assistant", "function"],
                          },
                          content: {
                            type: "string",
                            nullable: true,
                          },
                        },
                        required: ["role"],
                      },
                      required: true,
                    },
                  },
                },
              },
            },
          },
        };

        operation = new Operation({
          group,
          httpMethod,
          baseUrl,
          path,
          details: details2,
          securities,
        });
      });

      test("returns function with parsed parameters", () => {
        expect(operation.toFunction()).toEqual({
          name: "createChatCompletion",
          description: "Create a chat completion",
          parameters: {
            type: "object",
            properties: {
              messages: {
                type: "array",
                minItems: 1,
                description: "List of chat messages",
                items: {
                  type: "object",
                  properties: {
                    role: {
                      type: "string",
                      enum: ["system", "user", "assistant", "function"],
                    },
                    content: {
                      type: "string",
                      nullable: true,
                    },
                  },
                  required: ["role"],
                },
              },
            },
            required: ["messages"],
          },
        });
      });
    });

    describe("when requestBody is empty", () => {
      beforeEach(() => {
        const detailsWithoutResponseBody = {
          summary: "Create a pet",
          description: "Create a pet from a pet name.",
          operationId: "createPets",
          tags: ["pets"],
          requestBody: {
            content: {
              "application/xml": {},
            },
          },
          responses: {
            "201": { description: "Null response" },
            default: {
              description: "unexpected error",
              content: {
                "application/json": {
                  schema: {
                    type: "object",
                    required: ["code", "message"],
                    properties: {
                      code: { type: "integer", format: "int32" },
                      message: { type: "string" },
                    },
                  },
                },
              },
            },
          },
        };

        operation = new Operation({
          group,
          httpMethod,
          baseUrl,
          path,
          details: detailsWithoutResponseBody,
          securities,
        });
      });

      test("returns empty parameters", () => {
        expect(operation.toFunction()).toEqual({
          name: "createPets",
          description: "Create a pet",
          parameters: {},
        });
      });
    });

    describe("when requestBody is not defined", () => {
      beforeEach(() => {
        const detailsWithoutResponseBody = {
          summary: "Create a pet",
          description: "Create a pet from a pet name.",
          operationId: "createPets",
          tags: ["pets"],
          responses: {
            "201": { description: "Null response" },
            default: {
              description: "unexpected error",
              content: {
                "application/json": {
                  schema: {
                    type: "object",
                    required: ["code", "message"],
                    properties: {
                      code: { type: "integer", format: "int32" },
                      message: { type: "string" },
                    },
                  },
                },
              },
            },
          },
        };

        operation = new Operation({
          group,
          httpMethod,
          baseUrl,
          path,
          details: detailsWithoutResponseBody,
          securities,
        });
      });

      test("returns empty parameters", () => {
        expect(operation.toFunction()).toEqual({
          name: "createPets",
          description: "Create a pet",
          parameters: {},
        });
      });
    });
  });

  describe("#sendRequest", () => {
    let headers: Object;
    let parsedParams: any;
    let body: Object;
    let expectedBody: any;
    let authData: any;

    beforeEach(() => {
      headers = { "X-Content-Medata": "foobar" };
      body = { name: "Sticky" };
      expectedBody = JSON.stringify(body);
      parsedParams = { ...body, foo: "bar", a: 1 };
      authData = { username: "u$er", password: "Pa$$word" };
      petResponse = { id: 1, name: "Sticky" };
    });

    test("makes a request", async () => {
      const result = await operation.sendRequest({
        headers,
        parsedParams,
        authData,
      });

      expect(fetch).toHaveBeenCalledWith("http://petstore.swagger.io/v1/pets", {
        method: "post",
        body: expectedBody,
        headers: {
          Authorization: "Basic dSRlcjpQYSQkd29yZA==",
          "Content-Type": "application/json",
          "X-Content-Medata": "foobar",
        },
      });

      expect(result).toEqual({
        request: {
          url: "http://petstore.swagger.io/v1/pets",
          method: "post",
          headers: {
            Authorization: "Basic dSRlcjpQYSQkd29yZA==",
            "Content-Type": "application/json",
            "X-Content-Medata": "foobar",
          },
          body: expectedBody,
        },
        response: {
          headers: responseHeaders,
          status: responseStatus,
          body: { id: 1, name: "Sticky" },
        },
      });
    });

    describe("when request does not have body", () => {
      beforeEach(() => {
        operation = new Operation({
          group,
          httpMethod: "get",
          baseUrl,
          path,
          details,
          securities,
        });
      });

      test("makes a request without body", async () => {
        const result = await operation.sendRequest({
          headers,
          parsedParams,
          authData,
        });

        expect(fetch).toHaveBeenCalledWith(
          "http://petstore.swagger.io/v1/pets",
          {
            method: "get",
            headers: {
              Authorization: "Basic dSRlcjpQYSQkd29yZA==",
              "Content-Type": "application/json",
              "X-Content-Medata": "foobar",
            },
          }
        );

        expect(result).toEqual({
          request: {
            url: "http://petstore.swagger.io/v1/pets",
            method: "get",
            headers: {
              Authorization: "Basic dSRlcjpQYSQkd29yZA==",
              "Content-Type": "application/json",
              "X-Content-Medata": "foobar",
            },
          },
          response: {
            headers: responseHeaders,
            status: responseStatus,
            body: { id: 1, name: "Sticky" },
          },
        });
      });
    });

    describe("when auth data is not present", () => {
      beforeEach(() => {
        authData = undefined;
      });

      describe("when basic auth is required", () => {
        test("throws error", async () => {
          await expect(
            operation.sendRequest({ headers, parsedParams, authData })
          ).rejects.toThrow(
            "`username` and `password` are required for basic auth"
          );
        });
      });

      describe("when no securities required", () => {
        beforeEach(() => {
          securities = [];
          operation = createOperation();
        });

        test("makes request without authorization data", async () => {
          const result = await operation.sendRequest({
            headers,
            parsedParams,
            authData,
          });

          expect(fetch).toHaveBeenCalledWith(
            "http://petstore.swagger.io/v1/pets",
            {
              method: "post",
              body: expectedBody,
              headers: {
                "Content-Type": "application/json",
                "X-Content-Medata": "foobar",
              },
            }
          );

          expect(result).toEqual({
            request: {
              url: "http://petstore.swagger.io/v1/pets",
              method: "post",
              headers: {
                "Content-Type": "application/json",
                "X-Content-Medata": "foobar",
              },
              body: expectedBody,
            },
            response: {
              headers: responseHeaders,
              status: responseStatus,
              body: { id: 1, name: "Sticky" },
            },
          });
        });
      });

      describe("when operation is initialized with auth data", () => {
        beforeEach(() => {
          operation = new Operation({
            group,
            httpMethod,
            baseUrl,
            path,
            details,
            securities,
            auth: { username: "u$er", password: "Pa$$word" },
          });
        });

        test("makes a request with auth data", async () => {
          const result = await operation.sendRequest({
            headers,
            parsedParams,
            authData,
          });

          expect(fetch).toHaveBeenCalledWith(
            "http://petstore.swagger.io/v1/pets",
            {
              method: "post",
              body: expectedBody,
              headers: {
                Authorization: "Basic dSRlcjpQYSQkd29yZA==",
                "Content-Type": "application/json",
                "X-Content-Medata": "foobar",
              },
            }
          );

          expect(result).toEqual({
            request: {
              url: "http://petstore.swagger.io/v1/pets",
              method: "post",
              headers: {
                Authorization: "Basic dSRlcjpQYSQkd29yZA==",
                "Content-Type": "application/json",
                "X-Content-Medata": "foobar",
              },
              body: expectedBody,
            },
            response: {
              headers: responseHeaders,
              status: responseStatus,
              body: { id: 1, name: "Sticky" },
            },
          });
        });
      });
    });
  });
});
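The `Authorization: Basic dSRlcjpQYSQkd29yZA==` value asserted throughout these tests is standard HTTP Basic auth (RFC 7617): the base64 encoding of `username:password`. It can be reproduced directly with Node's `Buffer`, using the fixture credentials from the tests above:

```typescript
// Derive the Basic auth header used in the fixtures: base64("username:password").
const username = "u$er";
const password = "Pa$$word";
const encoded = Buffer.from(`${username}:${password}`).toString("base64");
const header = `Basic ${encoded}`;
// header === "Basic dSRlcjpQYSQkd29yZA=="
```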


================================================
FILE: core/src/api/__tests__/security.test.ts
================================================
import Security from "../security";

describe("Security", () => {
  describe("#constructor", () => {
    describe("valid basic auth", () => {
      test("initializes a basic auth instance", () => {
        const security = new Security({ type: "http", scheme: "basic" });
        expect(security.scheme).toEqual("basic");
      });
    });

    describe("valid bearer auth", () => {
      test("initializes a bearer auth instance", () => {
        const security = new Security({ type: "http", scheme: "bearer" });
        expect(security.scheme).toEqual("bearer");
      });
    });

    describe("valid apiKey auth", () => {
      test("initializes a bearer auth instance", () => {
        const security = new Security({ type: "apiKey", name: "X-Api-Key" });
        expect(security.type).toEqual("apiKey");
      });

      describe("missing name for apiKey", () => {
        test("throws error", () => {
          expect(() => {
            new Security({ type: "apiKey", apiKey: "X-Api-Key" });
          }).toThrow("Security type apiKey requires `name`.");
        });
      });
    });

    describe("invalid type", () => {
      test("throws error", () => {
        expect(() => {
          new Security({ type: "foo", scheme: "basic" });
        }).toThrow("Security type 'foo' is not supported.");
      });
    });

    describe("invalid scheme", () => {
      test("throws error", () => {
        expect(() => {
          new Security({ type: "http", scheme: "complex" });
        }).toThrow("Security scheme 'complex' is not supported.");
      });
    });
  });

  describe("#authData", () => {
    let data: any;
    let authInput: any;
    let securityInstance: Security;

    describe("Basic auth", () => {
      beforeEach(() => {
        authInput = {
          type: "http",
          scheme: "basic",
        };

        securityInstance = new Security(authInput);
      });

      describe("when username and password are provided", () => {
        test("returns auth data", () => {
          data = { username: "user", password: "Pa$$word" };
          const encoded = Buffer.from("user:Pa$$word").toString("base64");

          expect(securityInstance.authData(data)).toEqual({
            Authorization: `Basic ${encoded}`,
          });
        });
      });

      describe("when password is not provided", () => {
        test("throws error", () => {
          data = { username: "user" };

          expect(() => {
            securityInstance.authData(data);
          }).toThrow("`username` and `password` are required for basic auth");
        });
      });

      describe("when password is not provided", () => {
        test("throws error", () => {
          data = { password: "Pa$$word" };

          expect(() => {
            securityInstance.authData(data);
          }).toThrow("`username` and `password` are required for basic auth");
        });
      });

      describe("when auth input is not defined", () => {
        test("throws error", () => {
          data = undefined;

          expect(() => {
            securityInstance.authData(data);
          }).toThrow("`username` and `password` are required for basic auth");
        });
      });
    });

    describe("Bearer auth", () => {
      beforeEach(() => {
        authInput = {
          type: "http",
          scheme: "bearer",
        };

        securityInstance = new Security(authInput);
      });

      describe("when token is provided", () => {
        test("returns auth data", () => {
          data = { token: "my-token" };
          expect(securityInstance.authData(data)).toEqual({
            Authorization: "Bearer my-token",
          });
        });
      });

      describe("when token is not provided", () => {
        test("throws error", () => {
          data = { apiKey: "foobar" };

          expect(() => {
            securityInstance.authData(data);
          }).toThrow("`token` is required for bearer auth");
        });
      });

      describe("when auth data is undefined", () => {
        test("throws error", () => {
          data = undefined;

          expect(() => {
            securityInstance.authData(data);
          }).toThrow("`token` is required for bearer auth");
        });
      });
    });

    describe("API key auth", () => {
      beforeEach(() => {
        authInput = {
          type: "apiKey",
          name: "X-Api-Key",
          in: "header",
        };

        securityInstance = new Security(authInput);
      });

      describe("when api key is provided", () => {
        test("returns auth data", () => {
          data = { "X-Api-Key": "abcdefg-1234" };
          expect(securityInstance.authData(data)).toEqual({
            "X-Api-Key": "abcdefg-1234",
          });
        });
      });

      describe("when api key is not provided", () => {
        test("throws error", () => {
          data = { apiKey: "abcdefg-1234" };

          expect(() => {
            securityInstance.authData(data);
          }).toThrow('"X-Api-Key" is required for API key auth');
        });
      });

      describe("when auth data is undefined", () => {
        test("throws error", () => {
          data = undefined;

          expect(() => {
            securityInstance.authData(data);
          }).toThrow('"X-Api-Key" is required for API key auth');
        });
      });
    });
  });
});


================================================
FILE: core/src/api/oas-loader.ts
================================================
import SwaggerParser from "@apidevtools/swagger-parser";
import Operation from "./operation";
import Security from "./security";

const parseSecurities = (api: any) => {
  if (!api.components?.securitySchemes) {
    console.warn("No `securitySchemes` found in this API spec.");
    return {};
  }

  const securities: any = {};

  for (const [name, data] of Object.entries(api.components?.securitySchemes)) {
    securities[name] = new Security(data);
  }

  return securities;
};

const selectSecurities = ({
  details,
  securities,
  api,
}: {
  details: any;
  securities: any;
  api: any;
}) => {
  /* Notes:
    - there can be multiple security schemes per endpoint/api
    - use the default api definition if endpoint does not override
    - fallback to empty array b/c it's possible an endpoint does not require auth.
  */
  const definedSecurities = details.security || api.security || [];

  return definedSecurities.map((rawSecurity: object) => {
    const name = Object.keys(rawSecurity)[0];
    if (!securities[name]) {
      throw new Error(`Invalid security '${name}' reference.`);
    }
    return securities[name];
  });
};

export const parse = async ({
  filename,
  auth,
}: {
  filename: string;
  auth?: object;
}) => {
  const api = await SwaggerParser.dereference(filename);
  const securities = parseSecurities(api);

  const operations: Operation[] = [];

  for (let path in api.paths) {
    for (let httpMethod in api.paths[path]) {
      const details = api.paths[path][httpMethod];

      const selectedSecurities = selectSecurities({ details, securities, api });

      operations.push(
        new Operation({
          group: api.info?.title,
          httpMethod,
          path,
          baseUrl: api.servers[0].url, // TODO: allow picking baseUrl
          details,
          securities: selectedSecurities,
          auth: auth,
        })
      );
    }
  }

  return operations;
};
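The fallback chain in `selectSecurities` can be illustrated in isolation. The sketch below is not part of the package; the scheme names (`ApiKeyAuth`, `BearerAuth`) are hypothetical examples:

```typescript
// Fallback order used when resolving an endpoint's security requirements:
// the endpoint-level `security` wins, then the document-level default,
// then an empty array (the endpoint requires no auth).
type SecurityRequirement = Record<string, string[]>;

function resolveSecurity(
  endpointSecurity: SecurityRequirement[] | undefined,
  apiSecurity: SecurityRequirement[] | undefined
): SecurityRequirement[] {
  return endpointSecurity || apiSecurity || [];
}

// An endpoint that declares its own requirement overrides the default:
resolveSecurity([{ ApiKeyAuth: [] }], [{ BearerAuth: [] }]);
// An endpoint with no declaration inherits the document-level default:
resolveSecurity(undefined, [{ BearerAuth: [] }]);
// Neither defined: the endpoint is treated as unauthenticated.
resolveSecurity(undefined, undefined);
```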


================================================
FILE: core/src/api/operation.ts
================================================
import Security from "./security";

interface OperationInput {
  group: string;
  httpMethod: string;
  baseUrl: string;
  path: string;
  details: any;
  securities: Security[];
  auth?: any;
}

const EMPTY_ARGUMENT: object = {};

export default class Operation {
  group: string;
  httpMethod: string = "get";
  baseUrl: string;
  path: string;
  details: any;
  securities: Security[];
  auth: any;

  constructor({
    group,
    httpMethod,
    baseUrl,
    path,
    details,
    securities,
    auth,
  }: OperationInput) {
    this.group = group;
    this.httpMethod = httpMethod.toLowerCase();
    this.baseUrl = baseUrl;
    this.path = path;
    this.details = details;
    this.securities = securities;
    this.auth = auth;
  }

  operationId(): string {
    return this.details["operationId"];
  }

  summary(): string {
    return this.details["summary"].replace(/\.$/, "");
  }

  description(): string {
    return this.details["description"];
  }

  url(parsedParams?: any): string {
    const fullUrl: string = [
      this.baseUrl.replace(/\/$/, ""),
      this.path.replace(/^\//, ""),
    ].join("/");

    // Replace path params
    const urlParams: any = this._urlParams();

    if (Object.keys(urlParams).length === 0) {
      return fullUrl;
    }

    const selectedParams = this._selectParams({
      target: urlParams.properties,
      allParams: parsedParams,
    });

    let url: string = fullUrl;
    // TODO: raise an error if a required param is missing.
    for (let param in selectedParams) {
      url = url.replace(`{${param}}`, selectedParams[param]);
    }

    // TODO: add query param
    return url;
  }

  // TODO: accept context data for use in body and url params.
  async sendRequest({ headers, parsedParams, authData }: any) {
    // TODO: handle auth that's not in the headers.
    const auth = this._computeAuth(authData || this.auth);
    const requestHeaders = {
      ...this._requestContentType(),
      ...auth,
      ...(headers || {}),
    };
    const url = this.url(parsedParams);

    const body = this._selectParams({
      target: this._bodyParams()?.properties,
      allParams: parsedParams,
    });

    const requestBody = ["get", "head"].includes(this.httpMethod)
      ? {}
      : { body: JSON.stringify(body) };

    const response = await fetch(url, {
      method: this.httpMethod,
      headers: requestHeaders,
      ...requestBody,
    });
    const responseBody = await response.json();

    return {
      request: {
        url,
        method: this.httpMethod,
        headers: requestHeaders,
        ...requestBody,
      },
      response: {
        headers: response.headers,
        status: response.status,
        body: responseBody,
      },
    };
  }

  toFunction() {
    return {
      name: this.details["operationId"].replaceAll(/\W/g, "_"),
      description: this.summary(),
      parameters: this._allParams(),
    };
  }

  _allParams() {
    const bodyParams: any = this._bodyParams();
    const urlParams: any = this._urlParams();

    const allParams = {
      ...(urlParams?.properties || {}),
      ...(bodyParams?.properties || {}),
    };

    const allRequired = bodyParams?.required || [];

    if (Object.keys(allParams).length) {
      return {
        type: "object",
        required: allRequired,
        properties: allParams,
      };
    } else {
      return EMPTY_ARGUMENT;
    }
  }

  _bodyParams() {
    const schema =
      this.details?.requestBody?.content["application/json"]?.schema;

    if (schema && Object.keys(schema).length) {
      return this._computeBodyParameters(schema);
    } else {
      return null;
    }
  }

  _urlParams() {
    const definedParams: any = this.details?.parameters || [];

    let params: any = {};
    let requiredItems: string[] = [];

    definedParams.forEach((param: any) => {
      // TODO: save the in key (path vs. query)
      // see if openai accept it

      params[param.name] = {
        type: param.schema?.type,
        description: param.description,
      };

      if (param.required) {
        requiredItems.push(param.name);
      }
    });

    if (Object.keys(params).length) {
      return {
        type: "object",
        properties: params,
        required: requiredItems,
      };
    } else {
      return EMPTY_ARGUMENT;
    }
  }

  _selectParams({ target, allParams }: any) {
    if (!target) {
      return {};
    }

    let result: any = {};

    for (let param in allParams) {
      if (target[param] && allParams[param]) {
        result[param] = allParams[param];
      }
    }

    return result;
  }

  _computeBodyParameters(schema: any) {
    let requiredItems: string[] = [];
    const properties: any = {};

    for (let propName in schema.properties) {
      const { required: isRequired, ...remainingProperty } =
        schema.properties[propName];

      properties[propName] = remainingProperty;

      if (isRequired) {
        requiredItems.push(propName);
      }
    }

    if (schema.required) {
      requiredItems = requiredItems.concat(schema.required);
    }

    const requiredSet = new Set(requiredItems);

    return {
      type: "object",
      properties,
      required: Array.from(requiredSet),
    };
  }

  _requestContentType() {
    if (
      !this.details?.requestBody ||
      this.details?.requestBody?.content["application/json"]
    ) {
      return { "Content-Type": "application/json" };
    } else if (this.details?.requestBody) {
      throw new Error(
        'Only "application/json" requestBody type is currently supported.'
      );
    }
  }

  _computeAuth(data: any) {
    if (this.securities?.length === 0) {
      return {};
    }

    const errors: Error[] = [];
    let result: any;

    // Use the first security scheme that produces auth data; `break` stops
    // the scan on success (a `return` inside forEach would not).
    for (const security of this.securities) {
      try {
        result = security.authData(data);
        if (result && Object.keys(result).length) {
          break;
        }
      } catch (error: any) {
        errors.push(error);
      }
    }

    if (result) {
      return result;
    } else if (errors.length) {
      throw errors[0];
    } else {
      return {};
    }
  }
}
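The `{param}` substitution performed in `Operation#url` can be sketched as a standalone function. The base URL, path, and parameter values below are hypothetical examples, not taken from the package:

```typescript
// Joins a base URL and path (normalizing the slash between them), then
// replaces each `{param}` placeholder with its value -- mirroring the
// replace loop in Operation#url above.
function buildUrl(
  baseUrl: string,
  path: string,
  params: Record<string, string>
): string {
  let url = [baseUrl.replace(/\/$/, ""), path.replace(/^\//, "")].join("/");
  for (const param in params) {
    url = url.replace(`{${param}}`, params[param]);
  }
  return url;
}

buildUrl("https://example.com/v1/", "/pets/{petId}", { petId: "42" });
// → "https://example.com/v1/pets/42"
```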


================================================
FILE: core/src/api/security.ts
================================================
export default class Security {
  type: string;
  scheme?: string;
  name?: string;
  inKey?: string;

  constructor(authInput: any) {
    this.type = authInput.type;
    this.scheme = authInput.scheme;
    this.name = authInput.name;
    this.inKey = authInput.in;

    this.validateInput();
  }

  validateInput() {
    if (this.type === "http") {
      if (this.scheme !== "bearer" && this.scheme !== "basic") {
        throw new Error(`Security scheme '${this.scheme}' is not supported.`);
      }
    } else if (this.type === "apiKey") {
      if (!this.name) {
        throw new Error("Security type apiKey requires `name`.");
      }
    } else {
      throw new Error(`Security type '${this.type}' is not supported.`);
    }
  }

  authData(data: any) {
    const val: any = {};

    if (this.type === "http") {
      if (this.scheme === "bearer") {
        if (!data?.token) {
          throw new Error("`token` is required for bearer auth");
        }
        val["Authorization"] = `Bearer ${data.token}`;
      } else if (this.scheme === "basic") {
        if (!data?.username || !data?.password) {
          throw new Error(
            "`username` and `password` are required for basic auth"
          );
        }

        val["Authorization"] = `Basic ${Buffer.from(
          `${data.username}:${data.password}`
        ).toString("base64")}`;
      } else {
        throw new Error(`Security scheme '${this.scheme}' is not supported.`);
      }
    } else if (this.type === "apiKey") {
      if (this.name && (!data || !data[this.name])) {
        throw new Error(`"${this.name}" is required for API key auth`);
      }

      if (this.name && data[this.name]) {
        val[this.name] = data[this.name];
      }
    }

    return val;
  }
}
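The header shapes that `Security#authData` produces for the two `http` schemes can be sketched as standalone helpers. This is a minimal illustration, not the package API; the credentials are made-up examples:

```typescript
// Basic auth: base64-encode "username:password" per RFC 7617.
function basicHeader(username: string, password: string) {
  return {
    Authorization: `Basic ${Buffer.from(`${username}:${password}`).toString(
      "base64"
    )}`,
  };
}

// Bearer auth: pass the token through verbatim.
function bearerHeader(token: string) {
  return { Authorization: `Bearer ${token}` };
}

basicHeader("user", "secret"); // { Authorization: "Basic dXNlcjpzZWNyZXQ=" }
bearerHeader("my-token"); // { Authorization: "Bearer my-token" }
```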


================================================
FILE: core/tsconfig.json
================================================
{
  "include": ["."],
  "exclude": ["dist", "build", "node_modules"],
  "compilerOptions": {
    "lib": ["DOM", "es2020"],
    "moduleResolution": "node",
    "target": "es2020",
    "module": "commonjs",
    "strict": true,
    "sourceMap": true,
    "declaration": true,
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true,
    "paths": {
      "@moduleSrc/*": ["./src/*"]
    }
  }
}


================================================
FILE: package.json
================================================
{
  "private": true,
  "version": "0.6.0",
  "license": "MIT",
  "type": "module",
  "scripts": {
    "build": "turbo run build && cp ./README.md core/README.md",
    "publish-packages": "turbo run build lint test && ./fix-ono.sh && changeset version && changeset publish",
    "dev": "turbo run dev",
    "lint": "turbo run lint",
    "format": "prettier --write \"**/*.{ts,tsx,md}\"",
    "test": "turbo run test",
    "prepare": "husky install"
  },
  "devDependencies": {
    "changesets": "^1.0.2",
    "@turbo/gen": "^1.9.7",
    "@types/jest": "^29.5.3",
    "eslint": "^7.32.0",
    "eslint-config-custom": "*",
    "husky": "^8.0.3",
    "lint-staged": "^13.2.2",
    "prettier": "^2.5.1",
    "turbo": "latest"
  },
  "name": "@api2ai/repo",
  "packageManager": "yarn@1.22.19",
  "workspaces": [
    "core",
    "server",
    "packages/*"
  ],
  "lint-staged": {
    "**/*.{ts,tsx}": [
      "yarn run format"
    ],
    "*.md": "prettier --write"
  },
  "jest": {
    "projects": ["<rootDir>/core/jest.config.js"]
  }
}


================================================
FILE: packages/eslint-config-custom/index.js
================================================
module.exports = {
  extends: ["next", "turbo", "prettier"],
  rules: {
    "@next/next/no-html-link-for-pages": "off",
  },
  parserOptions: {
    babelOptions: {
      presets: [require.resolve("next/babel")],
    },
  },
};


================================================
FILE: packages/eslint-config-custom/package.json
================================================
{
  "name": "eslint-config-custom",
  "version": "0.0.0",
  "main": "index.js",
  "license": "MIT",
  "dependencies": {
    "eslint-config-next": "^13.4.1",
    "eslint-config-prettier": "^8.3.0",
    "eslint-plugin-react": "7.28.0",
    "eslint-config-turbo": "^1.9.3"
  },
  "publishConfig": {
    "access": "public"
  }
}


================================================
FILE: packages/tsconfig/base.json
================================================
{
  "$schema": "https://json.schemastore.org/tsconfig",
  "display": "Default",
  "compilerOptions": {
    "composite": false,
    "declaration": true,
    "declarationMap": true,
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "inlineSources": false,
    "isolatedModules": true,
    "moduleResolution": "node",
    "noUnusedLocals": false,
    "noUnusedParameters": false,
    "preserveWatchOutput": true,
    "skipLibCheck": true,
    "strict": true,
    "baseUrl": ".",
    "paths": {
      "@core/*": ["../../core/src/*"]
    }
  },
  "exclude": ["node_modules"]
}


================================================
FILE: packages/tsconfig/nextjs.json
================================================
{
  "$schema": "https://json.schemastore.org/tsconfig",
  "display": "Next.js",
  "extends": "./base.json",
  "compilerOptions": {
    "plugins": [{ "name": "next" }],
    "allowJs": true,
    "declaration": false,
    "declarationMap": false,
    "incremental": true,
    "jsx": "preserve",
    "lib": ["dom", "dom.iterable", "esnext"],
    "module": "esnext",
    "noEmit": true,
    "resolveJsonModule": true,
    "strict": false,
    "target": "es5"
  },
  "include": ["src", "next-env.d.ts"],
  "exclude": ["node_modules"]
}


================================================
FILE: packages/tsconfig/package.json
================================================
{
  "name": "tsconfig",
  "version": "0.0.0",
  "private": true,
  "license": "MIT",
  "publishConfig": {
    "access": "public"
  }
}


================================================
FILE: packages/tsconfig/react-library.json
================================================
{
  "$schema": "https://json.schemastore.org/tsconfig",
  "display": "React Library",
  "extends": "./base.json",
  "compilerOptions": {
    "jsx": "react-jsx",
    "lib": ["ES2015", "DOM"],
    "module": "ESNext",
    "target": "es6"
  }
}


================================================
FILE: server/README.md
================================================
## Getting Started

First, run the development server:

```bash
yarn dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.

[API routes](https://nextjs.org/docs/api-routes/introduction) can be accessed on [http://localhost:3000/api/run](http://localhost:3000/api/run). This endpoint can be edited in `pages/api/run.ts`.

The `pages/api` directory is mapped to `/api/*`. Files in this directory are treated as [API routes](https://nextjs.org/docs/api-routes/introduction) instead of React pages.

## Learn More

To learn more about Next.js, take a look at the following resources:

- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn/foundations/about-nextjs) - an interactive Next.js tutorial.

You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js/) - your feedback and contributions are welcome!

## Deploy on Vercel

The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_source=github.com&utm_medium=referral&utm_campaign=turborepo-readme) from the creators of Next.js.

Check out our [Next.js deployment documentation](https://nextjs.org/docs/deployment) for more details.


================================================
FILE: server/app/layout.tsx
================================================
import "bootstrap/dist/css/bootstrap.css";

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        <div className="container">{children}</div>
      </body>
    </html>
  );
}


================================================
FILE: server/app/page.tsx
================================================
"use client";

import { useState, useRef } from "react";
const URL = "/api/run";

const renderContent = (data) => {
  if (data.selectedOperation === "createImage") {
    const imageUrl = data.response.body.data[0].url;

    return (
      <div>
        <p>
          <strong>
            API used: {data.request.method.toUpperCase()} {data.request.url}
          </strong>
        </p>
        <a href={imageUrl} target="_blank">
          <img src={imageUrl} height={300} />
        </a>
      </div>
    );
  } else if (data.request && data.response) {
    return (
      <div>
        <p>
          <strong>
            API used: {data.request.method.toUpperCase()} {data.request.url}
          </strong>
        </p>
        <pre>
          <code>{JSON.stringify(data.response, null, 2)}</code>
        </pre>
      </div>
    );
  } else {
    return (
      <div>
        <pre>
          <code>{JSON.stringify(data, null, 2)}</code>
        </pre>
      </div>
    );
  }
};

export default function Page() {
  const [state, setState] = useState({
    messages: [
      { id: `${Date.now()}`, role: "AI", content: <div>How can I help?</div> },
    ],
    prompt: "",
  });
  const stateRef = useRef(state);
  stateRef.current = state;

  const postToAi = async ({ userPrompt, messageId }) => {
    const resp = await fetch(URL, {
      headers: {
        Accept: "application/json",
        "Content-Type": "application/json",
      },
      method: "POST",
      body: JSON.stringify({ userPrompt }),
    });
    const data = await resp.json();

    stateRef.current.messages = stateRef.current.messages.map((message) => {
      if (message.id === messageId) {
        message.content = renderContent(data);
      }

      return message;
    });

    setState({
      prompt: stateRef.current.prompt,
      messages: stateRef.current.messages,
    });
  };

  const sendMessage = (event) => {
    event.preventDefault();

    if (!state.prompt?.length) {
      return;
    }

    const currentTime = Date.now();
    const userMessageId = `${currentTime}0`;
    const aiMessageId = `${currentTime}1`;

    postToAi({ userPrompt: state.prompt, messageId: aiMessageId });

    stateRef.current = {
      prompt: "",
      messages: [
        ...stateRef.current.messages,
        { id: userMessageId, role: "User", content: <div>{state.prompt}</div> },
        {
          id: aiMessageId,
          role: "AI",
          content: (
            <div>
              Processing your message{" "}
              <span
                className="spinner-border spinner-border-sm"
                role="status"
              ></span>
            </div>
          ),
        },
      ],
    };
    setState(stateRef.current);
  };

  return (
    <div className="row py-lg-5">
      <h2>
        <a href="https://github.com/mquan/api2ai" target="_blank">
          api2ai
        </a>{" "}
        demo
      </h2>
      <div
        id="chat-log-container"
        className="mt-5 mb-5"
        style={{ height: 600, overflowY: "scroll" }}
      >
        <table id="chat-log" className="table table-borderless table-striped">
          <tbody id="chat-log-body">
            {state.messages.map((message) => {
              return (
                <tr key={message.id}>
                  <th style={{ width: 50 }}>{message.role}</th>
                  <td>{message.content}</td>
                </tr>
              );
            })}
          </tbody>
        </table>
      </div>

      <form id="chat-form" action="/api/run" method="post">
        <div className="row">
          <div className="col-8">
            <textarea
              value={state.prompt}
              className="form-control"
              id="chat-message"
              placeholder="Input message..."
              onChange={(e) => {
                stateRef.current.prompt = e.target.value;
                setState({
                  prompt: stateRef.current.prompt,
                  messages: stateRef.current.messages,
                });
              }}
            ></textarea>
          </div>
          <div className="col-2">
            <button
              type="submit"
              onClick={sendMessage}
              className="btn btn-primary mb-3"
            >
              Send
            </button>
          </div>
        </div>
      </form>
    </div>
  );
}


================================================
FILE: server/next-env.d.ts
================================================
/// <reference types="next" />
/// <reference types="next/image-types/global" />
/// <reference types="next/navigation-types/compat/navigation" />

// NOTE: This file should not be edited
// see https://nextjs.org/docs/basic-features/typescript for more information.


================================================
FILE: server/next.config.js
================================================
module.exports = {
  reactStrictMode: true,
  transpilePackages: ["core"],
};


================================================
FILE: server/oases/open-ai.yaml
================================================
openapi: 3.0.0
info:
  title: OpenAI API
  description: The OpenAI REST API. Please see https://platform.openai.com/docs/api-reference for more details.
  version: "2.0.0"
  termsOfService: https://openai.com/policies/terms-of-use
  contact:
    name: OpenAI Support
    url: https://help.openai.com/
  license:
    name: MIT
    url: https://github.com/openai/openai-openapi/blob/master/LICENSE
servers:
  - url: https://api.openai.com/v1
tags:
  - name: Assistants
    description: Build Assistants that can call models and use tools.
  - name: Audio
    description: Learn how to turn audio into text or text into audio.
  - name: Chat
    description: Given a list of messages comprising a conversation, the model will return a response.
  - name: Completions
    description: Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
  - name: Embeddings
    description: Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
  - name: Fine-tuning
    description: Manage fine-tuning jobs to tailor a model to your specific training data.
  - name: Files
    description: Files are used to upload documents that can be used with features like Assistants and Fine-tuning.
  - name: Images
    description: Given a prompt and/or an input image, the model will generate a new image.
  - name: Models
    description: List and describe the various models available in the API.
  - name: Moderations
    description: Given an input text, outputs if the model classifies it as violating OpenAI's content policy.
  - name: Fine-tunes
    description: Manage legacy fine-tuning jobs to tailor a model to your specific training data.
  - name: Edits
    description: Given a prompt and an instruction, the model will return an edited version of the prompt.
paths:
  # Note: When adding an endpoint, make sure you also add it in the `groups` section, in the end of this file,
  # under the appropriate group
  /chat/completions:
    post:
      operationId: createChatCompletion
      tags:
        - Chat
      summary: Creates a model response for the given chat conversation.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateChatCompletionRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/CreateChatCompletionResponse"

      x-oaiMeta:
        name: Create chat completion
        group: chat
        returns: |
          Returns a [chat completion](/docs/api-reference/chat/object) object, or a streamed sequence of [chat completion chunk](/docs/api-reference/chat/streaming) objects if the request is streamed.
        path: create
        examples:
          - title: Default
            request:
              curl: |
                curl https://api.openai.com/v1/chat/completions \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -d '{
                    "model": "VAR_model_id",
                    "messages": [
                      {
                        "role": "system",
                        "content": "You are a helpful assistant."
                      },
                      {
                        "role": "user",
                        "content": "Hello!"
                      }
                    ]
                  }'
              python: |
                from openai import OpenAI
                client = OpenAI()

                completion = client.chat.completions.create(
                  model="VAR_model_id",
                  messages=[
                    {"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": "Hello!"}
                  ]
                )

                print(completion.choices[0].message)
              node.js: |-
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const completion = await openai.chat.completions.create({
                    messages: [{ role: "system", content: "You are a helpful assistant." }],
                    model: "VAR_model_id",
                  });

                  console.log(completion.choices[0]);
                }

                main();
            response: &chat_completion_example |
              {
                "id": "chatcmpl-123",
                "object": "chat.completion",
                "created": 1677652288,
                "model": "gpt-3.5-turbo-0613",
                "system_fingerprint": "fp_44709d6fcb",
                "choices": [{
                  "index": 0,
                  "message": {
                    "role": "assistant",
                    "content": "\n\nHello there, how may I assist you today?",
                  },
                  "finish_reason": "stop"
                }],
                "usage": {
                  "prompt_tokens": 9,
                  "completion_tokens": 12,
                  "total_tokens": 21
                }
              }
          - title: Image input
            request:
              curl: |
                curl https://api.openai.com/v1/chat/completions \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -d '{
                    "model": "gpt-4-vision-preview",
                    "messages": [
                      {
                        "role": "user",
                        "content": [
                          {
                            "type": "text",
                            "text": "What’s in this image?"
                          },
                          {
                            "type": "image_url",
                            "image_url": {
                              "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                            }
                          }
                        ]
                      }
                    ],
                    "max_tokens": 300
                  }'
              python: |
                from openai import OpenAI

                client = OpenAI()

                response = client.chat.completions.create(
                    model="gpt-4-vision-preview",
                    messages=[
                        {
                            "role": "user",
                            "content": [
                                {"type": "text", "text": "What’s in this image?"},
                                {
                                    "type": "image_url",
                                    "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                                },
                            ],
                        }
                    ],
                    max_tokens=300,
                )

                print(response.choices[0])
              node.js: |-
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const response = await openai.chat.completions.create({
                    model: "gpt-4-vision-preview",
                    messages: [
                      {
                        role: "user",
                        content: [
                          { type: "text", text: "What’s in this image?" },
                          {
                            type: "image_url",
                            image_url: {
                              url: "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                            },
                          },
                        ],
                      },
                    ],
                    max_tokens: 300,
                  });
                  console.log(response.choices[0]);
                }
                main();
            response: &chat_completion_image_example |
              {
                "id": "chatcmpl-123",
                "object": "chat.completion",
                "created": 1677652288,
                "model": "gpt-4-vision-preview",
                "system_fingerprint": "fp_44709d6fcb",
                "choices": [{
                  "index": 0,
                  "message": {
                    "role": "assistant",
                    "content": "This image shows a wooden boardwalk extending through a lush green marshland."
                  },
                  "finish_reason": "stop"
                }],
                "usage": {
                  "prompt_tokens": 9,
                  "completion_tokens": 12,
                  "total_tokens": 21
                }
              }
          - title: Streaming
            request:
              curl: |
                curl https://api.openai.com/v1/chat/completions \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -d '{
                    "model": "VAR_model_id",
                    "messages": [
                      {
                        "role": "system",
                        "content": "You are a helpful assistant."
                      },
                      {
                        "role": "user",
                        "content": "Hello!"
                      }
                    ],
                    "stream": true
                  }'
              python: |
                from openai import OpenAI
                client = OpenAI()

                completion = client.chat.completions.create(
                  model="VAR_model_id",
                  messages=[
                    {"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": "Hello!"}
                  ],
                  stream=True
                )

                for chunk in completion:
                  print(chunk.choices[0].delta)

              node.js: |-
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const completion = await openai.chat.completions.create({
                    model: "VAR_model_id",
                    messages: [
                      {"role": "system", "content": "You are a helpful assistant."},
                      {"role": "user", "content": "Hello!"}
                    ],
                    stream: true,
                  });

                  for await (const chunk of completion) {
                    console.log(chunk.choices[0].delta.content);
                  }
                }

                main();
            response: &chat_completion_chunk_example |
              {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}

              {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

              {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}

              ....

              {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"content":" today"},"finish_reason":null}]}

              {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"content":"?"},"finish_reason":null}]}

              {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}
          - title: Function calling
            request:
              curl: |
                curl https://api.openai.com/v1/chat/completions \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -d '{
                  "model": "gpt-3.5-turbo",
                  "messages": [
                    {
                      "role": "user",
                      "content": "What is the weather like in Boston?"
                    }
                  ],
                  "functions": [
                    {
                      "name": "get_current_weather",
                      "description": "Get the current weather in a given location",
                      "parameters": {
                        "type": "object",
                        "properties": {
                          "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA"
                          },
                          "unit": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"]
                          }
                        },
                        "required": ["location"]
                      }
                    }
                  ],
                  "function_call": "auto"
                }'
              python: |
                from openai import OpenAI
                client = OpenAI()

                functions = [
                  {
                    "name": "get_current_weather",
                    "description": "Get the current weather in a given location",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "location": {
                                "type": "string",
                                "description": "The city and state, e.g. San Francisco, CA",
                            },
                            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                        },
                        "required": ["location"],
                    },
                  }
                ]
                messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]
                completion = client.chat.completions.create(
                  model="gpt-3.5-turbo",
                  messages=messages,
                  functions=functions,
                  function_call="auto"
                )

                print(completion)
              node.js: |-
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const messages = [{"role": "user", "content": "What's the weather like in Boston today?"}];
                  const functions = [
                      {
                          "name": "get_current_weather",
                          "description": "Get the current weather in a given location",
                          "parameters": {
                              "type": "object",
                              "properties": {
                                  "location": {
                                      "type": "string",
                                      "description": "The city and state, e.g. San Francisco, CA",
                                  },
                                  "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                              },
                              "required": ["location"],
                          },
                      }
                  ];

                  const response = await openai.chat.completions.create({
                      model: "gpt-3.5-turbo",
                      messages: messages,
                      functions: functions,
                      function_call: "auto",  // auto is default, but we'll be explicit
                  });

                  console.log(response);
                }

                main();
            response: &chat_completion_function_example |
              {
                "choices": [
                  {
                    "finish_reason": "function_call",
                    "index": 0,
                    "message": {
                      "content": null,
                      "function_call": {
                        "arguments": "{\n  \"location\": \"Boston, MA\"\n}",
                        "name": "get_current_weather"
                      },
                      "role": "assistant"
                    }
                  }
                ],
                "created": 1694028367,
                "model": "gpt-3.5-turbo-0613",
                "system_fingerprint": "fp_44709d6fcb",
                "object": "chat.completion",
                "usage": {
                  "completion_tokens": 18,
                  "prompt_tokens": 82,
                  "total_tokens": 100
                }
              }
  /completions:
    post:
      operationId: createCompletion
      tags:
        - Completions
      summary: Creates a completion for the provided prompt and parameters.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateCompletionRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/CreateCompletionResponse"
      x-oaiMeta:
        name: Create completion
        returns: |
          Returns a [completion](/docs/api-reference/completions/object) object, or a sequence of completion objects if the request is streamed.
        legacy: true
        examples:
          - title: No streaming
            request:
              curl: |
                curl https://api.openai.com/v1/completions \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -d '{
                    "model": "VAR_model_id",
                    "prompt": "Say this is a test",
                    "max_tokens": 7,
                    "temperature": 0
                  }'
              python: |
                from openai import OpenAI
                client = OpenAI()

                client.completions.create(
                  model="VAR_model_id",
                  prompt="Say this is a test",
                  max_tokens=7,
                  temperature=0
                )
              node.js: |-
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const completion = await openai.completions.create({
                    model: "VAR_model_id",
                    prompt: "Say this is a test.",
                    max_tokens: 7,
                    temperature: 0,
                  });

                  console.log(completion);
                }
                main();
            response: |
              {
                "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
                "object": "text_completion",
                "created": 1589478378,
                "model": "VAR_model_id",
                "system_fingerprint": "fp_44709d6fcb",
                "choices": [
                  {
                    "text": "\n\nThis is indeed a test",
                    "index": 0,
                    "logprobs": null,
                    "finish_reason": "length"
                  }
                ],
                "usage": {
                  "prompt_tokens": 5,
                  "completion_tokens": 7,
                  "total_tokens": 12
                }
              }
          - title: Streaming
            request:
              curl: |
                curl https://api.openai.com/v1/completions \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -d '{
                    "model": "VAR_model_id",
                    "prompt": "Say this is a test",
                    "max_tokens": 7,
                    "temperature": 0,
                    "stream": true
                  }'
              python: |
                from openai import OpenAI
                client = OpenAI()

                for chunk in client.completions.create(
                  model="VAR_model_id",
                  prompt="Say this is a test",
                  max_tokens=7,
                  temperature=0,
                  stream=True
                ):
                  print(chunk.choices[0].text)
              node.js: |-
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const stream = await openai.completions.create({
                    model: "VAR_model_id",
                    prompt: "Say this is a test.",
                    stream: true,
                  });

                  for await (const chunk of stream) {
                    console.log(chunk.choices[0].text)
                  }
                }
                main();
            response: |
              {
                "id": "cmpl-7iA7iJjj8V2zOkCGvWF2hAkDWBQZe",
                "object": "text_completion",
                "created": 1690759702,
                "choices": [
                  {
                    "text": "This",
                    "index": 0,
                    "logprobs": null,
                    "finish_reason": null
                  }
                ],
                "model": "gpt-3.5-turbo-instruct",
                "system_fingerprint": "fp_44709d6fcb"
              }
  /edits:
    post:
      operationId: createEdit
      deprecated: true
      tags:
        - Edits
      summary: Creates a new edit for the provided input, instruction, and parameters.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateEditRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/CreateEditResponse"
      x-oaiMeta:
        name: Create edit
        returns: |
          Returns an [edit](/docs/api-reference/edits/object) object.
        group: edits
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/edits \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -d '{
                  "model": "VAR_model_id",
                  "input": "What day of the wek is it?",
                  "instruction": "Fix the spelling mistakes"
                }'
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.edits.create(
                model="VAR_model_id",
                input="What day of the wek is it?",
                instruction="Fix the spelling mistakes"
              )
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const edit = await openai.edits.create({
                  model: "VAR_model_id",
                  input: "What day of the wek is it?",
                  instruction: "Fix the spelling mistakes.",
                });

                console.log(edit);
              }

              main();
          response: &edit_example |
            {
              "object": "edit",
              "created": 1589478378,
              "choices": [
                {
                  "text": "What day of the week is it?",
                  "index": 0
                }
              ],
              "usage": {
                "prompt_tokens": 25,
                "completion_tokens": 32,
                "total_tokens": 57
              }
            }

  /images/generations:
    post:
      operationId: createImage
      tags:
        - Images
      summary: Creates an image given a prompt.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateImageRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ImagesResponse"
      x-oaiMeta:
        name: Create image
        returns: Returns a list of [image](/docs/api-reference/images/object) objects.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/images/generations \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -d '{
                  "model": "dall-e-3",
                  "prompt": "A cute baby sea otter",
                  "n": 1,
                  "size": "1024x1024"
                }'
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.images.generate(
                model="dall-e-3",
                prompt="A cute baby sea otter",
                n=1,
                size="1024x1024"
              )
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const image = await openai.images.generate({ model: "dall-e-3", prompt: "A cute baby sea otter" });

                console.log(image.data);
              }
              main();
          response: |
            {
              "created": 1589478378,
              "data": [
                {
                  "url": "https://..."
                },
                {
                  "url": "https://..."
                }
              ]
            }
  /images/edits:
    post:
      operationId: createImageEdit
      tags:
        - Images
      summary: Creates an edited or extended image given an original image and a prompt.
      requestBody:
        required: true
        content:
          multipart/form-data:
            schema:
              $ref: "#/components/schemas/CreateImageEditRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ImagesResponse"
      x-oaiMeta:
        name: Create image edit
        returns: Returns a list of [image](/docs/api-reference/images/object) objects.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/images/edits \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -F image="@otter.png" \
                -F mask="@mask.png" \
                -F prompt="A cute baby sea otter wearing a beret" \
                -F n=2 \
                -F size="1024x1024"
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.images.edit(
                image=open("otter.png", "rb"),
                mask=open("mask.png", "rb"),
                prompt="A cute baby sea otter wearing a beret",
                n=2,
                size="1024x1024"
              )
            node.js: |-
              import fs from "fs";
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const image = await openai.images.edit({
                  image: fs.createReadStream("otter.png"),
                  mask: fs.createReadStream("mask.png"),
                  prompt: "A cute baby sea otter wearing a beret",
                });

                console.log(image.data);
              }
              main();
          response: |
            {
              "created": 1589478378,
              "data": [
                {
                  "url": "https://..."
                },
                {
                  "url": "https://..."
                }
              ]
            }
  /images/variations:
    post:
      operationId: createImageVariation
      tags:
        - Images
      summary: Creates a variation of a given image.
      requestBody:
        required: true
        content:
          multipart/form-data:
            schema:
              $ref: "#/components/schemas/CreateImageVariationRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ImagesResponse"
      x-oaiMeta:
        name: Create image variation
        returns: Returns a list of [image](/docs/api-reference/images/object) objects.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/images/variations \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -F image="@otter.png" \
                -F n=2 \
                -F size="1024x1024"
            python: |
              from openai import OpenAI
              client = OpenAI()

              response = client.images.create_variation(
                image=open("otter.png", "rb"),
                n=2,
                size="1024x1024"
              )
            node.js: |-
              import fs from "fs";
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const image = await openai.images.createVariation({
                  image: fs.createReadStream("otter.png"),
                });

                console.log(image.data);
              }
              main();
          response: |
            {
              "created": 1589478378,
              "data": [
                {
                  "url": "https://..."
                },
                {
                  "url": "https://..."
                }
              ]
            }

  /embeddings:
    post:
      operationId: createEmbedding
      tags:
        - Embeddings
      summary: Creates an embedding vector representing the input text.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateEmbeddingRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/CreateEmbeddingResponse"
      x-oaiMeta:
        name: Create embeddings
        returns: A list of [embedding](/docs/api-reference/embeddings/object) objects.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/embeddings \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "Content-Type: application/json" \
                -d '{
                  "input": "The food was delicious and the waiter...",
                  "model": "text-embedding-ada-002",
                  "encoding_format": "float"
                }'
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.embeddings.create(
                model="text-embedding-ada-002",
                input="The food was delicious and the waiter...",
                encoding_format="float"
              )
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const embedding = await openai.embeddings.create({
                  model: "text-embedding-ada-002",
                  input: "The quick brown fox jumped over the lazy dog",
                  encoding_format: "float",
                });

                console.log(embedding);
              }

              main();
          response: |
            {
              "object": "list",
              "data": [
                {
                  "object": "embedding",
                  "embedding": [
                    0.0023064255,
                    -0.009327292,
                    .... (1536 floats total for ada-002)
                    -0.0028842222
                  ],
                  "index": 0
                }
              ],
              "model": "text-embedding-ada-002",
              "usage": {
                "prompt_tokens": 8,
                "total_tokens": 8
              }
            }

  /audio/speech:
    post:
      operationId: createSpeech
      tags:
        - Audio
      summary: Generates audio from the input text.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateSpeechRequest"
      responses:
        "200":
          description: OK
          headers:
            Transfer-Encoding:
              schema:
                type: string
              description: chunked
          content:
            application/octet-stream:
              schema:
                type: string
                format: binary
      x-oaiMeta:
        name: Create speech
        returns: The audio file content.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/audio/speech \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "Content-Type: application/json" \
                -d '{
                  "model": "tts-1",
                  "input": "The quick brown fox jumped over the lazy dog.",
                  "voice": "alloy"
                }' \
                --output speech.mp3
            python: |
              from pathlib import Path
              import openai

              speech_file_path = Path(__file__).parent / "speech.mp3"
              response = openai.audio.speech.create(
                model="tts-1",
                voice="alloy",
                input="The quick brown fox jumped over the lazy dog."
              )
              response.stream_to_file(speech_file_path)
            node: |
              import fs from "fs";
              import path from "path";
              import OpenAI from "openai";

              const openai = new OpenAI();

              const speechFile = path.resolve("./speech.mp3");

              async function main() {
                const mp3 = await openai.audio.speech.create({
                  model: "tts-1",
                  voice: "alloy",
                  input: "Today is a wonderful day to build something people love!",
                });
                console.log(speechFile);
                const buffer = Buffer.from(await mp3.arrayBuffer());
                await fs.promises.writeFile(speechFile, buffer);
              }
              main();
  /audio/transcriptions:
    post:
      operationId: createTranscription
      tags:
        - Audio
      summary: Transcribes audio into the input language.
      requestBody:
        required: true
        content:
          multipart/form-data:
            schema:
              $ref: "#/components/schemas/CreateTranscriptionRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/CreateTranscriptionResponse"
      x-oaiMeta:
        name: Create transcription
        returns: The transcribed text.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/audio/transcriptions \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "Content-Type: multipart/form-data" \
                -F file="@/path/to/file/audio.mp3" \
                -F model="whisper-1"
            python: |
              from openai import OpenAI
              client = OpenAI()

              audio_file = open("speech.mp3", "rb")
              transcript = client.audio.transcriptions.create(
                model="whisper-1", 
                file=audio_file
              )
            node: |
              import fs from "fs";
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const transcription = await openai.audio.transcriptions.create({
                  file: fs.createReadStream("audio.mp3"),
                  model: "whisper-1",
                });

                console.log(transcription.text);
              }
              main();
          response: |
            {
              "text": "Imagine the wildest idea that you've ever had, and you're curious about how it might scale to something that's a 100, a 1,000 times bigger. This is a place where you can get to do that."
            }
  /audio/translations:
    post:
      operationId: createTranslation
      tags:
        - Audio
      summary: Translates audio into English.
      requestBody:
        required: true
        content:
          multipart/form-data:
            schema:
              $ref: "#/components/schemas/CreateTranslationRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/CreateTranslationResponse"
      x-oaiMeta:
        name: Create translation
        returns: The translated text.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/audio/translations \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "Content-Type: multipart/form-data" \
                -F file="@/path/to/file/german.m4a" \
                -F model="whisper-1"
            python: |
              from openai import OpenAI
              client = OpenAI()

              audio_file = open("speech.mp3", "rb")
              transcript = client.audio.translations.create(
                model="whisper-1", 
                file=audio_file
              )
            node: |
              import fs from "fs";
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const translation = await openai.audio.translations.create({
                  file: fs.createReadStream("german.m4a"),
                  model: "whisper-1",
                });

                console.log(translation.text);
              }
              main();
          response: |
            {
              "text": "Hello, my name is Wolfgang and I come from Germany. Where are you heading today?"
            }

  /files:
    get:
      operationId: listFiles
      tags:
        - Files
      summary: Returns a list of files that belong to the user's organization.
      parameters:
        - in: query
          name: purpose
          required: false
          schema:
            type: string
          description: Only return files with the given purpose.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ListFilesResponse"
      x-oaiMeta:
        name: List files
        returns: A list of [File](/docs/api-reference/files/object) objects.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/files \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.files.list()
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const list = await openai.files.list();

                for await (const file of list) {
                  console.log(file);
                }
              }

              main();
          response: |
            {
              "data": [
                {
                  "id": "file-abc123",
                  "object": "file",
                  "bytes": 175,
                  "created_at": 1613677385,
                  "filename": "salesOverview.pdf",
                  "purpose": "assistants"
                },
                {
                  "id": "file-abc123",
                  "object": "file",
                  "bytes": 140,
                  "created_at": 1613779121,
                  "filename": "puppy.jsonl",
                  "purpose": "fine-tune"
                }
              ],
              "object": "list"
            }
    post:
      operationId: createFile
      tags:
        - Files
      summary: |
        Upload a file that can be used across various endpoints/features. The size of all the files uploaded by one organization can be up to 100 GB.

        The size of individual files can be a maximum of 512 MB. See the [Assistants Tools guide](/docs/assistants/tools) to learn more about the types of files supported. The Fine-tuning API only supports `.jsonl` files.

        Please [contact us](https://help.openai.com/) if you need to increase these storage limits.
      requestBody:
        required: true
        content:
          multipart/form-data:
            schema:
              $ref: "#/components/schemas/CreateFileRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/OpenAIFile"
      x-oaiMeta:
        name: Upload file
        returns: The uploaded [File](/docs/api-reference/files/object) object.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/files \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -F purpose="fine-tune" \
                -F file="@mydata.jsonl"
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.files.create(
                file=open("mydata.jsonl", "rb"),
                purpose="fine-tune"
              )
            node.js: |-
              import fs from "fs";
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const file = await openai.files.create({
                  file: fs.createReadStream("mydata.jsonl"),
                  purpose: "fine-tune",
                });

                console.log(file);
              }

              main();
          response: |
            {
              "id": "file-BK7bzQj3FfZFXr7DbL6xJwfo",
              "object": "file",
              "bytes": 120000,
              "created_at": 1677610602,
              "filename": "mydata.jsonl",
              "purpose": "fine-tune"
            }
  /files/{file_id}:
    delete:
      operationId: deleteFile
      tags:
        - Files
      summary: Delete a file.
      parameters:
        - in: path
          name: file_id
          required: true
          schema:
            type: string
          description: The ID of the file to use for this request.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/DeleteFileResponse"
      x-oaiMeta:
        name: Delete file
        returns: Deletion status.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/files/file-abc123 \
                -X DELETE \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.files.delete("file-abc123")
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const file = await openai.files.del("file-abc123");

                console.log(file);
              }

              main();
          response: |
            {
              "id": "file-abc123",
              "object": "file",
              "deleted": true
            }
    get:
      operationId: retrieveFile
      tags:
        - Files
      summary: Returns information about a specific file.
      parameters:
        - in: path
          name: file_id
          required: true
          schema:
            type: string
          description: The ID of the file to use for this request.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/OpenAIFile"
      x-oaiMeta:
        name: Retrieve file
        returns: The [File](/docs/api-reference/files/object) object matching the specified ID.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/files/file-BK7bzQj3FfZFXr7DbL6xJwfo \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.files.retrieve("file-BK7bzQj3FfZFXr7DbL6xJwfo")
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const file = await openai.files.retrieve("file-BK7bzQj3FfZFXr7DbL6xJwfo");

                console.log(file);
              }

              main();
          response: |
            {
              "id": "file-BK7bzQj3FfZFXr7DbL6xJwfo",
              "object": "file",
              "bytes": 120000,
              "created_at": 1677610602,
              "filename": "mydata.jsonl",
              "purpose": "fine-tune"
            }
  /files/{file_id}/content:
    get:
      operationId: downloadFile
      tags:
        - Files
      summary: Returns the contents of the specified file.
      parameters:
        - in: path
          name: file_id
          required: true
          schema:
            type: string
          description: The ID of the file to use for this request.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                type: string
      x-oaiMeta:
        name: Retrieve file content
        returns: The file content.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/files/file-BK7bzQj3FfZFXr7DbL6xJwfo/content \
                -H "Authorization: Bearer $OPENAI_API_KEY" > file.jsonl
            python: |
              from openai import OpenAI
              client = OpenAI()

              content = client.files.retrieve_content("file-BK7bzQj3FfZFXr7DbL6xJwfo")
            node.js: |
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const file = await openai.files.retrieveContent("file-BK7bzQj3FfZFXr7DbL6xJwfo");

                console.log(file);
              }

              main();

  /fine_tuning/jobs:
    post:
      operationId: createFineTuningJob
      tags:
        - Fine-tuning
      summary: |
        Creates a job that fine-tunes a specified model from a given dataset.

        Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.

        [Learn more about fine-tuning](/docs/guides/fine-tuning)
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateFineTuningJobRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/FineTuningJob"
      x-oaiMeta:
        name: Create fine-tuning job
        returns: A [fine-tuning.job](/docs/api-reference/fine-tuning/object) object.
        examples:
          - title: No hyperparameters
            request:
              curl: |
                curl https://api.openai.com/v1/fine_tuning/jobs \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -d '{
                    "training_file": "file-abc123",
                    "model": "gpt-3.5-turbo"
                  }'
              python: |
                from openai import OpenAI
                client = OpenAI()

                client.fine_tuning.jobs.create(
                  training_file="file-abc123", 
                  model="gpt-3.5-turbo"
                )
              node.js: |
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const fineTune = await openai.fineTuning.jobs.create({
                    training_file: "file-abc123"
                  });

                  console.log(fineTune);
                }

                main();
            response: |
              {
                "object": "fine_tuning.job",
                "id": "ftjob-abc123",
                "model": "gpt-3.5-turbo-0613",
                "created_at": 1614807352,
                "fine_tuned_model": null,
                "organization_id": "org-123",
                "result_files": [],
                "status": "queued",
                "validation_file": null,
                "training_file": "file-abc123"
              }
          - title: Hyperparameters
            request:
              curl: |
                curl https://api.openai.com/v1/fine_tuning/jobs \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -d '{
                    "training_file": "file-abc123",
                    "model": "gpt-3.5-turbo",
                    "hyperparameters": {
                      "n_epochs": 2
                    }
                  }'
              python: |
                from openai import OpenAI
                client = OpenAI()

                client.fine_tuning.jobs.create(
                  training_file="file-abc123", 
                  model="gpt-3.5-turbo", 
                  hyperparameters={
                    "n_epochs":2
                  }
                )
              node.js: |
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const fineTune = await openai.fineTuning.jobs.create({
                    training_file: "file-abc123",
                    model: "gpt-3.5-turbo",
                    hyperparameters: { n_epochs: 2 }
                  });

                  console.log(fineTune);
                }

                main();
            response: |
              {
                "object": "fine_tuning.job",
                "id": "ftjob-abc123",
                "model": "gpt-3.5-turbo-0613",
                "created_at": 1614807352,
                "fine_tuned_model": null,
                "organization_id": "org-123",
                "result_files": [],
                "status": "queued",
                "validation_file": null,
                "training_file": "file-abc123",
                "hyperparameters": {"n_epochs": 2}
              }
          - title: Validation file
            request:
              curl: |
                curl https://api.openai.com/v1/fine_tuning/jobs \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -d '{
                    "training_file": "file-abc123",
                    "validation_file": "file-abc123",
                    "model": "gpt-3.5-turbo"
                  }'
              python: |
                from openai import OpenAI
                client = OpenAI()

                client.fine_tuning.jobs.create(
                  training_file="file-abc123", 
                  validation_file="file-def456", 
                  model="gpt-3.5-turbo"
                )
              node.js: |
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const fineTune = await openai.fineTuning.jobs.create({
                    training_file: "file-abc123",
                    validation_file: "file-abc123"
                  });

                  console.log(fineTune);
                }

                main();
            response: |
              {
                "object": "fine_tuning.job",
                "id": "ftjob-abc123",
                "model": "gpt-3.5-turbo-0613",
                "created_at": 1614807352,
                "fine_tuned_model": null,
                "organization_id": "org-123",
                "result_files": [],
                "status": "queued",
                "validation_file": "file-abc123",
                "training_file": "file-abc123"
              }
    get:
      operationId: listPaginatedFineTuningJobs
      tags:
        - Fine-tuning
      summary: |
        List your organization's fine-tuning jobs
      parameters:
        - name: after
          in: query
          description: Identifier for the last job from the previous pagination request.
          required: false
          schema:
            type: string
        - name: limit
          in: query
          description: Number of fine-tuning jobs to retrieve.
          required: false
          schema:
            type: integer
            default: 20
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ListPaginatedFineTuningJobsResponse"
      x-oaiMeta:
        name: List fine-tuning jobs
        returns: A list of paginated [fine-tuning job](/docs/api-reference/fine-tuning/object) objects.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/fine_tuning/jobs?limit=2 \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.fine_tuning.jobs.list()
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const list = await openai.fineTuning.jobs.list();

                for await (const fineTune of list) {
                  console.log(fineTune);
                }
              }

              main();
          response: |
            {
              "object": "list",
              "data": [
                {
                  "object": "fine_tuning.job",
                  "id": "ftjob-abc123",
                  "model": "gpt-3.5-turbo-0613",
                  "created_at": 1689813489,
                  "fine_tuned_model": null,
                  "organization_id": "org-123",
                  "result_files": [],
                  "status": "queued",
                  "validation_file": null,
                  "training_file": "file-abc123"
                },
                { ... },
                { ... }
              ],
              "has_more": true
            }
  /fine_tuning/jobs/{fine_tuning_job_id}:
    get:
      operationId: retrieveFineTuningJob
      tags:
        - Fine-tuning
      summary: |
        Get info about a fine-tuning job.

        [Learn more about fine-tuning](/docs/guides/fine-tuning)
      parameters:
        - in: path
          name: fine_tuning_job_id
          required: true
          schema:
            type: string
            example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
          description: |
            The ID of the fine-tuning job.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/FineTuningJob"
      x-oaiMeta:
        name: Retrieve fine-tuning job
        returns: The [fine-tuning](/docs/api-reference/fine-tuning/object) object with the given ID.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/fine_tuning/jobs/ft-AF1WoRqd3aJAHsqc9NY7iL8F \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.fine_tuning.jobs.retrieve("ftjob-abc123")
            node.js: |
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const fineTune = await openai.fineTuning.jobs.retrieve("ftjob-abc123");

                console.log(fineTune);
              }

              main();
          response: &fine_tuning_example |
            {
              "object": "fine_tuning.job",
              "id": "ftjob-abc123",
              "model": "davinci-002",
              "created_at": 1692661014,
              "finished_at": 1692661190,
              "fine_tuned_model": "ft:davinci-002:my-org:custom_suffix:7q8mpxmy",
              "organization_id": "org-123",
              "result_files": [
                  "file-abc123"
              ],
              "status": "succeeded",
              "validation_file": null,
              "training_file": "file-abc123",
              "hyperparameters": {
                  "n_epochs": 4
              },
              "trained_tokens": 5768
            }
  /fine_tuning/jobs/{fine_tuning_job_id}/events:
    get:
      operationId: listFineTuningEvents
      tags:
        - Fine-tuning
      summary: |
        Get status updates for a fine-tuning job.
      parameters:
        - in: path
          name: fine_tuning_job_id
          required: true
          schema:
            type: string
            example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
          description: |
            The ID of the fine-tuning job to get events for.
        - name: after
          in: query
          description: Identifier for the last event from the previous pagination request.
          required: false
          schema:
            type: string
        - name: limit
          in: query
          description: Number of events to retrieve.
          required: false
          schema:
            type: integer
            default: 20
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ListFineTuningJobEventsResponse"
      x-oaiMeta:
        name: List fine-tuning events
        returns: A list of fine-tuning event objects.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/fine_tuning/jobs/ftjob-abc123/events \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.fine_tuning.jobs.list_events(
                fine_tuning_job_id="ftjob-abc123", 
                limit=2
              )
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const list = await openai.fineTuning.jobs.listEvents("ftjob-abc123", { limit: 2 });

                for await (const fineTune of list) {
                  console.log(fineTune);
                }
              }

              main();
          response: |
            {
              "object": "list",
              "data": [
                {
                  "object": "fine_tuning.job.event",
                  "id": "ft-event-ddTJfwuMVpfLXseO0Am0Gqjm",
                  "created_at": 1692407401,
                  "level": "info",
                  "message": "Fine tuning job successfully completed",
                  "data": null,
                  "type": "message"
                },
                {
                  "object": "fine_tuning.job.event",
                  "id": "ft-event-tyiGuB72evQncpH87xe505Sv",
                  "created_at": 1692407400,
                  "level": "info",
                  "message": "New fine-tuned model created: ft:gpt-3.5-turbo:openai::7p4lURel",
                  "data": null,
                  "type": "message"
                }
              ],
              "has_more": true
            }
  /fine_tuning/jobs/{fine_tuning_job_id}/cancel:
    post:
      operationId: cancelFineTuningJob
      tags:
        - Fine-tuning
      summary: |
        Immediately cancel a fine-tune job.
      parameters:
        - in: path
          name: fine_tuning_job_id
          required: true
          schema:
            type: string
            example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
          description: |
            The ID of the fine-tuning job to cancel.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/FineTuningJob"
      x-oaiMeta:
        name: Cancel fine-tuning
        returns: The cancelled [fine-tuning](/docs/api-reference/fine-tuning/object) object.
        examples:
          request:
            curl: |
              curl -X POST https://api.openai.com/v1/fine_tuning/jobs/ftjob-abc123/cancel \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.fine_tuning.jobs.cancel("ftjob-abc123")
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const fineTune = await openai.fineTuning.jobs.cancel("ftjob-abc123");

                console.log(fineTune);
              }
              main();
          response: |
            {
              "object": "fine_tuning.job",
              "id": "ftjob-abc123",
              "model": "gpt-3.5-turbo-0613",
              "created_at": 1689376978,
              "fine_tuned_model": null,
              "organization_id": "org-123",
              "result_files": [],
              "hyperparameters": {
                "n_epochs":  "auto"
              },
              "status": "cancelled",
              "validation_file": "file-abc123",
              "training_file": "file-abc123"
            }

  /fine-tunes:
    post:
      operationId: createFineTune
      deprecated: true
      tags:
        - Fine-tunes
      summary: |
        Creates a job that fine-tunes a specified model from a given dataset.

        Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.

        [Learn more about fine-tuning](/docs/guides/legacy-fine-tuning)
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateFineTuneRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/FineTune"
      x-oaiMeta:
        name: Create fine-tune
        returns: A [fine-tune](/docs/api-reference/fine-tunes/object) object.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/fine-tunes \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -d '{
                  "training_file": "file-abc123"
                }'
            python: |
              # deprecated
            node.js: |
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const fineTune = await openai.fineTunes.create({
                  training_file: "file-abc123"
                });

                console.log(fineTune);
              }

              main();
          response: |
            {
              "id": "ft-AF1WoRqd3aJAHsqc9NY7iL8F",
              "object": "fine-tune",
              "model": "curie",
              "created_at": 1614807352,
              "events": [
                {
                  "object": "fine-tune-event",
                  "created_at": 1614807352,
                  "level": "info",
                  "message": "Job enqueued. Waiting for jobs ahead to complete. Queue number: 0."
                }
              ],
              "fine_tuned_model": null,
              "hyperparams": {
                "batch_size": 4,
                "learning_rate_multiplier": 0.1,
                "n_epochs": 4,
                "prompt_loss_weight": 0.1
              },
              "organization_id": "org-123",
              "result_files": [],
              "status": "pending",
              "validation_files": [],
              "training_files": [
                {
                  "id": "file-abc123",
                  "object": "file",
                  "bytes": 1547276,
                  "created_at": 1610062281,
                  "filename": "my-data-train.jsonl",
                  "purpose": "fine-tune"
                }
              ],
              "updated_at": 1614807352
            }
    get:
      operationId: listFineTunes
      deprecated: true
      tags:
        - Fine-tunes
      summary: |
        List your organization's fine-tuning jobs
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ListFineTunesResponse"
      x-oaiMeta:
        name: List fine-tunes
        returns: A list of [fine-tune](/docs/api-reference/fine-tunes/object) objects.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/fine-tunes \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              # deprecated
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const list = await openai.fineTunes.list();

                for await (const fineTune of list) {
                  console.log(fineTune);
                }
              }

              main();
          response: |
            {
              "object": "list",
              "data": [
                {
                  "id": "ft-AF1WoRqd3aJAHsqc9NY7iL8F",
                  "object": "fine-tune",
                  "model": "curie",
                  "created_at": 1614807352,
                  "fine_tuned_model": null,
                  "hyperparams": { ... },
                  "organization_id": "org-123",
                  "result_files": [],
                  "status": "pending",
                  "validation_files": [],
                  "training_files": [ { ... } ],
                  "updated_at": 1614807352
                },
                { ... },
                { ... }
              ]
            }
  /fine-tunes/{fine_tune_id}:
    get:
      operationId: retrieveFineTune
      deprecated: true
      tags:
        - Fine-tunes
      summary: |
        Gets info about the fine-tune job.

        [Learn more about fine-tuning](/docs/guides/legacy-fine-tuning)
      parameters:
        - in: path
          name: fine_tune_id
          required: true
          schema:
            type: string
            example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
          description: |
            The ID of the fine-tune job
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/FineTune"
      x-oaiMeta:
        name: Retrieve fine-tune
        returns: The [fine-tune](/docs/api-reference/fine-tunes/object) object with the given ID.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/fine-tunes/ft-AF1WoRqd3aJAHsqc9NY7iL8F \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              # deprecated
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const fineTune = await openai.fineTunes.retrieve("ft-AF1WoRqd3aJAHsqc9NY7iL8F");

                console.log(fineTune);
              }

              main();
          response: &fine_tune_example |
            {
              "id": "ft-AF1WoRqd3aJAHsqc9NY7iL8F",
              "object": "fine-tune",
              "model": "curie",
              "created_at": 1614807352,
              "events": [
                {
                  "object": "fine-tune-event",
                  "created_at": 1614807352,
                  "level": "info",
                  "message": "Job enqueued. Waiting for jobs ahead to complete. Queue number: 0."
                },
                {
                  "object": "fine-tune-event",
                  "created_at": 1614807356,
                  "level": "info",
                  "message": "Job started."
                },
                {
                  "object": "fine-tune-event",
                  "created_at": 1614807861,
                  "level": "info",
                  "message": "Uploaded snapshot: curie:ft-acmeco-2021-03-03-21-44-20."
                },
                {
                  "object": "fine-tune-event",
                  "created_at": 1614807864,
                  "level": "info",
                  "message": "Uploaded result files: file-abc123."
                },
                {
                  "object": "fine-tune-event",
                  "created_at": 1614807864,
                  "level": "info",
                  "message": "Job succeeded."
                }
              ],
              "fine_tuned_model": "curie:ft-acmeco-2021-03-03-21-44-20",
              "hyperparams": {
                "batch_size": 4,
                "learning_rate_multiplier": 0.1,
                "n_epochs": 4,
                "prompt_loss_weight": 0.1
              },
              "organization_id": "org-123",
              "result_files": [
                {
                  "id": "file-abc123",
                  "object": "file",
                  "bytes": 81509,
                  "created_at": 1614807863,
                  "filename": "compiled_results.csv",
                  "purpose": "fine-tune-results"
                }
              ],
              "status": "succeeded",
              "validation_files": [],
              "training_files": [
                {
                  "id": "file-abc123",
                  "object": "file",
                  "bytes": 1547276,
                  "created_at": 1610062281,
                  "filename": "my-data-train.jsonl",
                  "purpose": "fine-tune"
                }
              ],
              "updated_at": 1614807865,
            }
  /fine-tunes/{fine_tune_id}/cancel:
    post:
      operationId: cancelFineTune
      deprecated: true
      tags:
        - Fine-tunes
      summary: |
        Immediately cancel a fine-tune job.
      parameters:
        - in: path
          name: fine_tune_id
          required: true
          schema:
            type: string
            example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
          description: |
            The ID of the fine-tune job to cancel
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/FineTune"
      x-oaiMeta:
        name: Cancel fine-tune
        returns: The cancelled [fine-tune](/docs/api-reference/fine-tunes/object) object.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/fine-tunes/ft-AF1WoRqd3aJAHsqc9NY7iL8F/cancel \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              # deprecated
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const fineTune = await openai.fineTunes.cancel("ft-AF1WoRqd3aJAHsqc9NY7iL8F");

                console.log(fineTune);
              }
              main();
          response: |
            {
              "id": "ft-xhrpBbvVUzYGo8oUO1FY4nI7",
              "object": "fine-tune",
              "model": "curie",
              "created_at": 1614807770,
              "events": [ { ... } ],
              "fine_tuned_model": null,
              "hyperparams": { ... },
              "organization_id": "org-123",
              "result_files": [],
              "status": "cancelled",
              "validation_files": [],
              "training_files": [
                {
                  "id": "file-abc123",
                  "object": "file",
                  "bytes": 1547276,
                  "created_at": 1610062281,
                  "filename": "my-data-train.jsonl",
                  "purpose": "fine-tune"
                }
              ],
              "updated_at": 1614807789,
            }
  /fine-tunes/{fine_tune_id}/events:
    get:
      operationId: listFineTuneEvents
      deprecated: true
      tags:
        - Fine-tunes
      summary: |
        Get fine-grained status updates for a fine-tune job.
      parameters:
        - in: path
          name: fine_tune_id
          required: true
          schema:
            type: string
            example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
          description: |
            The ID of the fine-tune job to get events for.
        - in: query
          name: stream
          required: false
          schema:
            type: boolean
            default: false
          description: |
            Whether to stream events for the fine-tune job. If set to true,
            events will be sent as data-only
            [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
            as they become available. The stream will terminate with a
            `data: [DONE]` message when the job is finished (succeeded, cancelled,
            or failed).

            If set to false, only events generated so far will be returned.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ListFineTuneEventsResponse"
      x-oaiMeta:
        name: List fine-tune events
        returns: A list of fine-tune event objects.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/fine-tunes/ft-AF1WoRqd3aJAHsqc9NY7iL8F/events \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              # deprecated
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const fineTune = await openai.fineTunes.listEvents("ft-AF1WoRqd3aJAHsqc9NY7iL8F");

                console.log(fineTune);
              }
              main();
          response: |
            {
              "object": "list",
              "data": [
                {
                  "object": "fine-tune-event",
                  "created_at": 1614807352,
                  "level": "info",
                  "message": "Job enqueued. Waiting for jobs ahead to complete. Queue number: 0."
                },
                {
                  "object": "fine-tune-event",
                  "created_at": 1614807356,
                  "level": "info",
                  "message": "Job started."
                },
                {
                  "object": "fine-tune-event",
                  "created_at": 1614807861,
                  "level": "info",
                  "message": "Uploaded snapshot: curie:ft-acmeco-2021-03-03-21-44-20."
                },
                {
                  "object": "fine-tune-event",
                  "created_at": 1614807864,
                  "level": "info",
                  "message": "Uploaded result files: file-abc123"
                },
                {
                  "object": "fine-tune-event",
                  "created_at": 1614807864,
                  "level": "info",
                  "message": "Job succeeded."
                }
              ]
            }

  /models:
    get:
      operationId: listModels
      tags:
        - Models
      summary: Lists the currently available models, and provides basic information about each one such as the owner and availability.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ListModelsResponse"
      x-oaiMeta:
        name: List models
        returns: A list of [model](/docs/api-reference/models/object) objects.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/models \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.models.list()
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const list = await openai.models.list();

                for await (const model of list) {
                  console.log(model);
                }
              }
              main();
          response: |
            {
              "object": "list",
              "data": [
                {
                  "id": "model-id-0",
                  "object": "model",
                  "created": 1686935002,
                  "owned_by": "organization-owner"
                },
                {
                  "id": "model-id-1",
                  "object": "model",
                  "created": 1686935002,
                  "owned_by": "organization-owner",
                },
                {
                  "id": "model-id-2",
                  "object": "model",
                  "created": 1686935002,
                  "owned_by": "openai"
                }
              ]
            }
  /models/{model}:
    get:
      operationId: retrieveModel
      tags:
        - Models
      summary: Retrieves a model instance, providing basic information about the model such as the owner and permissioning.
      parameters:
        - in: path
          name: model
          required: true
          schema:
            type: string
            # ideally this will be an actual ID, so this will always work from browser
            example: gpt-3.5-turbo
          description: The ID of the model to use for this request
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Model"
      x-oaiMeta:
        name: Retrieve model
        returns: The [model](/docs/api-reference/models/object) object matching the specified ID.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/models/VAR_model_id \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.models.retrieve("VAR_model_id")
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const model = await openai.models.retrieve("gpt-3.5-turbo");

                console.log(model);
              }

              main();
          response: &retrieve_model_response |
            {
              "id": "VAR_model_id",
              "object": "model",
              "created": 1686935002,
              "owned_by": "openai"
            }
    delete:
      operationId: deleteModel
      tags:
        - Models
      summary: Delete a fine-tuned model. You must have the Owner role in your organization to delete a model.
      parameters:
        - in: path
          name: model
          required: true
          schema:
            type: string
            example: ft:gpt-3.5-turbo:acemeco:suffix:abc123
          description: The model to delete
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/DeleteModelResponse"
      x-oaiMeta:
        name: Delete fine-tune model
        returns: Deletion status.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/models/ft:gpt-3.5-turbo:acemeco:suffix:abc123 \
                -X DELETE \
                -H "Authorization: Bearer $OPENAI_API_KEY"
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.models.delete("ft:gpt-3.5-turbo:acemeco:suffix:abc123")
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const model = await openai.models.del("ft:gpt-3.5-turbo:acemeco:suffix:abc123");

                console.log(model);
              }
              main();
          response: |
            {
              "id": "ft:gpt-3.5-turbo:acemeco:suffix:abc123",
              "object": "model",
              "deleted": true
            }

  /moderations:
    post:
      operationId: createModeration
      tags:
        - Moderations
      summary: Classifies if text violates OpenAI's Content Policy
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateModerationRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/CreateModerationResponse"
      x-oaiMeta:
        name: Create moderation
        returns: A [moderation](/docs/api-reference/moderations/object) object.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/moderations \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -d '{
                  "input": "I want to kill them."
                }'
            python: |
              from openai import OpenAI
              client = OpenAI()

              client.moderations.create(input="I want to kill them.")
            node.js: |
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const moderation = await openai.moderations.create({ input: "I want to kill them." });

                console.log(moderation);
              }
              main();
          response: &moderation_example |
            {
              "id": "modr-XXXXX",
              "model": "text-moderation-005",
              "results": [
                {
                  "flagged": true,
                  "categories": {
                    "sexual": false,
                    "hate": false,
                    "harassment": false,
                    "self-harm": false,
                    "sexual/minors": false,
                    "hate/threatening": false,
                    "violence/graphic": false,
                    "self-harm/intent": false,
                    "self-harm/instructions": false,
                    "harassment/threatening": true,
                    "violence": true,
                  },
                  "category_scores": {
                    "sexual": 1.2282071e-06,
                    "hate": 0.010696256,
                    "harassment": 0.29842457,
                    "self-harm": 1.5236925e-08,
                    "sexual/minors": 5.7246268e-08,
                    "hate/threatening": 0.0060676364,
                    "violence/graphic": 4.435014e-06,
                    "self-harm/intent": 8.098441e-10,
                    "self-harm/instructions": 2.8498655e-11,
                    "harassment/threatening": 0.63055265,
                    "violence": 0.99011886,
                  }
                }
              ]
            }

  # Assistants
  /assistants:
    get:
      operationId: listAssistants
      tags:
        - Assistants
      summary: Returns a list of assistants.
      parameters:
        - name: limit
          in: query
          description: &pagination_limit_param_description |
            A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.
          required: false
          schema:
            type: integer
            default: 20
        - name: order
          in: query
          description: &pagination_order_param_description |
            Sort order by the `created_at` timestamp of the objects. `asc` for ascending order and `desc` for descending order.
          schema:
            type: string
            default: desc
            enum: ["asc", "desc"]
        - name: after
          in: query
          description: &pagination_after_param_description |
            A cursor for use in pagination. `after` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.
          schema:
            type: string
        - name: before
          in: query
          description: &pagination_before_param_description |
            A cursor for use in pagination. `before` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.
          schema:
            type: string
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ListAssistantsResponse"
      x-oaiMeta:
        name: List assistants
        beta: true
        returns: A list of [assistant](/docs/api-reference/assistants/object) objects.
        examples:
          request:
            curl: |
              curl "https://api.openai.com/v1/assistants?order=desc&limit=20" \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "OpenAI-Beta: assistants=v1"
            python: |
              from openai import OpenAI
              client = OpenAI()

              my_assistants = client.beta.assistants.list(
                  order="desc",
                  limit="20",
              )
              print(my_assistants.data)
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const myAssistants = await openai.beta.assistants.list({
                  order: "desc",
                  limit: "20",
                });

                console.log(myAssistants.data);
              }

              main();
          response: &list_assistants_example |
            {
              "object": "list",
              "data": [
                {
                  "id": "asst_abc123",
                  "object": "assistant",
                  "created_at": 1698982736,
                  "name": "Coding Tutor",
                  "description": null,
                  "model": "gpt-4",
                  "instructions": "You are a helpful assistant designed to make me better at coding!",
                  "tools": [],
                  "file_ids": [],
                  "metadata": {}
                },
                {
                  "id": "asst_abc456",
                  "object": "assistant",
                  "created_at": 1698982718,
                  "name": "My Assistant",
                  "description": null,
                  "model": "gpt-4",
                  "instructions": "You are a helpful assistant designed to make me better at coding!",
                  "tools": [],
                  "file_ids": [],
                  "metadata": {}
                },
                {
                  "id": "asst_abc789",
                  "object": "assistant",
                  "created_at": 1698982643,
                  "name": null,
                  "description": null,
                  "model": "gpt-4",
                  "instructions": null,
                  "tools": [],
                  "file_ids": [],
                  "metadata": {}
                }
              ],
              "first_id": "asst_abc123",
              "last_id": "asst_abc789",
              "has_more": false
            }
    post:
      operationId: createAssistant
      tags:
        - Assistants
      summary: Create an assistant with a model and instructions.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateAssistantRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/AssistantObject"
      x-oaiMeta:
        name: Create assistant
        beta: true
        returns: An [assistant](/docs/api-reference/assistants/object) object.
        examples:
          - title: Code Interpreter
            request:
              curl: |
                curl "https://api.openai.com/v1/assistants" \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -H "OpenAI-Beta: assistants=v1" \
                  -d '{
                    "instructions": "You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
                    "name": "Math Tutor"
                    "tools": [{"type": "code_interpreter"}],
                    "model": "gpt-4"
                  }'

              python: |
                from openai import OpenAI
                client = OpenAI()

                my_assistant = client.beta.assistants.create(
                    instructions="You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
                    name="Math Tutor",
                    tools=[{"type": "code_interpreter"}],
                    model="gpt-4",
                )
                print(my_assistant)
              node.js: |-
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const myAssistant = await openai.beta.assistants.create({
                    instructions:
                      "You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
                    name: "Math Tutor",
                    tools: [{ type: "code_interpreter" }],
                    model: "gpt-4",
                  });

                  console.log(myAssistant);
                }

                main();
            response: &create_assistants_example |
              {
                "id": "asst_abc123",
                "object": "assistant",
                "created_at": 1698984975,
                "name": "Math Tutor",
                "description": null,
                "model": "gpt-4",
                "instructions": "You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
                "tools": [
                  {
                    "type": "code_interpreter"
                  }
                ],
                "file_ids": [],
                "metadata": {}
              }
          - title: Files
            request:
              curl: |
                curl https://api.openai.com/v1/assistants \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -H "OpenAI-Beta: assistants=v1" \
                  -d '{
                    "instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies.",
                    "tools": [{"type": "retrieval"}],
                    "model": "gpt-4",
                    "file_ids": ["file-abc123"]
                  }'
              python: |
                from openai import OpenAI
                client = OpenAI()

                my_assistant = client.beta.assistants.create(
                    instructions="You are an HR bot, and you have access to files to answer employee questions about company policies.",
                    name="HR Helper",
                    tools=[{"type": "retrieval"}],
                    model="gpt-4",
                    file_ids=["file-abc123"],
                )
                print(my_assistant)
              node.js: |-
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const myAssistant = await openai.beta.assistants.create({
                    instructions:
                      "You are an HR bot, and you have access to files to answer employee questions about company policies.",
                    name: "HR Helper",
                    tools: [{ type: "retrieval" }],
                    model: "gpt-4",
                    file_ids: ["file-abc123"],
                  });

                  console.log(myAssistant);
                }

                main();
            response: |
              {
                "id": "asst_abc123",
                "object": "assistant",
                "created_at": 1699009403,
                "name": "HR Helper",
                "description": null,
                "model": "gpt-4",
                "instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies.",
                "tools": [
                  {
                    "type": "retrieval"
                  }
                ],
                "file_ids": [
                  "file-abc123"
                ],
                "metadata": {}
              }

  /assistants/{assistant_id}:
    get:
      operationId: getAssistant
      tags:
        - Assistants
      summary: Retrieves an assistant.
      parameters:
        - in: path
          name: assistant_id
          required: true
          schema:
            type: string
          description: The ID of the assistant to retrieve.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/AssistantObject"
      x-oaiMeta:
        name: Retrieve assistant
        beta: true
        returns: The [assistant](/docs/api-reference/assistants/object) object matching the specified ID.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/assistants/asst_abc123 \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "OpenAI-Beta: assistants=v1"
            python: |
              from openai import OpenAI
              client = OpenAI()

              my_assistant = client.beta.assistants.retrieve("asst_abc123")
              print(my_assistant)
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const myAssistant = await openai.beta.assistants.retrieve(
                  "asst_abc123"
                );

                console.log(myAssistant);
              }

              main();
          response: |
            {
              "id": "asst_abc123",
              "object": "assistant",
              "created_at": 1699009709,
              "name": "HR Helper",
              "description": null,
              "model": "gpt-4",
              "instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies.",
              "tools": [
                {
                  "type": "retrieval"
                }
              ],
              "file_ids": [
                "file-abc123"
              ],
              "metadata": {}
            }
    post:
      operationId: modifyAssistant
      tags:
        - Assistants
      summary: Modifies an assistant.
      parameters:
        - in: path
          name: assistant_id
          required: true
          schema:
            type: string
          description: The ID of the assistant to modify.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/ModifyAssistantRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/AssistantObject"
      x-oaiMeta:
        name: Modify assistant
        beta: true
        returns: The modified [assistant](/docs/api-reference/assistants/object) object.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/assistants/asst_abc123 \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "OpenAI-Beta: assistants=v1" \
                -d '{
                    "instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies. Always response with info from either of the files.",
                    "tools": [{"type": "retrieval"}],
                    "model": "gpt-4",
                    "file_ids": ["file-abc123", "file-abc456"]
                  }'
            python: |
              from openai import OpenAI
              client = OpenAI()

              my_updated_assistant = client.beta.assistants.update(
                "asst_abc123",
                instructions="You are an HR bot, and you have access to files to answer employee questions about company policies. Always response with info from either of the files.",
                name="HR Helper",
                tools=[{"type": "retrieval"}],
                model="gpt-4",
                file_ids=["file-abc123", "file-abc456"],
              )

              print(my_updated_assistant)
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const myUpdatedAssistant = await openai.beta.assistants.update(
                  "asst_abc123",
                  {
                    instructions:
                      "You are an HR bot, and you have access to files to answer employee questions about company policies. Always response with info from either of the files.",
                    name: "HR Helper",
                    tools: [{ type: "retrieval" }],
                    model: "gpt-4",
                    file_ids: [
                      "file-abc123",
                      "file-abc456",
                    ],
                  }
                );

                console.log(myUpdatedAssistant);
              }

              main();
          response: |
            {
              "id": "asst_abc123",
              "object": "assistant",
              "created_at": 1699009709,
              "name": "HR Helper",
              "description": null,
              "model": "gpt-4",
              "instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies. Always response with info from either of the files.",
              "tools": [
                {
                  "type": "retrieval"
                }
              ],
              "file_ids": [
                "file-abc123",
                "file-abc456"
              ],
              "metadata": {}
            }
    delete:
      operationId: deleteAssistant
      tags:
        - Assistants
      summary: Delete an assistant.
      parameters:
        - in: path
          name: assistant_id
          required: true
          schema:
            type: string
          description: The ID of the assistant to delete.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/DeleteAssistantResponse"
      x-oaiMeta:
        name: Delete assistant
        beta: true
        returns: Deletion status.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/assistants/asst_abc123 \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "OpenAI-Beta: assistants=v1" \
                -X DELETE
            python: |
              from openai import OpenAI
              client = OpenAI()

              response = client.beta.assistants.delete("asst_abc123")
              print(response)
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const response = await openai.beta.assistants.del("asst_abc123");

                console.log(response);
              }
              main();
          response: |
            {
              "id": "asst_abc123",
              "object": "assistant.deleted",
              "deleted": true
            }

  /threads:
    post:
      operationId: createThread
      tags:
        - Assistants
      summary: Create a thread.
      requestBody:
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateThreadRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ThreadObject"
      x-oaiMeta:
        name: Create thread
        beta: true
        returns: A [thread](/docs/api-reference/threads) object.
        examples:
          - title: Empty
            request:
              curl: |
                curl https://api.openai.com/v1/threads \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -H "OpenAI-Beta: assistants=v1" \
                  -d ''
              python: |
                from openai import OpenAI
                client = OpenAI()

                empty_thread = client.beta.threads.create()
                print(empty_thread)
              node.js: |-
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const emptyThread = await openai.beta.threads.create();

                  console.log(emptyThread);
                }

                main();
            response: |
              {
                "id": "thread_abc123",
                "object": "thread",
                "created_at": 1699012949,
                "metadata": {}
              }
          - title: Messages
            request:
              curl: |
                curl https://api.openai.com/v1/threads \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "OpenAI-Beta: assistants=v1" \
                -d '{
                    "messages": [{
                      "role": "user",
                      "content": "Hello, what is AI?",
                      "file_ids": ["file-abc123"]
                    }, {
                      "role": "user",
                      "content": "How does AI work? Explain it in simple terms."
                    }]
                  }'
              python: |
                from openai import OpenAI
                client = OpenAI()

                message_thread = client.beta.threads.create(
                  messages=[
                    {
                      "role": "user",
                      "content": "Hello, what is AI?",
                      "file_ids": ["file-abc123"],
                    },
                    {
                      "role": "user",
                      "content": "How does AI work? Explain it in simple terms."
                    },
                  ]
                )

                print(message_thread)
              node.js: |-
                import OpenAI from "openai";

                const openai = new OpenAI();

                async function main() {
                  const messageThread = await openai.beta.threads.create({
                    messages: [
                      {
                        role: "user",
                        content: "Hello, what is AI?",
                        file_ids: ["file-abc123"],
                      },
                      {
                        role: "user",
                        content: "How does AI work? Explain it in simple terms.",
                      },
                    ],
                  });

                  console.log(messageThread);
                }

                main();
            response: |
              {
                "id": "thread_abc123",
                "object": "thread",
                "created_at": 1699014083,
                "metadata": {}
              }

  /threads/{thread_id}:
    get:
      operationId: getThread
      tags:
        - Assistants
      summary: Retrieves a thread.
      parameters:
        - in: path
          name: thread_id
          required: true
          schema:
            type: string
          description: The ID of the thread to retrieve.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ThreadObject"
      x-oaiMeta:
        name: Retrieve thread
        beta: true
        returns: The [thread](/docs/api-reference/threads/object) object matching the specified ID.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/threads/thread_abc123 \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "OpenAI-Beta: assistants=v1"
            python: |
              from openai import OpenAI
              client = OpenAI()

              my_thread = client.beta.threads.retrieve("thread_abc123")
              print(my_thread)
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const myThread = await openai.beta.threads.retrieve(
                  "thread_abc123"
                );

                console.log(myThread);
              }

              main();
          response: |
            {
              "id": "thread_abc123",
              "object": "thread",
              "created_at": 1699014083,
              "metadata": {}
            }
    post:
      operationId: modifyThread
      tags:
        - Assistants
      summary: Modifies a thread.
      parameters:
        - in: path
          name: thread_id
          required: true
          schema:
            type: string
          description: The ID of the thread to modify. Only the `metadata` can be modified.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/ModifyThreadRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ThreadObject"
      x-oaiMeta:
        name: Modify thread
        beta: true
        returns: The modified [thread](/docs/api-reference/threads/object) object matching the specified ID.
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/threads/thread_abc123 \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "OpenAI-Beta: assistants=v1" \
                -d '{
                    "metadata": {
                      "modified": "true",
                      "user": "abc123"
                    }
                  }'
            python: |
              from openai import OpenAI
              client = OpenAI()

              my_updated_thread = client.beta.threads.update(
                "thread_abc123", 
                metadata={
                  "modified": "true", 
                  "user": "abc123"
                }
              )
              print(my_updated_thread)
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const updatedThread = await openai.beta.threads.update(
                  "thread_abc123",
                  {
                    metadata: { modified: "true", user: "abc123" },
                  }
                );

                console.log(updatedThread);
              }

              main();
          response: |
            {
              "id": "thread_abc123",
              "object": "thread",
              "created_at": 1699014083,
              "metadata": {
                "modified": "true",
                "user": "abc123"
              }
            }
    delete:
      operationId: deleteThread
      tags:
        - Assistants
      summary: Delete a thread.
      parameters:
        - in: path
          name: thread_id
          required: true
          schema:
            type: string
          description: The ID of the thread to delete.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/DeleteThreadResponse"
      x-oaiMeta:
        name: Delete thread
        beta: true
        returns: Deletion status
        examples:
          request:
            curl: |
              curl https://api.openai.com/v1/threads/thread_abc123 \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "OpenAI-Beta: assistants=v1" \
                -X DELETE
            python: |
              from openai import OpenAI
              client = OpenAI()

              response = client.beta.threads.delete("thread_abc123")
              print(response)
            node.js: |-
              import OpenAI from "openai";

              const openai = new OpenAI();

              async function main() {
                const response = await openai.beta.threads.del("thread_abc123");

                console.log(response);
              }
              main();
          response: |
            {
              "id": "thread_abc123",
              "object": "thread.deleted",
              "deleted": true
            }

  /threads/{thread_id}/messages:
    get:
      operationId: listMessages
      tags:
        - Assistants
      summary: Returns a list of messages for a given thread.
      parameters:
        - in: path
          name: thread_id
          required: true
          schema:
            type: string
          description: The ID of the thread the messages belong to.
SYMBOL INDEX (37 symbols across 10 files)

FILE: core/src/ai/__tests__/api-agent.test.ts
  method constructor (line 46) | constructor({ apiKey }: any) {

FILE: core/src/ai/api-agent.ts
  constant DEFAULT_CHAT_MODEL (line 6) | const DEFAULT_CHAT_MODEL = "gpt-3.5-turbo-1106";
  type ApiInput (line 8) | interface ApiInput {
  type AgentInput (line 13) | interface AgentInput {
  class ApiAgent (line 19) | class ApiAgent {
    method constructor (line 25) | constructor({ apiKey, model, apis }: AgentInput) {
    method execute (line 40) | async execute({
    method _loadOperations (line 82) | async _loadOperations() {

FILE: core/src/ai/tools/__tests__/parse-arguments.test.ts
  method constructor (line 32) | constructor({ apiKey }: any) {

FILE: core/src/ai/tools/__tests__/select-operation.test.ts
  method constructor (line 27) | constructor({ apiKey }: any) {

FILE: core/src/ai/tools/parse-arguments.ts
  constant SYSTEM_PROMPT (line 3) | const SYSTEM_PROMPT =
  type ParseArgumentsInput (line 6) | interface ParseArgumentsInput {

FILE: core/src/ai/tools/select-operation.ts
  type SelectOperationInput (line 18) | interface SelectOperationInput {

FILE: core/src/api/operation.ts
  type OperationInput (line 3) | interface OperationInput {
  constant EMPTY_ARGUMENT (line 13) | const EMPTY_ARGUMENT: object = {};
  class Operation (line 15) | class Operation {
    method constructor (line 24) | constructor({
    method operationId (line 42) | operationId(): string {
    method summary (line 46) | summary(): string {
    method description (line 50) | description(): string {
    method url (line 54) | url(parsedParams?: any): string {
    method sendRequest (line 83) | async sendRequest({ headers, parsedParams, authData }: any) {
    method toFunction (line 124) | toFunction() {
    method _allParams (line 132) | _allParams() {
    method _bodyParams (line 154) | _bodyParams() {
    method _urlParams (line 165) | _urlParams() {
    method _selectParams (line 196) | _selectParams({ target, allParams }: any) {
    method _computeBodyParameters (line 212) | _computeBodyParameters(schema: any) {
    method _requestContentType (line 240) | _requestContentType() {
    method _computeAuth (line 253) | _computeAuth(data: any) {

FILE: core/src/api/security.ts
  class Security (line 1) | class Security {
    method constructor (line 7) | constructor(authInput: any) {
    method validateInput (line 16) | validateInput() {
    method authData (line 30) | authData(data: any) {

FILE: server/app/layout.tsx
  function RootLayout (line 3) | function RootLayout({

FILE: server/app/page.tsx
  constant URL (line 4) | const URL = "/api/run";
  function Page (line 46) | function Page() {
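
The symbol index lists `Operation.url(parsedParams?)`, which has to resolve OpenAPI path templates such as `/threads/{thread_id}` from the spec above into concrete request URLs. A minimal, self-contained sketch of that substitution follows; the function name `buildUrl` and its error handling are illustrative assumptions, not the library's actual implementation.

```typescript
// Sketch of OpenAPI path templating, as used by paths like
// /threads/{thread_id} in the spec above. NOT the library's real code;
// `buildUrl` and its error handling are illustrative assumptions based
// on the Operation.url(parsedParams) signature in the symbol index.
function buildUrl(
  baseUrl: string,
  pathTemplate: string,
  params: Record<string, string> = {}
): string {
  // Replace each {name} placeholder with its URL-encoded value.
  const path = pathTemplate.replace(/\{(\w+)\}/g, (_match, name: string) => {
    const value = params[name];
    if (value === undefined) {
      throw new Error(`Missing path parameter: ${name}`);
    }
    return encodeURIComponent(value);
  });
  return baseUrl + path;
}

// Example: resolving the retrieve-thread endpoint.
const threadUrl = buildUrl(
  "https://api.openai.com/v1",
  "/threads/{thread_id}",
  { thread_id: "thread_abc123" }
);
// threadUrl === "https://api.openai.com/v1/threads/thread_abc123"
```

Encoding each substituted value keeps IDs with reserved characters from corrupting the URL, and failing fast on a missing parameter surfaces argument-parsing gaps before the HTTP request is sent.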
Condensed preview — 49 files, each showing path, character count, and a content snippet (432K chars of full structured content in total).
[
  {
    "path": ".changeset/README.md",
    "chars": 580,
    "preview": "# Changesets\n\nHello and welcome! This folder has been automatically generated by `@changesets/cli`, a build tool that wo"
  },
  {
    "path": ".changeset/config.json",
    "chars": 271,
    "preview": "{\n  \"$schema\": \"https://unpkg.com/@changesets/config@2.3.1/schema.json\",\n  \"changelog\": \"@changesets/cli/changelog\",\n  \""
  },
  {
    "path": ".eslintrc.cjs",
    "chars": 207,
    "preview": "module.exports = {\n  root: true,\n  // This tells ESLint to load the config from the package `eslint-config-custom`\n  ext"
  },
  {
    "path": ".gitignore",
    "chars": 393,
    "preview": "# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.\n\n# dependencies\nnode_modules\n.pnp\n"
  },
  {
    "path": ".husky/pre-commit",
    "chars": 69,
    "preview": "#!/usr/bin/env sh\n. \"$(dirname -- \"$0\")/_/husky.sh\"\n\nnpx lint-staged\n"
  },
  {
    "path": ".npmignore",
    "chars": 22,
    "preview": ".env\n\n.DS_Store\n*.pem\n"
  },
  {
    "path": ".npmrc",
    "chars": 26,
    "preview": "auto-install-peers = true\n"
  },
  {
    "path": "LICENSE",
    "chars": 1071,
    "preview": "MIT License\n\nCopyright (c) [2023] [Quan Nguyen]\n\nPermission is hereby granted, free of charge, to any person obtaining a"
  },
  {
    "path": "README.md",
    "chars": 5050,
    "preview": "# ☁️⇨🤖🧠 api2ai\n\n⚡ Create an API assistant from any OpenAPI Spec ⚡\n\n<img width=\"680\" alt=\"api2ai demo with multiple APIs\""
  },
  {
    "path": "core/CHANGELOG.md",
    "chars": 643,
    "preview": "# @api2ai/core\n\n## 0.6.1\n\n### Patch Changes\n\n- Upgrade to gpt-3.5-turbo-1106 model\n\n## 0.6.0\n\n### Patch Changes\n\n- Upgra"
  },
  {
    "path": "core/README.md",
    "chars": 5050,
    "preview": "# ☁️⇨🤖🧠 api2ai\n\n⚡ Create an API assistant from any OpenAPI Spec ⚡\n\n<img width=\"680\" alt=\"api2ai demo with multiple APIs\""
  },
  {
    "path": "core/fix-ono.sh",
    "chars": 205,
    "preview": "sed -i '' -e 's,typeof module === \"object\" && typeof module.exports === \"object\",typeof module === \"object\" \\&\\& typeof "
  },
  {
    "path": "core/fixtures/oases/invalid-securities.yaml",
    "chars": 1617,
    "preview": "openapi: \"3.0.0\"\ninfo:\n  version: 1.0.0\n  title: Swagger Petstore\n  license:\n    name: MIT\nservers:\n  - url: http://pets"
  },
  {
    "path": "core/fixtures/oases/no-security.yaml",
    "chars": 1577,
    "preview": "openapi: \"3.0.0\"\ninfo:\n  version: 1.0.0\n  title: Swagger Petstore\n  license:\n    name: MIT\nservers:\n  - url: http://pets"
  },
  {
    "path": "core/fixtures/oases/petstore.yaml",
    "chars": 3349,
    "preview": "openapi: \"3.0.0\"\ninfo:\n  version: 1.0.0\n  title: Swagger Petstore\n  license:\n    name: MIT\nservers:\n  - url: http://pets"
  },
  {
    "path": "core/index.ts",
    "chars": 58,
    "preview": "export { default as ApiAgent } from \"./src/ai/api-agent\";\n"
  },
  {
    "path": "core/jest.config.js",
    "chars": 489,
    "preview": "/** @type {import('ts-jest').JestConfigWithTsJest} */\n\nmodule.exports = {\n  roots: [\"<rootDir>\"],\n  transform: {\n    \"^."
  },
  {
    "path": "core/package.json",
    "chars": 711,
    "preview": "{\n  \"name\": \"@api2ai/core\",\n  \"version\": \"0.6.1\",\n  \"main\": \"./dist/index.js\",\n  \"types\": \"./dist/index.d.ts\",\n  \"licens"
  },
  {
    "path": "core/src/ai/__tests__/api-agent.test.ts",
    "chars": 3367,
    "preview": "import path from \"path\";\n\nimport ApiAgent from \"../api-agent\";\n\nlet selectOperationResponse: any;\nlet parseArgsResponse:"
  },
  {
    "path": "core/src/ai/api-agent.ts",
    "chars": 2321,
    "preview": "import Operation from \"../api/operation\";\nimport { parse } from \"../api/oas-loader\";\nimport { selectOperation } from \"./"
  },
  {
    "path": "core/src/ai/tools/__tests__/parse-arguments.test.ts",
    "chars": 2982,
    "preview": "import path from \"path\";\n\nimport { parseArguments } from \"../parse-arguments\";\n\nimport Operation from \"../../../api/oper"
  },
  {
    "path": "core/src/ai/tools/__tests__/select-operation.test.ts",
    "chars": 2557,
    "preview": "import path from \"path\";\n\nimport { selectOperation } from \"../select-operation\";\nimport { parse } from \"../../../api/oas"
  },
  {
    "path": "core/src/ai/tools/parse-arguments.ts",
    "chars": 1083,
    "preview": "import OpenAI from \"openai\";\n\nconst SYSTEM_PROMPT =\n  \"Parse user input into arguments. Leave missing parameters blank. "
  },
  {
    "path": "core/src/ai/tools/select-operation.ts",
    "chars": 1469,
    "preview": "import OpenAI from \"openai\";\n\nimport Operation from \"../../api/operation\";\n\nconst selectOperationPrompt = ({\n  operation"
  },
  {
    "path": "core/src/api/__tests__/oas-loader.test.ts",
    "chars": 2836,
    "preview": "import path from \"path\";\nimport { parse } from \"../oas-loader\";\n\ndescribe(\"#parse\", () => {\n  const filename: string = p"
  },
  {
    "path": "core/src/api/__tests__/operation.test.ts",
    "chars": 17464,
    "preview": "import Operation from \"../operation\";\nimport Security from \"../security\";\n\nconst group = \"petstore\";\nconst httpMethod = "
  },
  {
    "path": "core/src/api/__tests__/security.test.ts",
    "chars": 5368,
    "preview": "import Security from \"../security\";\n\ndescribe(\"Security\", () => {\n  describe(\"#constructor\", () => {\n    describe(\"valid"
  },
  {
    "path": "core/src/api/oas-loader.ts",
    "chars": 1921,
    "preview": "import SwaggerParser from \"@apidevtools/swagger-parser\";\nimport Operation from \"./operation\";\nimport Security from \"./se"
  },
  {
    "path": "core/src/api/operation.ts",
    "chars": 6128,
    "preview": "import Security from \"./security\";\n\ninterface OperationInput {\n  group: string;\n  httpMethod: string;\n  baseUrl: string;"
  },
  {
    "path": "core/src/api/security.ts",
    "chars": 1760,
    "preview": "export default class Security {\n  type: string;\n  scheme?: string;\n  name?: string;\n  inKey?: string;\n\n  constructor(aut"
  },
  {
    "path": "core/tsconfig.json",
    "chars": 405,
    "preview": "{\n  \"include\": [\".\"],\n  \"exclude\": [\"dist\", \"build\", \"node_modules\"],\n  \"compilerOptions\": {\n    \"lib\": [\"DOM\", \"es2020\""
  },
  {
    "path": "package.json",
    "chars": 1031,
    "preview": "{\n  \"private\": true,\n  \"version\": \"0.6.0\",\n  \"license\": \"MIT\",\n  \"type\": \"module\",\n  \"scripts\": {\n    \"build\": \"turbo ru"
  },
  {
    "path": "packages/eslint-config-custom/index.js",
    "chars": 227,
    "preview": "module.exports = {\n  extends: [\"next\", \"turbo\", \"prettier\"],\n  rules: {\n    \"@next/next/no-html-link-for-pages\": \"off\",\n"
  },
  {
    "path": "packages/eslint-config-custom/package.json",
    "chars": 325,
    "preview": "{\n  \"name\": \"eslint-config-custom\",\n  \"version\": \"0.0.0\",\n  \"main\": \"index.js\",\n  \"license\": \"MIT\",\n  \"dependencies\": {\n"
  },
  {
    "path": "packages/tsconfig/base.json",
    "chars": 602,
    "preview": "{\n  \"$schema\": \"https://json.schemastore.org/tsconfig\",\n  \"display\": \"Default\",\n  \"compilerOptions\": {\n    \"composite\": "
  },
  {
    "path": "packages/tsconfig/nextjs.json",
    "chars": 530,
    "preview": "{\n  \"$schema\": \"https://json.schemastore.org/tsconfig\",\n  \"display\": \"Next.js\",\n  \"extends\": \"./base.json\",\n  \"compilerO"
  },
  {
    "path": "packages/tsconfig/package.json",
    "chars": 135,
    "preview": "{\n  \"name\": \"tsconfig\",\n  \"version\": \"0.0.0\",\n  \"private\": true,\n  \"license\": \"MIT\",\n  \"publishConfig\": {\n    \"access\": "
  },
  {
    "path": "packages/tsconfig/react-library.json",
    "chars": 241,
    "preview": "{\n  \"$schema\": \"https://json.schemastore.org/tsconfig\",\n  \"display\": \"React Library\",\n  \"extends\": \"./base.json\",\n  \"com"
  },
  {
    "path": "server/README.md",
    "chars": 1394,
    "preview": "## Getting Started\n\nFirst, run the development server:\n\n```bash\nyarn dev\n```\n\nOpen [http://localhost:3000](http://localh"
  },
  {
    "path": "server/app/layout.tsx",
    "chars": 262,
    "preview": "import \"bootstrap/dist/css/bootstrap.css\";\n\nexport default function RootLayout({\n  children,\n}: {\n  children: React.Reac"
  },
  {
    "path": "server/app/page.tsx",
    "chars": 4369,
    "preview": "\"use client\";\n\nimport { useState, useRef } from \"react\";\nconst URL = \"/api/run\";\n\nconst renderContent = (data) => {\n  if"
  },
  {
    "path": "server/next-env.d.ts",
    "chars": 267,
    "preview": "/// <reference types=\"next\" />\n/// <reference types=\"next/image-types/global\" />\n/// <reference types=\"next/navigation-t"
  },
  {
    "path": "server/next.config.js",
    "chars": 78,
    "preview": "module.exports = {\n  reactStrictMode: true,\n  transpilePackages: [\"core\"],\n};\n"
  },
  {
    "path": "server/oases/open-ai.yaml",
    "chars": 326475,
    "preview": "openapi: 3.0.0\ninfo:\n  title: OpenAI API\n  description: The OpenAI REST API. Please see https://platform.openai.com/docs"
  },
  {
    "path": "server/package.json",
    "chars": 584,
    "preview": "{\n  \"name\": \"@api2ai/server\",\n  \"version\": \"0.1.1\",\n  \"private\": true,\n  \"scripts\": {\n    \"dev\": \"next dev -p 5555\",\n   "
  },
  {
    "path": "server/pages/api/api2ai.config.ts",
    "chars": 389,
    "preview": "import path from \"path\";\nimport \"dotenv/config\";\n\nconst oasesDirectory = path.join(process.cwd(), \"oases\");\nconst openAI"
  },
  {
    "path": "server/pages/api/run.ts",
    "chars": 746,
    "preview": "// Don't import from @api2ai/core b/c the build needs fix-ono.sh\nimport { ApiAgent } from \"../../../core/index\";\nimport "
  },
  {
    "path": "server/tsconfig.json",
    "chars": 297,
    "preview": "{\n  \"extends\": \"tsconfig/nextjs.json\",\n  \"compilerOptions\": {\n    \"plugins\": [\n      {\n        \"name\": \"next\"\n      }\n  "
  },
  {
    "path": "turbo.json",
    "chars": 327,
    "preview": "{\n  \"$schema\": \"https://turbo.build/schema.json\",\n  \"globalDependencies\": [\"**/.env.*local\"],\n  \"pipeline\": {\n    \"build"
  }
]
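
The condensed preview above is a JSON array of records with the shape `{ path, chars, preview }`. A short sketch of working with that structure follows, using an inline two-entry sample in place of the full downloaded array; the `PreviewEntry` interface name is an assumption for illustration.

```typescript
// Shape of one record in the condensed-preview JSON array above.
// The interface name is illustrative; the field names match the data.
interface PreviewEntry {
  path: string;
  chars: number;
  preview: string;
}

// Inline sample standing in for the full 49-entry array.
const entries: PreviewEntry[] = [
  { path: "core/index.ts", chars: 58, preview: "export { default as ApiAgent } ..." },
  { path: "server/oases/open-ai.yaml", chars: 326475, preview: "openapi: 3.0.0 ..." },
];

// Aggregate the character counts and find the largest file, the same
// figures the summary lines around the preview report.
const totalChars = entries.reduce((sum, e) => sum + e.chars, 0);
const largest = entries.reduce((a, b) => (a.chars >= b.chars ? a : b));
```

Keeping per-file character counts alongside the previews makes it cheap to budget which files fit in a model's context window before fetching the full content.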

About this extraction

This page contains the full source code of the mquan/api2ai GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 49 files (399.7 KB), approximately 91.3k tokens, and a symbol index of 37 extracted functions, classes, methods, constants, and types. It can be used with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
