Repository: james1236/backseat.nvim
Branch: main
Commit: 3bb2781cdbf9
Files: 4
Total size: 22.5 KB
Directory structure:
gitextract_t7l404gc/
├── README.md
├── lua/
│ └── backseat/
│ ├── fewshot.lua
│ └── init.lua
└── plugin/
└── backseat.lua
================================================
FILE CONTENTS
================================================
================================================
FILE: README.md
================================================
# backseat.nvim
A neovim plugin that uses GPT to highlight and explain code readability issues. Get unsolicited advice of dubious quality in never-before-seen quantities!
<br><br>

# Commands
| User Command | Purpose |
| -- | -- |
| `:Backseat` | Sends the current buffer to OpenAI to highlight readability feedback |
| `:BackseatAsk <question>` | Ask a question about the code in the current buffer (e.g. "What does the function on line 20 do?", "Summarize this code") |
| `:BackseatClear` | Clear all Backseat highlighting from the current buffer |
| `:BackseatClearLine` | Clear Backseat highlighting from the current line |

If a buffer contains more than 100 lines, it is split into multiple requests of at most 100 lines each (configurable via `split_threshold`).
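The request count follows simple ceiling arithmetic; a minimal sketch mirroring the `math.ceil` split logic used in `plugin/backseat.lua`:

```lua
-- Number of requests produced for a buffer (mirrors plugin/backseat.lua)
local split_threshold = 100
local line_count = 250
local num_requests = math.ceil(line_count / split_threshold)
print(num_requests) -- 3 (lines 1-100, 101-200, 201-250)
```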
## Requirements
* curl
* OpenAI API key - You can get yours with a free account from [their website](https://platform.openai.com/account/api-keys). If you don't have any more free credits, usage is very cheap at ~$0.004 per 100 lines submitted.
# Install
### Lazy plugin manager
```lua
{
    "james1236/backseat.nvim",
    config = function()
        require("backseat").setup({
            -- Alternatively, set the env var $OPENAI_API_KEY by putting "export OPENAI_API_KEY=sk-xxxxx" in your ~/.bashrc
            openai_api_key = 'sk-xxxxxxxxxxxxxx', -- Get yours from platform.openai.com/account/api-keys
            openai_model_id = 'gpt-3.5-turbo', -- 'gpt-4' (if you do not have access to a model, the API says "The model does not exist")
            -- language = 'english', -- Such as 'japanese', 'french', 'pirate', 'LOLCAT'
            -- split_threshold = 100,
            -- additional_instruction = "Respond snarkily", -- (GPT-3.5 will probably deny this request, but GPT-4 complies)
            -- highlight = {
            --     icon = '', -- ''
            --     group = 'Comment',
            -- }
        })
    end
},
```
### The result of using `additional_instruction = "Respond snarkily"`

# Config
| Setup Table Name | Default | Purpose |
| --- | --- | --- |
| `openai_api_key` | `nil` | Your OpenAI API key, needed to use their language models |
| `openai_model_id` | `'gpt-3.5-turbo'` | The model's identifier, such as `gpt-3.5-turbo` or `gpt-4` |
| `language` | `'english'` | The language of suggestions, such as 'japanese', 'french', or made-up ones like 'pirate' or 'LOLCAT' |
| `split_threshold` | `100` | The max number of lines of code sent per request (lower uses more tokens but increases the number of suggestions) |
| `additional_instruction` | `nil` | An additional instruction to give the AI, like "Make your responses more brief" |
| `highlight.icon` | `''` | The sign column icon to display for each line containing suggestions |
| `highlight.group` | `'String'` | The `:hi` highlight group for the icon and the suggestion text |
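Since `setup()` simply copies each option onto a matching `vim.g.backseat_*` global (see `lua/backseat/init.lua`), you can also tweak options at runtime without calling `setup()` again; for example:

```lua
-- Adjust options on the fly; the plugin reads these globals on every command
vim.g.backseat_language = 'pirate'  -- switch the suggestion language
vim.g.backseat_split_threshold = 50 -- send smaller chunks per request
```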
# More Examples

================================================
FILE: lua/backseat/fewshot.lua
================================================
local fewshot = {
    messages = {
        {
            role = "system",
            content = [[
You must identify any readability issues in the code snippet.
Some readability issues to consider:
- Unclear naming
- Unclear purpose
- Redundant or obvious comments
- Lack of comments
- Long or complex one liners
- Too much nesting
- Long variable names
- Inconsistent naming and code style
- Code repetition
You may identify additional problems. The user submits a small section of code from a larger file.
Only list lines with readability issues, in the format line=<num>: <issue and proposed solution>
Your commentary must fit on a single line]]
        },
        {
            role = "user",
            content = [[
04 public class Logic {
05     public static void main(String[] args) {
06         Scanner sc = new Scanner(System.in);
07         int n = sc.nextInt();
08         int[] arr = new int[n];
09         for (int i = 0; i < n; i++) {
10             arr[i] = sc.nextInt();
11         }
12         int[] dp = new int[n];
13         dp[0] = arr[0];
14         dp[1] = Math.max(arr[0], arr[1]);
15         for (int i = 2; i < n; i++) {
16             dp[i] = Math.max(dp[i - 1], dp[i - 2] + arr[i]);
17         }
18         System.out.println(dp[n - 1]);
19     }
20 }]]
        },
        {
            role = "assistant",
            content = [[
line=4: The class name 'Logic' is too generic. A more meaningful name could be 'DynamicProgramming'
line=6: The variable name 'sc' is unclear. A more meaningful name could be 'scanner'.
line=7: The variable name 'n' is unclear. A more meaningful name could be 'arraySize' or 'numElements'.
line=8: The variable name 'arr' is unclear. A more descriptive name could be 'inputArray' or 'elementValues'.
line=12: The variable name 'dp' is unclear. A more informative name could be 'maxSum' or 'optimalSolution'.
line=13: There are no comments explaining the meaning of the 'dp' array values and how they relate to the problem statement.
line=15: There are no comments explaining the logic and purpose of the for loop]]
        },
        {
            role = "user",
            content = [[
673 for (let i: number = 0; i < l; i++) {
674     let notAddr: boolean = false;
675     // non standard input
676     if (items[i].scriptSig && !items[i].addr) {
677         items[i].addr = 'Unparsed address [' + u++ + ']';
678         items[i].notAddr = true;
679         notAddr = true;
680     }
681
682     // non standard output
683     if (items[i].scriptPubKey && !items[i].scriptPubKey.addresses) {
684         items[i].scriptPubKey.addresses = ['Unparsed address [' + u++ + ']'];]]
        },
        {
            role = "assistant",
            content = [[
line=673: The variable names 'i' and 'l' are unclear and easily confused with other characters like '1'. More meaningful names could be 'index' and 'length' respectively.
line=674: The variable name 'notAddr' is unclear and a double negative. An alternative could be 'hasUnparsedAddress'.
line=676: The comment "non standard input" is not very informative. It could be more descriptive, e.g., "Check for non standard input address"
line=682: The comment "non standard output" is not very informative. It could be more descriptive, e.g., "Check for non standard output address"
line=683: The variable name 'items' might be more informative if changed to 'transactions' or 'txItems'.
line=684: The array element 'Unparsed address [' + u++ + ']' could use a more descriptive comment, e.g., "Assign a unique identifier to non standard output addresses"
line=684: The variable name 'u' is unclear. A more meaningful name could be 'unparsedAddressCount' or 'unparsedAddressId'.]]
        }
    }
}

return fewshot
================================================
FILE: lua/backseat/init.lua
================================================
local M = {}

local default_opts = {
    openai_api_key = nil,
    openai_model_id = 'gpt-3.5-turbo',
    language = 'english',
    additional_instruction = nil,
    split_threshold = 100,
    highlight = {
        icon = '',
        group = 'String',
    }
}

function M.setup(opts)
    -- Merge default_opts with opts
    opts = vim.tbl_deep_extend('force', default_opts, opts or {})
    -- Copy each option onto a global so the rest of the plugin can read it
    vim.g.backseat_openai_api_key = opts.openai_api_key
    vim.g.backseat_openai_model_id = opts.openai_model_id
    vim.g.backseat_language = opts.language
    vim.g.backseat_additional_instruction = opts.additional_instruction
    vim.g.backseat_split_threshold = opts.split_threshold
    vim.g.backseat_highlight_icon = opts.highlight.icon
    vim.g.backseat_highlight_group = opts.highlight.group
end

return M
================================================
FILE: plugin/backseat.lua
================================================
-- Automatically executed on startup
if vim.g.loaded_backseat then
    return
end
vim.g.loaded_backseat = true

require("backseat").setup()
local fewshot = require("backseat.fewshot") -- The few-shot example messages

-- Create namespace for backseat suggestions
local backseatNamespace = vim.api.nvim_create_namespace("backseat")

-- Prefix all plugin output so it is identifiable in :messages
local function print(msg)
    _G.print("Backseat > " .. msg)
end

local function get_api_key()
    -- Priority: 1. g:backseat_openai_api_key 2. $OPENAI_API_KEY 3. Warn the user
    local api_key = vim.g.backseat_openai_api_key
    if api_key == nil then
        local key = os.getenv("OPENAI_API_KEY")
        if key ~= nil then
            return key
        end
        local message =
            "No API key found. Please set openai_api_key in the setup table or set the $OPENAI_API_KEY environment variable."
        vim.fn.confirm(message, "&OK", 1, "Warning")
        return nil
    end
    return api_key
end

local function get_model_id()
    local model = vim.g.backseat_openai_model_id
    if model == nil then
        if vim.g.backseat_model_id_complained == nil then
            local message =
                "No model id specified. Please set openai_model_id in the setup table. Defaulting to gpt-3.5-turbo for now"
            vim.fn.confirm(message, "&OK", 1, "Warning")
            vim.g.backseat_model_id_complained = 1
        end
        return "gpt-3.5-turbo"
    end
    return model
end

local function get_language()
    return vim.g.backseat_language
end

local function get_additional_instruction()
    return vim.g.backseat_additional_instruction or ""
end

local function get_split_threshold()
    return vim.g.backseat_split_threshold
end

local function get_highlight_icon()
    return vim.g.backseat_highlight_icon
end

local function get_highlight_group()
    return vim.g.backseat_highlight_group
end
-- Word-wrap text to the current window width, returning a list of lines
local function split_long_text(text)
    local lines = vim.split(text, "\n")
    -- Get the usable width of the screen
    local screenWidth = vim.api.nvim_win_get_width(0) - 20
    -- Split any lines that are too long
    local newLines = {}
    for _, line in ipairs(lines) do
        if string.len(line) >= screenWidth then
            local words = vim.split(line, " ")
            local currentLine = ""
            for _, word in ipairs(words) do
                if currentLine == "" then
                    currentLine = word
                elseif string.len(currentLine) + string.len(word) + 1 > screenWidth then
                    table.insert(newLines, currentLine)
                    currentLine = word
                else
                    currentLine = currentLine .. " " .. word
                end
            end
            table.insert(newLines, currentLine)
        else
            table.insert(newLines, line)
        end
    end
    return newLines
end
local function gpt_request(dataJSON, callback, callbackTable)
    local api_key = get_api_key()
    if api_key == nil then
        return nil
    end

    -- Check if curl is installed
    if vim.fn.executable("curl") == 0 then
        vim.fn.confirm("curl installation not found. Please install curl to use Backseat", "&OK", 1, "Warning")
        return nil
    end

    -- Write the request body to a temp file so it can be passed to curl
    local tempFilePath = vim.fn.tempname()
    local tempFile = io.open(tempFilePath, "w")
    if tempFile == nil then
        print("Error creating temp file")
        return nil
    end
    tempFile:write(dataJSON)
    tempFile:close()

    -- Escape the name of the temp file for the command line
    local tempFilePathEscaped = vim.fn.fnameescape(tempFilePath)

    -- Build the curl command, deleting the temp file after the request
    local curlRequest
    local isWindows = vim.fn.has("win32") == 1 or vim.fn.has("win64") == 1
    if not isWindows then
        -- Linux/macOS
        curlRequest = "curl -s https://api.openai.com/v1/chat/completions" ..
            " -H \"Content-Type: application/json\"" ..
            " -H \"Authorization: Bearer " .. api_key .. "\"" ..
            " --data-binary \"@" .. tempFilePathEscaped .. "\"; rm " .. tempFilePathEscaped .. " > /dev/null 2>&1"
    else
        -- Windows
        curlRequest = "curl -s https://api.openai.com/v1/chat/completions" ..
            " -H \"Content-Type: application/json\"" ..
            " -H \"Authorization: Bearer " .. api_key .. "\"" ..
            " --data-binary \"@" .. tempFilePathEscaped .. "\" & del " .. tempFilePathEscaped .. " > nul 2>&1"
    end

    vim.fn.jobstart(curlRequest, {
        stdout_buffered = true,
        on_stdout = function(_, data, _)
            local response = table.concat(data, "\n")
            local success, responseTable = pcall(vim.json.decode, response)
            if success == false or responseTable == nil then
                print("Bad or no response: " .. response)
                return nil
            end
            if responseTable.error ~= nil then
                print("OpenAI Error: " .. responseTable.error.message)
                return nil
            end
            callback(responseTable, callbackTable)
        end,
        on_stderr = function(_, data, _)
            return data
        end,
        on_exit = function(_, data, _)
            return data
        end,
    })
end
local function parse_response(response, partNumberString, bufnr)
    -- Split response.choices[1].message.content into lines
    local lines = vim.split(response.choices[1].message.content, "\n")

    -- Suggestions may span multiple lines, so collect the lines into whole suggestions
    local suggestions = {}
    for _, line in ipairs(lines) do
        if string.sub(line, 1, 5) == "line=" or string.sub(line, 1, 6) == "lines=" then
            -- Start a new suggestion
            table.insert(suggestions, line)
        elseif #suggestions > 0 then
            -- Append lines that don't start with line= or lines= to the previous suggestion
            suggestions[#suggestions] = suggestions[#suggestions] .. "\n" .. line
        end
    end

    if #suggestions == 0 then
        print("AI Says: " ..
            response.choices[1].message.content ..
            " - Used " .. response.usage.total_tokens .. " tokens from model " .. get_model_id() .. partNumberString)
    else
        print("AI made " ..
            #suggestions ..
            " suggestion(s) using " ..
            response.usage.total_tokens .. " tokens from model " .. get_model_id() .. partNumberString)
    end

    -- Act on each suggestion
    for _, suggestion in ipairs(suggestions) do
        -- Get the line number (handles both the "line=" and "lines=" prefixes)
        local lineString = string.sub(suggestion, string.find(suggestion, "=") + 1, string.find(suggestion, ":") - 1)
        -- The string may be in the format "line=1-3", so extract the first number
        if string.find(lineString, "-") ~= nil then
            lineString = string.sub(lineString, 1, string.find(lineString, "-") - 1)
        end
        local lineNum = tonumber(lineString)
        if lineNum == nil then
            -- If the line number is bad, just attach the suggestion to the first line
            lineNum = 1
        end

        -- Get the message and strip a single leading space
        local message = string.sub(suggestion, string.find(suggestion, ":") + 1)
        if string.sub(message, 1, 1) == " " then
            message = string.sub(message, 2)
        end

        -- Split the suggestion into {text, highlight group} pairs for virtual lines
        local newLines = split_long_text(message)
        local virtLines = {}
        for i, line in ipairs(newLines) do
            virtLines[i] = { { line, get_highlight_group() } }
        end

        -- Add suggestion virtual text and an icon in the sign column
        vim.api.nvim_buf_set_extmark(bufnr, backseatNamespace, lineNum - 1, 0, {
            virt_text_pos = "overlay",
            virt_lines = virtLines,
            hl_mode = "combine",
            sign_text = get_highlight_icon(),
            sign_hl_group = get_highlight_group()
        })
    end
end
-- Return the given buffer range as text, each line prefixed with its zero-padded line number
local function prepare_code_snippet(bufnr, startingLineNumber, endingLineNumber)
    local lines = vim.api.nvim_buf_get_lines(bufnr, startingLineNumber - 1, endingLineNumber, false)
    -- Get the max number of digits needed to display a line number
    local maxDigits = string.len(tostring(#lines + startingLineNumber))
    -- Prepend each line with its line number, zero padded to maxDigits
    for i, line in ipairs(lines) do
        lines[i] = string.format("%0" .. maxDigits .. "d", i - 1 + startingLineNumber) .. " " .. line
    end
    return table.concat(lines, "\n")
end
local backseat_callback

-- Pop the next request off the queue and send it
local function backseat_send_from_request_queue(callbackTable)
    -- Stop if there are no more requests in the queue
    if #callbackTable.requests == 0 then
        return nil
    end

    -- Get bufname without the path
    local bufname = vim.fn.fnamemodify(vim.fn.bufname(callbackTable.bufnr), ":t")
    if callbackTable.requestIndex == 0 then
        if callbackTable.startingRequestCount == 1 then
            print("Sending " .. bufname .. " (" .. callbackTable.lineCount .. " lines) and waiting for response...")
        else
            print("Sending " ..
                bufname .. " (split into " .. callbackTable.startingRequestCount .. " requests) and waiting for response...")
        end
    end

    -- Send the first request from the queue
    local requestJSON = table.remove(callbackTable.requests, 1)
    callbackTable.requestIndex = callbackTable.requestIndex + 1
    gpt_request(requestJSON, backseat_callback, callbackTable)
end

-- Callback for a backseat request
function backseat_callback(responseTable, callbackTable)
    if responseTable ~= nil then
        if callbackTable.startingRequestCount == 1 then
            parse_response(responseTable, "", callbackTable.bufnr)
        else
            parse_response(responseTable,
                " (request " .. callbackTable.requestIndex .. " of " .. callbackTable.startingRequestCount .. ")",
                callbackTable.bufnr)
        end
    end
    -- Keep draining the queue until all requests have been sent
    if callbackTable.requestIndex < callbackTable.startingRequestCount + 1 then
        backseat_send_from_request_queue(callbackTable)
    end
end
-- Send the current buffer to the AI for readability feedback
vim.api.nvim_create_user_command("Backseat", function()
    -- Split the current buffer into groups of lines of size splitThreshold
    local splitThreshold = get_split_threshold()
    local bufnr = vim.api.nvim_get_current_buf()
    local lines = vim.api.nvim_buf_get_lines(bufnr, 0, -1, false)
    local numRequests = math.ceil(#lines / splitThreshold)
    local model = get_model_id()

    local requestTable = {
        model = model,
        messages = fewshot.messages
    }

    local requests = {}
    for i = 1, numRequests do
        local startingLineNumber = (i - 1) * splitThreshold + 1
        local text = prepare_code_snippet(bufnr, startingLineNumber, startingLineNumber + splitThreshold - 1)
        if get_additional_instruction() ~= "" then
            text = text .. "\n" .. get_additional_instruction()
        end
        if get_language() ~= "" and get_language() ~= "english" then
            text = text .. "\nRespond only in " .. get_language() .. ", but keep the 'line=<num>:' part in english"
        end

        -- Make a copy of requestTable (by value, not reference)
        local tempRequestTable = vim.deepcopy(requestTable)
        -- Add the code snippet to the request
        table.insert(tempRequestTable.messages, {
            role = "user",
            content = text
        })
        requests[i] = vim.json.encode(tempRequestTable)
    end

    backseat_send_from_request_queue({
        requests = requests,
        startingRequestCount = numRequests,
        requestIndex = 0,
        bufnr = bufnr,
        lineCount = #lines,
    })
end, {})
-- Use the underlying chat API to ask a question about the current buffer's code
local function backseat_ask_callback(responseTable)
    if responseTable == nil then
        return nil
    end
    local message = "AI Says: " .. responseTable.choices[1].message.content
    -- Split long messages into multiple lines
    message = table.concat(split_long_text(message), "\r\n")
    vim.fn.confirm(message, "&OK", 1, "Generic")
end

vim.api.nvim_create_user_command("BackseatAsk", function(opts)
    local bufnr = vim.api.nvim_get_current_buf()
    local text = prepare_code_snippet(bufnr, 1, -1)
    if get_additional_instruction() ~= "" then
        text = text .. "\n" .. get_additional_instruction()
    end
    if get_language() ~= "" and get_language() ~= "english" then
        text = text .. "\nRespond only in " .. get_language()
    end

    local bufname = vim.fn.fnamemodify(vim.fn.bufname(bufnr), ":t")
    print("Asking AI '" .. opts.args .. "' (in " .. bufname .. ")...")

    gpt_request(vim.json.encode({
        model = get_model_id(),
        messages = {
            {
                role = "system",
                content = "You are a helpful assistant who can respond to questions about the following code. You can also act as a regular assistant"
            },
            {
                role = "user",
                content = text
            },
            {
                role = "user",
                content = opts.args
            }
        },
    }), backseat_ask_callback)
end, { nargs = "+" })
-- Clear all backseat virtual text and signs
vim.api.nvim_create_user_command("BackseatClear", function()
    local bufnr = vim.api.nvim_get_current_buf()
    vim.api.nvim_buf_clear_namespace(bufnr, backseatNamespace, 0, -1)
end, {})

-- Clear backseat virtual text and signs for the current line
vim.api.nvim_create_user_command("BackseatClearLine", function()
    local bufnr = vim.api.nvim_get_current_buf()
    local lineNum = vim.api.nvim_win_get_cursor(0)[1]
    vim.api.nvim_buf_clear_namespace(bufnr, backseatNamespace, lineNum - 1, lineNum)
end, {})