Repository: GaiZhenbiao/ChuanhuChatGPT
Branch: main
Commit: 6cb3e2212add
Files: 106
Total size: 817.5 KB
Directory structure:
gitextract_9kyxuu8j/
├── .github/
│   ├── CONTRIBUTING.md
│   ├── ISSUE_TEMPLATE/
│   │   ├── config.yml
│   │   ├── feature-request.yml
│   │   ├── report-bug.yml
│   │   ├── report-docker.yml
│   │   ├── report-localhost.yml
│   │   ├── report-others.yml
│   │   └── report-server.yml
│   ├── pull_request_template.md
│   └── workflows/
│       ├── Build_Docker.yml
│       └── Release_docker.yml
├── .gitignore
├── CITATION.cff
├── ChuanhuChatbot.py
├── Dockerfile
├── LICENSE
├── README.md
├── config_example.json
├── configs/
│   └── ds_config_chatbot.json
├── locale/
│   ├── en_US.json
│   ├── extract_locale.py
│   ├── ja_JP.json
│   ├── ko_KR.json
│   ├── ru_RU.json
│   ├── sv_SE.json
│   ├── vi_VN.json
│   └── zh_CN.json
├── modules/
│   ├── __init__.py
│   ├── config.py
│   ├── index_func.py
│   ├── models/
│   │   ├── Azure.py
│   │   ├── ChatGLM.py
│   │   ├── ChuanhuAgent.py
│   │   ├── Claude.py
│   │   ├── DALLE3.py
│   │   ├── ERNIE.py
│   │   ├── GoogleGemini.py
│   │   ├── GoogleGemma.py
│   │   ├── GooglePaLM.py
│   │   ├── Groq.py
│   │   ├── LLaMA.py
│   │   ├── MOSS.py
│   │   ├── Ollama.py
│   │   ├── OpenAIInstruct.py
│   │   ├── OpenAIVision.py
│   │   ├── Qwen.py
│   │   ├── StableLM.py
│   │   ├── XMChat.py
│   │   ├── __init__.py
│   │   ├── base_model.py
│   │   ├── configuration_moss.py
│   │   ├── inspurai.py
│   │   ├── midjourney.py
│   │   ├── minimax.py
│   │   ├── modeling_moss.py
│   │   ├── models.py
│   │   ├── spark.py
│   │   └── tokenization_moss.py
│   ├── overwrites.py
│   ├── pdf_func.py
│   ├── presets.py
│   ├── repo.py
│   ├── shared.py
│   ├── train_func.py
│   ├── utils.py
│   ├── webui.py
│   └── webui_locale.py
├── readme/
│   ├── README_en.md
│   ├── README_ja.md
│   ├── README_ko.md
│   └── README_ru.md
├── requirements.txt
├── requirements_advanced.txt
├── run_Linux.sh
├── run_Windows.bat
├── run_macOS.command
└── web_assets/
    ├── html/
    │   ├── appearance_switcher.html
    │   ├── billing_info.html
    │   ├── chatbot_header_btn.html
    │   ├── chatbot_more.html
    │   ├── chatbot_placeholder.html
    │   ├── close_btn.html
    │   ├── footer.html
    │   ├── func_nav.html
    │   ├── header_title.html
    │   ├── update.html
    │   └── web_config.html
    ├── javascript/
    │   ├── ChuanhuChat.js
    │   ├── chat-history.js
    │   ├── chat-list.js
    │   ├── external-scripts.js
    │   ├── fake-gradio.js
    │   ├── file-input.js
    │   ├── localization.js
    │   ├── message-button.js
    │   ├── sliders.js
    │   ├── updater.js
    │   ├── user-info.js
    │   ├── utils.js
    │   └── webui.js
    ├── manifest.json
    └── stylesheet/
        ├── ChuanhuChat.css
        ├── chatbot.css
        ├── custom-components.css
        ├── markdown.css
        └── override-gradio.css
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/CONTRIBUTING.md
================================================
# How to Contribute

Thank you for your interest in **Chuanhu Chat**, and for taking the time to contribute to our project!

Before you start, here are a few brief tips. Follow the links for more information.

## New to GitHub?

If you are new to GitHub, these resources can help you start contributing to open-source projects:

- [Finding ways to contribute to open source on GitHub](https://docs.github.com/en/get-started/exploring-projects-on-github/finding-ways-to-contribute-to-open-source-on-github)
- [Set up Git](https://docs.github.com/en/get-started/quickstart/set-up-git)
- [GitHub flow](https://docs.github.com/en/get-started/quickstart/github-flow)
- [Collaborating with pull requests](https://docs.github.com/en/github/collaborating-with-pull-requests)

## Filing Issues

Yes, filing an issue is itself a way of contributing to the project! But only a well-formed issue actually helps.

Our [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) explains how to file a non-duplicate issue, when you should open an issue, and when you should ask in the discussions instead.

**Please note that the issue tracker is not a comment section for the project.**

> **Note**
>
> Also, please mind the difference between a "question" and a "problem".
> If you need to report an actual technical problem, fault, or bug in the project itself, you are welcome to open a new issue. But if you have simply run into something you cannot solve on your own and want to ask other users or us (a question), the best place is a new thread in the discussions. If you are unsure, please consider asking in the discussions first.
>
> For now we assume that anything posted as an issue is a question, but we would rather not keep seeing "How do I do this?" posts in the issue tracker QAQ.

## Submitting a Pull Request

If you are able to, you can modify this project's source code and submit a pull request! Once it is merged, your name will appear in CONTRIBUTORS~

Our [contribution guide](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南) spells out what to do at each step~ If you want to submit source-code changes, go take a look~

> **Note**
>
> We will not force you to follow our conventions, but we hope you can help lighten our workload.

## Joining the Discussions

The discussions area is where we talk.

If you want to share a great new idea or your usage tips, please join our discussions! Many users also post their questions there; if you can answer them, we will be immensely grateful!

-----

Thanks again for reading this far, and thank you for contributing to our project!
================================================
FILE: .github/ISSUE_TEMPLATE/config.yml
================================================
blank_issues_enabled:
contact_links:
  - name: Discussions
    url: https://github.com/GaiZhenbiao/ChuanhuChatGPT/discussions
    about: If you have a question, please ask in the discussions first~
================================================
FILE: .github/ISSUE_TEMPLATE/feature-request.yml
================================================
name: Feature request
description: "Request a new feature!"
title: "[Feature request]: "
labels: ["feature request"]
body:
  - type: markdown
    attributes:
      value: You can request new features! Please take a moment to fill in the information below~
  - type: textarea
    attributes:
      label: Related problem
      description: Is this feature request related to a problem?
      placeholder: Sometimes ChatGPT returns an error after I send a message, and after refreshing I have to retype everything, which is tedious
    validations:
      required: false
  - type: textarea
    attributes:
      label: Possible solution
      description: If you can, sketch a solution~ Or, what feature would you like to see?
      placeholder: Keep the sent text in the input box or chat bubble after a failed send
    validations:
      required: true
  - type: checkboxes
    attributes:
      label: Help with development
      description: It would be even better if you could help develop this and submit a pull request!<br />
        See the [contribution guide](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南)
      options:
        - label: I am willing to help develop this!
          required: false
  - type: textarea
    attributes:
      label: Additional context
      description: |
        Links? References? Any extra background information!
================================================
FILE: .github/ISSUE_TEMPLATE/report-bug.yml
================================================
name: Bug report
description: "Report a bug that you are confident is a bug, not a problem on your side"
title: "[Bug]: "
labels: ["bug"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for filing an issue! Please fill in the information below as completely as possible to help us locate the problem~
        **Before anything else, please make sure you have read the [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) page**.
        If you are confident this is a bug on our side, and not a deployment failure caused by your setup, you are welcome to file this issue!
        If you cannot tell whether it is a bug or a problem on your side, please choose one of the [other issue templates](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/new/choose).
        ------
  - type: checkboxes
    attributes:
      label: Is there an existing issue for this bug?
      description: Please search all issues and the [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) to see whether the issue you want to report already exists.
      options:
        - label: I confirm there is no existing issue, and I have read the **FAQ**.
          required: true
  - type: textarea
    id: what-happened
    attributes:
      label: Observed behavior
      description: Please describe the bug you encountered.<br />
        Tip: if possible, also attach screenshots of the error, e.g. of the locally deployed web page and of the error report in the terminal.
        If possible, also attach the conversation history in `.json` format.
      placeholder: What happened?
    validations:
      required: true
  - type: textarea
    attributes:
      label: Steps to reproduce
      description: What did you do before the bug appeared?
      placeholder: |
        1. Complete a local deployment normally
        2. Select the GPT3.5-turbo model and enter a valid API key
        3. Ask ChatGPT in the chat box to "output trigonometric functions in LaTeX format"
        4. The program is terminated after ChatGPT outputs part of the response
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Error logs
      description: Please paste the main error report from the terminal here.
      render: shell
  - type: textarea
    attributes:
      label: Environment
      description: |
        The bottom of the web page lists the version information of your environment; please be sure to fill it in. For example:
        - **OS**: Windows11 22H2
        - **Browser**: Chrome
        - **Gradio version**: 3.22.1
        - **Python version**: 3.11.1
      value: |
        - OS:
        - Browser:
        - Gradio version:
        - Python version:
    validations:
      required: false
  - type: checkboxes
    attributes:
      label: Help with a fix
      description: It would be even better if you are able and willing to help fix this and submit a pull request!<br />
        See the [contribution guide](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南)
      options:
        - label: I am willing to help fix this!
          required: false
  - type: textarea
    attributes:
      label: Additional context
      description: Links? References? Any extra background information!
================================================
FILE: .github/ISSUE_TEMPLATE/report-docker.yml
================================================
name: Docker deployment error
description: "Report a problem or error when deploying with Docker"
title: "[Docker]: "
labels: ["question","docker deployment"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for filing an issue! Please fill in the information below as completely as possible to help us locate the problem~
        **Before anything else, please make sure you have read the [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) page** to see whether it already answers your question.
        If not, please search the [issues](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues) and [discussions](https://github.com/GaiZhenbiao/ChuanhuChatGPT/discussions) for the same or similar problems.
        ------
  - type: checkboxes
    attributes:
      label: Is there existing feedback or an existing answer?
      description: Please search the issues, discussions, and [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) to see whether the issue you want to report already exists.
      options:
        - label: I confirm there is no existing issue or discussion, and I have read the **FAQ**.
          required: true
  - type: checkboxes
    attributes:
      label: Is this a proxy-configuration question?
      description: Please do not file proxy-configuration issues. If you have such questions, please go to the [discussions](https://github.com/GaiZhenbiao/ChuanhuChatGPT/discussions).
      options:
        - label: I confirm this is not a proxy-configuration question.
          required: true
  - type: textarea
    id: what-happened
    attributes:
      label: Error description
      description: Please describe the error or problem you encountered.<br />
        Tip: if possible, also attach screenshots of the error, e.g. of the deployed web page and of the error report in the terminal.
        If possible, also attach the conversation history in `.json` format.
      placeholder: What happened?
    validations:
      required: true
  - type: textarea
    attributes:
      label: Steps to reproduce
      description: What did you do before the error appeared?
      placeholder: |
        1. Complete a local deployment normally
        2. Select the GPT3.5-turbo model and enter a valid API key
        3. Ask ChatGPT in the chat box to "output trigonometric functions in LaTeX format"
        4. The program is terminated after ChatGPT outputs part of the response
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Error logs
      description: Please paste the main error report from the terminal here.
      render: shell
  - type: textarea
    attributes:
      label: Environment
      description: |
        The bottom of the web page lists the version information of your environment; please be sure to fill it in. For example:
        - **OS**: Linux/amd64
        - **Docker version**: 1.8.2
        - **Gradio version**: 3.22.1
        - **Python version**: 3.11.1
      value: |
        - OS:
        - Docker version:
        - Gradio version:
        - Python version:
    validations:
      required: false
  - type: textarea
    attributes:
      label: Additional context
      description: Links? References? Any extra background information!
================================================
FILE: .github/ISSUE_TEMPLATE/report-localhost.yml
================================================
name: Local deployment error
description: "Report a problem or error with a local deployment (best choice for beginners)"
title: "[Local deployment]: "
labels: ["question","localhost deployment"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for filing an issue! Please fill in the information below as completely as possible to help us locate the problem~
        **Before anything else, please make sure you have read the [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) page** to see whether it already answers your question.
        If not, please search the [issues](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues) and [discussions](https://github.com/GaiZhenbiao/ChuanhuChatGPT/discussions) for the same or similar problems.
        **Also, please stop filing issues about `Something went wrong Expecting value: line 1 column 1 (char 0)` or about proxy configuration; read the [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) page again, and if that really does not help, go to the discussions.**
        ------
  - type: checkboxes
    attributes:
      label: Is there existing feedback or an existing answer?
      description: Please search the issues, discussions, and [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) to see whether the issue you want to report already exists.
      options:
        - label: I confirm there is no existing issue or discussion, and I have read the **FAQ**.
          required: true
  - type: checkboxes
    attributes:
      label: Is this a proxy-configuration question?
      description: Please do not file proxy-configuration issues. If you have such questions, please go to the [discussions](https://github.com/GaiZhenbiao/ChuanhuChatGPT/discussions).
      options:
        - label: I confirm this is not a proxy-configuration question.
          required: true
  - type: textarea
    id: what-happened
    attributes:
      label: Error description
      description: Please describe the error or problem you encountered.<br />
        Tip: if possible, also attach screenshots of the error, e.g. of the locally deployed web page and of the error report in the terminal.
        If possible, also attach the conversation history in `.json` format.
      placeholder: What happened?
    validations:
      required: true
  - type: textarea
    attributes:
      label: Steps to reproduce
      description: What did you do before the error appeared?
      placeholder: |
        1. Complete a local deployment normally
        2. Select the GPT3.5-turbo model and enter a valid API key
        3. Ask ChatGPT in the chat box to "output trigonometric functions in LaTeX format"
        4. The program is terminated after ChatGPT outputs part of the response
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Error logs
      description: Please paste the main error report from the terminal here.
      render: shell
  - type: textarea
    attributes:
      label: Environment
      description: |
        The bottom of the web page lists the version information of your environment; please be sure to fill it in. For example:
        - **OS**: Windows11 22H2
        - **Browser**: Chrome
        - **Gradio version**: 3.22.1
        - **Python version**: 3.11.1
      value: |
        - OS:
        - Browser:
        - Gradio version:
        - Python version:
      render: markdown
    validations:
      required: false
  - type: textarea
    attributes:
      label: Additional context
      description: Links? References? Any extra background information!
================================================
FILE: .github/ISSUE_TEMPLATE/report-others.yml
================================================
name: Other errors
description: "Report other problems (e.g. with the Hugging Face Space)"
title: "[Other]: "
labels: ["question"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for filing an issue! Please fill in the information below as completely as possible to help us locate the problem~
        **Before anything else, please make sure you have read the [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) page** to see whether it already answers your question.
        If not, please search the [issues](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues) and [discussions](https://github.com/GaiZhenbiao/ChuanhuChatGPT/discussions) for the same or similar problems.
        ------
  - type: checkboxes
    attributes:
      label: Is there existing feedback or an existing answer?
      description: Please search the issues, discussions, and [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) to see whether the issue you want to report already exists.
      options:
        - label: I confirm there is no existing issue or discussion, and I have read the **FAQ**.
          required: true
  - type: textarea
    id: what-happened
    attributes:
      label: Error description
      description: Please describe the error or problem you encountered.<br />
        Tip: if possible, also attach screenshots of the error, e.g. of the deployed web page and of the error report in the terminal.
        If possible, also attach the conversation history in `.json` format.
      placeholder: What happened?
    validations:
      required: true
  - type: textarea
    attributes:
      label: Steps to reproduce
      description: What did you do before the error appeared?
      placeholder: |
        1. Complete a local deployment normally
        2. Select the GPT3.5-turbo model and enter a valid API key
        3. Ask ChatGPT in the chat box to "output trigonometric functions in LaTeX format"
        4. The program is terminated after ChatGPT outputs part of the response
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Error logs
      description: Please paste the main error report from the terminal here.
      render: shell
  - type: textarea
    attributes:
      label: Environment
      description: |
        The bottom of the web page lists the version information of your environment; please be sure to fill it in. For example:
        - **OS**: Windows11 22H2
        - **Browser**: Chrome
        - **Gradio version**: 3.22.1
        - **Python version**: 3.11.1
      value: |
        - OS:
        - Browser:
        - Gradio version:
        - Python version:
        (or other information about your environment)
    validations:
      required: false
  - type: textarea
    attributes:
      label: Additional context
      description: Links? References? Any extra background information!
================================================
FILE: .github/ISSUE_TEMPLATE/report-server.yml
================================================
name: Server deployment error
description: "Report a problem or error when deploying on a remote server"
title: "[Remote deployment]: "
labels: ["question","server deployment"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for filing an issue! Please fill in the information below as completely as possible to help us locate the problem~
        **Before anything else, please make sure you have read the [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) page** to see whether it already answers your question.
        If not, please search the [issues](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues) and [discussions](https://github.com/GaiZhenbiao/ChuanhuChatGPT/discussions) for the same or similar problems.
        ------
  - type: checkboxes
    attributes:
      label: Is there existing feedback or an existing answer?
      description: Please search the issues, discussions, and [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) to see whether the issue you want to report already exists.
      options:
        - label: I confirm there is no existing issue or discussion, and I have read the **FAQ**.
          required: true
  - type: checkboxes
    attributes:
      label: Is this a proxy-configuration question?
      description: Please do not file proxy-configuration issues. If you have such questions, please go to the [discussions](https://github.com/GaiZhenbiao/ChuanhuChatGPT/discussions).
      options:
        - label: I confirm this is not a proxy-configuration question.
          required: true
  - type: textarea
    id: what-happened
    attributes:
      label: Error description
      description: Please describe the error or problem you encountered.<br />
        Tip: if possible, also attach screenshots of the error, e.g. of the deployed web page and of the error report in the terminal.
        If possible, also attach the conversation history in `.json` format.
      placeholder: What happened?
    validations:
      required: true
  - type: textarea
    attributes:
      label: Steps to reproduce
      description: What did you do before the error appeared?
      placeholder: |
        1. Complete a local deployment normally
        2. Select the GPT3.5-turbo model and enter a valid API key
        3. Ask ChatGPT in the chat box to "output trigonometric functions in LaTeX format"
        4. The program is terminated after ChatGPT outputs part of the response
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Error logs
      description: Please paste the main error report from the terminal here.
      render: shell
  - type: textarea
    attributes:
      label: Environment
      description: |
        The bottom of the web page lists the version information of your environment; please be sure to fill it in. For example:
        - **OS**: Windows11 22H2
        - **Docker version**: 1.8.2
        - **Gradio version**: 3.22.1
        - **Python version**: 3.11.1
      value: |
        - OS:
        - Server:
        - Gradio version:
        - Python version:
    validations:
      required: false
  - type: textarea
    attributes:
      label: Additional context
      description: Links? References? Any extra background information!
================================================
FILE: .github/pull_request_template.md
================================================
<!--
This is a pull request template. This paragraph is in a comment; please review this note first, as this text will not be displayed when you submit.
1. Before submitting a pull request, it is recommended that you have already reviewed [https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南] to understand our general requirements.
2. If your pull request includes multiple different feature additions or bug fixes, please make sure to split your submission into multiple atomic commits. You can even submit multiple pull requests from different branches.
3. However, even if your submission does not fully comply with the guidelines, feel free to submit it directly; we will review it. In any case, we welcome your contributions!
-->
## Author's Notes
### Description
Describe the changes your pull request makes.
Please also attach before & after screenshots of the running program, so the effect of your changes is easy to see.
### Related issues
(If any) please list the issues related to this pull request.
### Additional information
(If any) please provide any other information or notes that will help other contributors understand your changes.
If you are submitting a draft pull request, please also note its progress here.
<!-- ############ Copilot for pull request ############
DO NOT DELETE THE CONTENT BELOW!
## Copilot4PR [deprecated on 2023-12-15]
copilot:all
-->
================================================
FILE: .github/workflows/Build_Docker.yml
================================================
name: Build Docker when Push

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set commit SHA
        run: echo "COMMIT_SHA=$(echo ${{ github.sha }} | cut -c 1-7)" >> ${GITHUB_ENV}
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.MY_TOKEN }}
      - name: Owner names
        run: |
          GITOWNER=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')
          echo "GITOWNER=$GITOWNER" >> ${GITHUB_ENV}
      - name: Build and export
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: false
          tags: |
            ghcr.io/${{ env.GITOWNER }}/chuanhuchatgpt:latest
            ghcr.io/${{ env.GITOWNER }}/chuanhuchatgpt:${{ github.sha }}
          outputs: type=oci,dest=/tmp/myimage-${{ env.COMMIT_SHA }}.tar
      - name: Upload artifact
        uses: actions/upload-artifact@v3
        with:
          name: chuanhuchatgpt-${{ env.COMMIT_SHA }}
          path: /tmp/myimage-${{ env.COMMIT_SHA }}.tar
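
The `Set commit SHA` step above derives the seven-character short hash used in the artifact name. A minimal standalone sketch of that transformation (the hash below is a hypothetical placeholder, not this repository's actual commit):

```shell
# Shorten a full 40-character commit hash to the 7-character form
# used in the artifact name, exactly as the workflow's `cut -c 1-7` does.
FULL_SHA="0123456789abcdef0123456789abcdef01234567"  # hypothetical hash
COMMIT_SHA=$(echo "$FULL_SHA" | cut -c 1-7)
echo "$COMMIT_SHA"
```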
================================================
FILE: .github/workflows/Release_docker.yml
================================================
name: Build and Push Docker when Release

on:
  release:
    types: [published]
  workflow_dispatch:

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          ref: ${{ github.event.release.target_commitish }}
      - name: Set release tag
        run: |
          echo "RELEASE_TAG=${{ github.event.release.tag_name }}" >> ${GITHUB_ENV}
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.MY_TOKEN }}
      - name: Owner names
        run: |
          GITOWNER=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')
          echo "GITOWNER=$GITOWNER" >> ${GITHUB_ENV}
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            ghcr.io/${{ env.GITOWNER }}/chuanhuchatgpt:latest
            ghcr.io/${{ env.GITOWNER }}/chuanhuchatgpt:${{ env.RELEASE_TAG }}
            ${{ secrets.DOCKERHUB_USERNAME }}/chuanhuchatgpt:latest
            ${{ secrets.DOCKERHUB_USERNAME }}/chuanhuchatgpt:${{ env.RELEASE_TAG }}
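
The `Owner names` step above lowercases the repository owner before it is used in image tags, because container registries require lowercase repository names. The same transformation, standalone:

```shell
# Lowercase the GitHub owner name the same way the workflow's
# `tr '[:upper:]' '[:lower:]'` step does before tagging images.
OWNER="GaiZhenbiao"
GITOWNER=$(echo "$OWNER" | tr '[:upper:]' '[:lower:]')
echo "$GITOWNER"  # prints "gaizhenbiao"
```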
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
history/
index/
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# Mac system file
**/.DS_Store
#vscode
.vscode
# config files / model files
api_key.txt
config.json
auth.json
.models/
models/*
lora/
.idea
templates/*
files/
tmp/
scripts/
include/
pyvenv.cfg
create_release.sh
================================================
FILE: CITATION.cff
================================================
cff-version: 1.2.0
title: Chuanhu Chat
message: >-
  If you use this software, please cite it using these
  metadata.
type: software
authors:
  - given-names: Chuanhu
    orcid: https://orcid.org/0000-0001-8954-8598
  - given-names: MZhao
    orcid: https://orcid.org/0000-0003-2298-6213
  - given-names: Keldos
    orcid: https://orcid.org/0009-0005-0357-272X
repository-code: 'https://github.com/GaiZhenbiao/ChuanhuChatGPT'
url: 'https://github.com/GaiZhenbiao/ChuanhuChatGPT'
abstract: This software provides a light and easy-to-use interface for ChatGPT API and many LLMs.
license: GPL-3.0
commit: c6c08bc62ef80e37c8be52f65f9b6051a7eea1fa
version: '20230709'
date-released: '2023-07-09'
================================================
FILE: ChuanhuChatbot.py
================================================
# -*- coding:utf-8 -*-
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s",
)

from modules.models.models import get_model
from modules.train_func import *
from modules.repo import *
from modules.webui import *
from modules.overwrites import patch_gradio
from modules.presets import *
from modules.utils import *
from modules.config import *
from modules import config
import gradio as gr
import colorama

logging.getLogger("httpx").setLevel(logging.WARNING)

patch_gradio()

# with open("web_assets/css/ChuanhuChat.css", "r", encoding="utf-8") as f:
#     ChuanhuChatCSS = f.read()

def create_new_model():
    return get_model(model_name=MODELS[DEFAULT_MODEL], access_key=my_api_key)[0]
with gr.Blocks(theme=small_and_beautiful_theme) as demo:
user_name = gr.Textbox("", visible=False)
promptTemplates = gr.State(load_template(get_template_names()[0], mode=2))
user_question = gr.State("")
assert type(my_api_key) == str
user_api_key = gr.State(my_api_key)
current_model = gr.State()
topic = gr.State(i18n("未命名对话历史记录"))
with gr.Row(elem_id="chuanhu-header"):
gr.HTML(get_html("header_title.html").format(
app_title=CHUANHU_TITLE), elem_id="app-title")
status_display = gr.Markdown(get_geoip, elem_id="status-display")
with gr.Row(elem_id="float-display"):
user_info = gr.Markdown(
value="getting user info...", elem_id="user-info")
update_info = gr.HTML(get_html("update.html").format(
current_version=repo_tag_html(),
version_time=version_time(),
cancel_btn=i18n("取消"),
update_btn=i18n("更新"),
seenew_btn=i18n("详情"),
ok_btn=i18n("好"),
close_btn=i18n("关闭"),
reboot_btn=i18n("立即重启"),
), visible=check_update)
with gr.Row(equal_height=True, elem_id="chuanhu-body"):
with gr.Column(elem_id="menu-area"):
with gr.Column(elem_id="chuanhu-history"):
with gr.Group():
with gr.Row(elem_id="chuanhu-history-header"):
with gr.Row(elem_id="chuanhu-history-search-row"):
with gr.Column(min_width=150, scale=2):
historySearchTextbox = gr.Textbox(show_label=False, container=False, placeholder=i18n(
"搜索(支持正则)..."), lines=1, elem_id="history-search-tb")
with gr.Column(min_width=52, scale=1, elem_id="gr-history-header-btns"):
uploadHistoryBtn = gr.UploadButton(
interactive=True, label="", file_types=[".json"], elem_id="gr-history-upload-btn", type="binary")
historyRefreshBtn = gr.Button("", elem_id="gr-history-refresh-btn")
with gr.Row(elem_id="chuanhu-history-body"):
with gr.Column(scale=6, elem_id="history-select-wrap"):
historySelectList = gr.Radio(
label=i18n("从列表中加载对话"),
choices=get_history_names(),
value=get_first_history_name(),
# multiselect=False,
container=False,
elem_id="history-select-dropdown"
)
with gr.Row(visible=False):
with gr.Column(min_width=42, scale=1):
historyDeleteBtn = gr.Button(
"🗑️", elem_id="gr-history-delete-btn")
with gr.Row(visible=False):
with gr.Column(scale=6):
saveFileName = gr.Textbox(
show_label=True,
placeholder=i18n("设置文件名: 默认为.json,可选为.md"),
label=i18n("设置保存文件名"),
value=i18n("对话历史记录"),
elem_classes="no-container"
# container=False,
)
with gr.Column(scale=1):
renameHistoryBtn = gr.Button(
i18n("💾 保存对话"), elem_id="gr-history-save-btn")
downloadHistoryJSONBtn = gr.DownloadButton(
i18n("历史记录(JSON)"), elem_id="gr-history-download-json-btn")
downloadHistoryMarkdownBtn = gr.DownloadButton(
i18n("导出为 Markdown"), elem_id="gr-history-download-md-btn")
with gr.Column(elem_id="chuanhu-menu-footer"):
with gr.Row(elem_id="chuanhu-func-nav"):
gr.HTML(get_html("func_nav.html"))
# gr.HTML(get_html("footer.html").format(versions=versions_html()), elem_id="footer")
# gr.Markdown(CHUANHU_DESCRIPTION, elem_id="chuanhu-author")
with gr.Column(elem_id="chuanhu-area", scale=5):
with gr.Column(elem_id="chatbot-area"):
with gr.Row(elem_id="chatbot-header"):
model_select_dropdown = gr.Dropdown(
label=i18n("选择模型"), choices=MODELS, multiselect=False, value=MODELS[DEFAULT_MODEL], interactive=True,
show_label=False, container=False, elem_id="model-select-dropdown", filterable=False
)
lora_select_dropdown = gr.Dropdown(
label=i18n("选择模型"), choices=[], multiselect=False, interactive=True, visible=False,
container=False,
)
gr.HTML(get_html("chatbot_header_btn.html").format(
json_label=i18n("历史记录(JSON)"),
md_label=i18n("导出为 Markdown")
), elem_id="chatbot-header-btn-bar")
with gr.Row():
chatbot = gr.Chatbot(
label="Chuanhu Chat",
elem_id="chuanhu-chatbot",
latex_delimiters=latex_delimiters_set,
sanitize_html=False,
# height=700,
show_label=False,
avatar_images=[config.user_avatar, config.bot_avatar],
show_share_button=False,
placeholder=setPlaceholder(model_name=MODELS[DEFAULT_MODEL]),
)
with gr.Row(elem_id="chatbot-footer"):
with gr.Column(elem_id="chatbot-input-box"):
with gr.Row(elem_id="chatbot-input-row"):
gr.HTML(get_html("chatbot_more.html").format(
single_turn_label=i18n("单轮对话"),
websearch_label=i18n("在线搜索"),
upload_file_label=i18n("上传文件"),
uploaded_files_label=i18n("知识库文件"),
uploaded_files_tip=i18n("在工具箱中管理知识库文件")
))
with gr.Row(elem_id="chatbot-input-tb-row"):
with gr.Column(min_width=225, scale=12):
user_input = gr.Textbox(
elem_id="user-input-tb",
show_label=False,
placeholder=i18n("在这里输入"),
elem_classes="no-container",
max_lines=5,
# container=False
)
with gr.Column(min_width=42, scale=1, elem_id="chatbot-ctrl-btns"):
submitBtn = gr.Button(
value="", variant="primary", elem_id="submit-btn")
cancelBtn = gr.Button(
value="", variant="secondary", visible=False, elem_id="cancel-btn")
# Note: Buttons below are set invisible in UI. But they are used in JS.
with gr.Row(elem_id="chatbot-buttons", visible=False):
with gr.Column(min_width=120, scale=1):
emptyBtn = gr.Button(
i18n("🧹 新的对话"), elem_id="empty-btn"
)
with gr.Column(min_width=120, scale=1):
retryBtn = gr.Button(
i18n("🔄 重新生成"), elem_id="gr-retry-btn")
with gr.Column(min_width=120, scale=1):
delFirstBtn = gr.Button(i18n("🗑️ 删除最旧对话"))
with gr.Column(min_width=120, scale=1):
delLastBtn = gr.Button(
i18n("🗑️ 删除最新对话"), elem_id="gr-dellast-btn")
with gr.Row(visible=False) as like_dislike_area:
with gr.Column(min_width=20, scale=1):
likeBtn = gr.Button(
"👍", elem_id="gr-like-btn")
with gr.Column(min_width=20, scale=1):
dislikeBtn = gr.Button(
"👎", elem_id="gr-dislike-btn")
with gr.Column(elem_id="toolbox-area", scale=1):
# For CSS setting, there is an extra box. Don't remove it.
with gr.Group(elem_id="chuanhu-toolbox"):
with gr.Row():
gr.Markdown("## "+i18n("工具箱"))
gr.HTML(get_html("close_btn.html").format(
obj="toolbox"), elem_classes="close-btn")
with gr.Tabs(elem_id="chuanhu-toolbox-tabs"):
with gr.Tab(label=i18n("对话")):
with gr.Accordion(label=i18n("模型"), open=not HIDE_MY_KEY, visible=not HIDE_MY_KEY):
modelDescription = gr.Markdown(
elem_id="gr-model-description",
value=i18n(MODEL_METADATA[MODELS[DEFAULT_MODEL]]["description"]),
visible=False,
)
keyTxt = gr.Textbox(
show_label=True,
placeholder=f"Your API-key...",
value=hide_middle_chars(user_api_key.value),
type="password",
visible=not HIDE_MY_KEY,
label="API-Key",
elem_id="api-key"
)
if multi_api_key:
usageTxt = gr.Markdown(i18n(
"多账号模式已开启,无需输入key,可直接开始对话"), elem_id="usage-display", elem_classes="insert-block", visible=show_api_billing)
else:
usageTxt = gr.Markdown(i18n(
"**发送消息** 或 **提交key** 以显示额度"), elem_id="usage-display", elem_classes="insert-block", visible=show_api_billing)
gr.Markdown("---", elem_classes="hr-line", visible=not HIDE_MY_KEY)
with gr.Accordion(label="Prompt", open=True):
systemPromptTxt = gr.Textbox(
show_label=True,
placeholder=i18n("在这里输入System Prompt..."),
label="System prompt",
value=INITIAL_SYSTEM_PROMPT,
lines=8
)
retain_system_prompt_checkbox = gr.Checkbox(
label=i18n("新建对话保留Prompt"), value=False, visible=True, elem_classes="switch-checkbox")
with gr.Accordion(label=i18n("加载Prompt模板"), open=False):
with gr.Column():
with gr.Row():
with gr.Column(scale=6):
templateFileSelectDropdown = gr.Dropdown(
label=i18n("选择Prompt模板集合文件"),
choices=get_template_names(),
multiselect=False,
value=get_template_names()[0],
# container=False,
)
with gr.Column(scale=1):
templateRefreshBtn = gr.Button(
i18n("🔄 刷新"))
with gr.Row():
with gr.Column():
templateSelectDropdown = gr.Dropdown(
label=i18n("从Prompt模板中加载"),
choices=load_template(
get_template_names()[
0], mode=1
),
multiselect=False,
# container=False,
)
gr.Markdown("---", elem_classes="hr-line")
with gr.Accordion(label=i18n("知识库"), open=True, elem_id="gr-kb-accordion"):
use_websearch_checkbox = gr.Checkbox(label=i18n(
"使用在线搜索"), value=False, elem_classes="switch-checkbox", elem_id="gr-websearch-cb", visible=False)
index_files = gr.Files(label=i18n(
"上传"), type="filepath", file_types=[".pdf", ".docx", ".pptx", ".epub", ".xlsx", ".txt", "text", "image"], elem_id="upload-index-file")
two_column = gr.Checkbox(label=i18n(
"双栏pdf"), value=advance_docs["pdf"].get("two_column", False))
summarize_btn = gr.Button(i18n("总结"))
# TODO: 公式ocr
# formula_ocr = gr.Checkbox(label=i18n("识别公式"), value=advance_docs["pdf"].get("formula_ocr", False))
with gr.Tab(label=i18n("参数")):
gr.Markdown(i18n("# ⚠️ 务必谨慎更改 ⚠️"),
elem_id="advanced-warning")
with gr.Accordion(i18n("参数"), open=True):
temperature_slider = gr.Slider(
minimum=-0,
maximum=2.0,
value=1.0,
step=0.1,
interactive=True,
label="temperature",
)
top_p_slider = gr.Slider(
minimum=-0,
maximum=1.0,
value=1.0,
step=0.05,
interactive=True,
label="top-p",
)
n_choices_slider = gr.Slider(
minimum=1,
maximum=10,
value=1,
step=1,
interactive=True,
label="n choices",
)
stop_sequence_txt = gr.Textbox(
show_label=True,
placeholder=i18n("停止符,用英文逗号隔开..."),
label="stop",
value="",
lines=1,
)
max_context_length_slider = gr.Slider(
minimum=1,
maximum=1048576,
value=2000,
step=1,
interactive=True,
label="max context",
)
max_generation_slider = gr.Slider(
minimum=1,
maximum=128000,
value=1000,
step=1,
interactive=True,
label="max generations",
)
presence_penalty_slider = gr.Slider(
minimum=-2.0,
maximum=2.0,
value=0.0,
step=0.01,
interactive=True,
label="presence penalty",
)
frequency_penalty_slider = gr.Slider(
minimum=-2.0,
maximum=2.0,
value=0.0,
step=0.01,
interactive=True,
label="frequency penalty",
)
logit_bias_txt = gr.Textbox(
show_label=True,
placeholder="word:likelihood",
label="logit bias",
value="",
lines=1,
)
user_identifier_txt = gr.Textbox(
show_label=True,
placeholder=i18n("用于定位滥用行为"),
label=i18n("用户标识符"),
value=user_name.value,
lines=1,
)
with gr.Tab(label=i18n("拓展")):
gr.Markdown(
"Will be here soon...\n(We hope)\n\nAnd we hope you can help us build more extensions!")
# changeAPIURLBtn = gr.Button(i18n("🔄 切换API地址"))
with gr.Row(elem_id="popup-wrapper"):
with gr.Group(elem_id="chuanhu-popup"):
with gr.Group(elem_id="chuanhu-setting"):
with gr.Row():
gr.Markdown("## "+i18n("设置"))
gr.HTML(get_html("close_btn.html").format(
obj="box"), elem_classes="close-btn")
with gr.Tabs(elem_id="chuanhu-setting-tabs"):
# with gr.Tab(label=i18n("模型")):
# model_select_dropdown = gr.Dropdown(
# label=i18n("选择模型"), choices=MODELS, multiselect=False, value=MODELS[DEFAULT_MODEL], interactive=True
# )
# lora_select_dropdown = gr.Dropdown(
# label=i18n("选择LoRA模型"), choices=[], multiselect=False, interactive=True, visible=False
# )
# with gr.Row():
with gr.Tab(label=i18n("高级")):
gr.HTML(get_html("appearance_switcher.html").format(
label=i18n("切换亮暗色主题")), elem_classes="insert-block", visible=False)
use_streaming_checkbox = gr.Checkbox(
label=i18n("实时传输回答"), value=True, visible=ENABLE_STREAMING_OPTION, elem_classes="switch-checkbox no-container"
)
language_select_dropdown = gr.Dropdown(
label=i18n("选择回复语言(针对搜索&索引功能)"),
choices=REPLY_LANGUAGES,
multiselect=False,
value=REPLY_LANGUAGES[0],
elem_classes="no-container",
)
name_chat_method = gr.Dropdown(
label=i18n("对话命名方式"),
choices=HISTORY_NAME_METHODS,
multiselect=False,
interactive=True,
value=HISTORY_NAME_METHODS[chat_name_method_index],
elem_classes="no-container",
)
single_turn_checkbox = gr.Checkbox(label=i18n(
"单轮对话"), value=False, elem_classes="switch-checkbox", elem_id="gr-single-session-cb", visible=False)
# checkUpdateBtn = gr.Button(i18n("🔄 检查更新..."), visible=check_update)
logout_btn = gr.Button("Logout", link="/logout")
with gr.Tab(i18n("网络")):
gr.Markdown(
i18n("⚠️ 为保证API-Key安全,请在配置文件`config.json`中修改网络设置"), elem_id="netsetting-warning")
default_btn = gr.Button(i18n("🔙 恢复默认网络设置"))
# Network proxy settings
proxyTxt = gr.Textbox(
show_label=True,
placeholder=i18n("未设置代理..."),
label=i18n("代理地址"),
value=config.http_proxy,
lines=1,
interactive=False,
# container=False,
elem_classes="view-only-textbox no-container",
)
# changeProxyBtn = gr.Button(i18n("🔄 设置代理地址"))
# Show the user-configured api_host first, if set
apihostTxt = gr.Textbox(
show_label=True,
placeholder="api.openai.com",
label="OpenAI API-Host",
value=config.api_host or shared.API_HOST,
lines=1,
interactive=False,
# container=False,
elem_classes="view-only-textbox no-container",
)
with gr.Tab(label=i18n("关于"), elem_id="about-tab"):
gr.Markdown(
'<img alt="Chuanhu Chat logo" src="file=web_assets/icon/any-icon-512.png" style="max-width: 144px;">')
gr.Markdown("# "+i18n("川虎Chat"))
gr.HTML(get_html("footer.html").format(
versions=versions_html()), elem_id="footer")
gr.Markdown(CHUANHU_DESCRIPTION, elem_id="description")
with gr.Group(elem_id="chuanhu-training"):
with gr.Row():
gr.Markdown("## "+i18n("训练"))
gr.HTML(get_html("close_btn.html").format(
obj="box"), elem_classes="close-btn")
with gr.Tabs(elem_id="chuanhu-training-tabs"):
with gr.Tab(label="OpenAI "+i18n("微调")):
openai_train_status = gr.Markdown(label=i18n("训练状态"), value=i18n(
"查看[使用介绍](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35)"))
with gr.Tab(label=i18n("准备数据集")):
dataset_previewjson = gr.JSON(
label=i18n("数据集预览"))
dataset_selection = gr.Files(label=i18n("选择数据集"), file_types=[
".xlsx", ".jsonl"], file_count="single")
upload_to_openai_btn = gr.Button(
i18n("上传到OpenAI"), variant="primary", interactive=False)
with gr.Tab(label=i18n("训练")):
openai_ft_file_id = gr.Textbox(label=i18n(
"文件ID"), value="", lines=1, placeholder=i18n("上传到 OpenAI 后自动填充"))
openai_ft_suffix = gr.Textbox(label=i18n(
"模型名称后缀"), value="", lines=1, placeholder=i18n("可选,用于区分不同的模型"))
openai_train_epoch_slider = gr.Slider(label=i18n(
"训练轮数(Epochs)"), minimum=1, maximum=100, value=3, step=1, interactive=True)
openai_start_train_btn = gr.Button(
i18n("开始训练"), variant="primary", interactive=False)
with gr.Tab(label=i18n("状态")):
openai_status_refresh_btn = gr.Button(i18n("刷新状态"))
openai_cancel_all_jobs_btn = gr.Button(
i18n("取消所有任务"))
add_to_models_btn = gr.Button(
i18n("添加训练好的模型到模型列表"), interactive=False)
with gr.Group(elem_id="web-config", visible=False):
gr.HTML(get_html('web_config.html').format(
enableCheckUpdate_config=check_update,
hideHistoryWhenNotLoggedIn_config=hide_history_when_not_logged_in,
forView_i18n=i18n("仅供查看"),
deleteConfirm_i18n_pref=i18n("你真的要删除 "),
deleteConfirm_i18n_suff=i18n(" 吗?"),
usingLatest_i18n=i18n("您使用的就是最新版!"),
updatingMsg_i18n=i18n("正在尝试更新..."),
updateSuccess_i18n=i18n("更新成功,请重启本程序"),
updateFailure_i18n=i18n(
"更新失败,请尝试[手动更新](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)"),
regenerate_i18n=i18n("重新生成"),
deleteRound_i18n=i18n("删除这轮问答"),
renameChat_i18n=i18n("重命名该对话"),
validFileName_i18n=i18n("请输入有效的文件名,不要包含以下特殊字符:"),
clearFileHistoryMsg_i18n=i18n("⚠️请先删除知识库中的历史文件,再尝试上传!"),
dropUploadMsg_i18n=i18n("释放文件以上传"),
))
with gr.Group(elem_id="fake-gradio-components", visible=False):
updateChuanhuBtn = gr.Button(
visible=False, elem_classes="invisible-btn", elem_id="update-chuanhu-btn")
rebootChuanhuBtn = gr.Button(
visible=False, elem_classes="invisible-btn", elem_id="reboot-chuanhu-btn")
changeSingleSessionBtn = gr.Button(
visible=False, elem_classes="invisible-btn", elem_id="change-single-session-btn")
changeOnlineSearchBtn = gr.Button(
visible=False, elem_classes="invisible-btn", elem_id="change-online-search-btn")
historySelectBtn = gr.Button(
visible=False, elem_classes="invisible-btn", elem_id="history-select-btn") # Not used
# https://github.com/gradio-app/gradio/pull/3296
def create_greeting(request: gr.Request):
if hasattr(request, "username") and request.username:  # username is neither None nor ""
logging.info(f"Get User Name: {request.username}")
user_info, user_name = gr.Markdown(
value=f"User: {request.username}"), request.username
else:
user_info, user_name = gr.Markdown(
value="", visible=False), ""
current_model = get_model(
model_name=MODELS[DEFAULT_MODEL], access_key=my_api_key, user_name=user_name)[0]
if not hide_history_when_not_logged_in or user_name:
loaded_stuff = current_model.auto_load()
else:
current_model.new_auto_history_filename()
loaded_stuff = [gr.update(), gr.update(), gr.Chatbot(label=MODELS[DEFAULT_MODEL]), current_model.single_turn, current_model.temperature, current_model.top_p, current_model.n_choices, current_model.stop_sequence, current_model.token_upper_limit, current_model.max_generation_token, current_model.presence_penalty, current_model.frequency_penalty, current_model.logit_bias, current_model.user_identifier, current_model.stream, gr.DownloadButton(), gr.DownloadButton()]
return user_info, user_name, current_model, toggle_like_btn_visibility(DEFAULT_MODEL), *loaded_stuff, init_history_list(user_name, prepend=current_model.history_file_path.rstrip(".json"))
demo.load(create_greeting, inputs=None, outputs=[
user_info, user_name, current_model, like_dislike_area, saveFileName, systemPromptTxt, chatbot, single_turn_checkbox, temperature_slider, top_p_slider, n_choices_slider, stop_sequence_txt, max_context_length_slider, max_generation_slider, presence_penalty_slider, frequency_penalty_slider, logit_bias_txt, user_identifier_txt, use_streaming_checkbox, downloadHistoryJSONBtn, downloadHistoryMarkdownBtn, historySelectList], api_name="load")
chatgpt_predict_args = dict(
fn=predict,
inputs=[
current_model,
user_question,
chatbot,
use_websearch_checkbox,
index_files,
language_select_dropdown,
],
outputs=[chatbot, status_display],
show_progress=True,
concurrency_limit=CONCURRENT_COUNT
)
start_outputing_args = dict(
fn=start_outputing,
inputs=[],
outputs=[submitBtn, cancelBtn],
show_progress=True,
)
end_outputing_args = dict(
fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn]
)
reset_textbox_args = dict(
fn=reset_textbox, inputs=[], outputs=[user_input]
)
transfer_input_args = dict(
fn=transfer_input, inputs=[user_input], outputs=[
user_question, user_input, submitBtn, cancelBtn], show_progress=True
)
get_usage_args = dict(
fn=billing_info, inputs=[current_model], outputs=[
usageTxt], show_progress=False
)
load_history_from_file_args = dict(
fn=load_chat_history,
inputs=[current_model, historySelectList],
outputs=[saveFileName, systemPromptTxt, chatbot, single_turn_checkbox, temperature_slider, top_p_slider, n_choices_slider, stop_sequence_txt, max_context_length_slider, max_generation_slider, presence_penalty_slider, frequency_penalty_slider, logit_bias_txt, user_identifier_txt, use_streaming_checkbox, downloadHistoryJSONBtn, downloadHistoryMarkdownBtn],
)
refresh_history_args = dict(
fn=get_history_list, inputs=[user_name], outputs=[historySelectList]
)
auto_name_chat_history_args = dict(
fn=auto_name_chat_history,
inputs=[current_model, name_chat_method, user_question, single_turn_checkbox],
outputs=[historySelectList],
show_progress=False,
)
# Chatbot
cancelBtn.click(interrupt, [current_model], [])
user_input.submit(**transfer_input_args).then(**
chatgpt_predict_args).then(**end_outputing_args).then(**auto_name_chat_history_args)
user_input.submit(**get_usage_args)
# user_input.submit(auto_name_chat_history, [current_model, user_question, chatbot, user_name], [historySelectList], show_progress=False)
submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args,
api_name="predict").then(**end_outputing_args).then(**auto_name_chat_history_args)
submitBtn.click(**get_usage_args)
# submitBtn.click(auto_name_chat_history, [current_model, user_question, chatbot, user_name], [historySelectList], show_progress=False)
index_files.upload(handle_file_upload, [current_model, index_files, chatbot, language_select_dropdown], [
index_files, chatbot, status_display])
summarize_btn.click(handle_summarize_index, [
current_model, index_files, chatbot, language_select_dropdown], [chatbot, status_display])
emptyBtn.click(
reset,
inputs=[current_model, retain_system_prompt_checkbox],
outputs=[chatbot, status_display, historySelectList, systemPromptTxt, single_turn_checkbox, temperature_slider, top_p_slider, n_choices_slider, stop_sequence_txt, max_context_length_slider, max_generation_slider, presence_penalty_slider, frequency_penalty_slider, logit_bias_txt, user_identifier_txt, use_streaming_checkbox],
show_progress=True,
js='(a,b)=>{return clearChatbot(a,b);}',
)
retryBtn.click(**start_outputing_args).then(
retry,
[
current_model,
chatbot,
use_websearch_checkbox,
index_files,
language_select_dropdown,
],
[chatbot, status_display],
show_progress=True,
).then(**end_outputing_args)
retryBtn.click(**get_usage_args)
delFirstBtn.click(
delete_first_conversation,
[current_model],
[status_display],
)
delLastBtn.click(
delete_last_conversation,
[current_model, chatbot],
[chatbot, status_display],
show_progress=False
)
likeBtn.click(
like,
[current_model],
[status_display],
show_progress=False
)
dislikeBtn.click(
dislike,
[current_model],
[status_display],
show_progress=False
)
two_column.change(update_doc_config, [two_column], None)
# LLM Models
keyTxt.change(set_key, [current_model, keyTxt], [
user_api_key, status_display], api_name="set_key").then(**get_usage_args)
keyTxt.submit(**get_usage_args)
single_turn_checkbox.change(
set_single_turn, [current_model, single_turn_checkbox], None, show_progress=False)
use_streaming_checkbox.change(set_streaming, [current_model, use_streaming_checkbox], None, show_progress=False)
model_select_dropdown.change(get_model, [model_select_dropdown, lora_select_dropdown, user_api_key, temperature_slider, top_p_slider, systemPromptTxt, user_name, current_model], [
current_model, status_display, chatbot, lora_select_dropdown, user_api_key, keyTxt, modelDescription, use_streaming_checkbox], show_progress=True, api_name="get_model")
model_select_dropdown.change(toggle_like_btn_visibility, [model_select_dropdown], [
like_dislike_area], show_progress=False)
# model_select_dropdown.change(
# toggle_file_type, [model_select_dropdown], [index_files], show_progress=False)
lora_select_dropdown.change(get_model, [model_select_dropdown, lora_select_dropdown, user_api_key, temperature_slider,
top_p_slider, systemPromptTxt, user_name, current_model], [current_model, status_display, chatbot, modelDescription], show_progress=True)
# Template
systemPromptTxt.change(set_system_prompt, [
current_model, systemPromptTxt], None)
templateRefreshBtn.click(get_template_dropdown, None, [
templateFileSelectDropdown])
templateFileSelectDropdown.input(
load_template,
[templateFileSelectDropdown],
[promptTemplates, templateSelectDropdown],
show_progress=True,
)
templateSelectDropdown.change(
get_template_content,
[promptTemplates, templateSelectDropdown, systemPromptTxt],
[systemPromptTxt],
show_progress=True,
)
# S&L
renameHistoryBtn.click(
rename_chat_history,
[current_model, saveFileName],
[historySelectList],
show_progress=True,
js='(a,b,c,d)=>{return saveChatHistory(a,b,c,d);}'
)
historyRefreshBtn.click(**refresh_history_args)
historyDeleteBtn.click(delete_chat_history, [current_model, historySelectList], [status_display, historySelectList, chatbot], js='(a,b,c)=>{return showConfirmationDialog(a, b, c);}').then(
reset,
inputs=[current_model, retain_system_prompt_checkbox],
outputs=[chatbot, status_display, historySelectList, systemPromptTxt],
show_progress=True,
js='(a,b)=>{return clearChatbot(a,b);}',
)
historySelectList.select(**load_history_from_file_args)
uploadHistoryBtn.upload(upload_chat_history, [current_model, uploadHistoryBtn], [
saveFileName, systemPromptTxt, chatbot, single_turn_checkbox, temperature_slider, top_p_slider, n_choices_slider, stop_sequence_txt, max_context_length_slider, max_generation_slider, presence_penalty_slider, frequency_penalty_slider, logit_bias_txt, user_identifier_txt, use_streaming_checkbox, downloadHistoryJSONBtn, downloadHistoryMarkdownBtn, historySelectList]).then(**refresh_history_args)
historySearchTextbox.input(
filter_history,
[user_name, historySearchTextbox],
[historySelectList]
)
# Train
dataset_selection.upload(handle_dataset_selection, dataset_selection, [
dataset_previewjson, upload_to_openai_btn, openai_train_status])
dataset_selection.clear(handle_dataset_clear, [], [
dataset_previewjson, upload_to_openai_btn])
upload_to_openai_btn.click(upload_to_openai, [dataset_selection], [
openai_ft_file_id, openai_train_status], show_progress=True)
openai_ft_file_id.change(lambda x: gr.update(interactive=len(x) > 0),
[openai_ft_file_id], [openai_start_train_btn])
openai_start_train_btn.click(start_training, [
openai_ft_file_id, openai_ft_suffix, openai_train_epoch_slider], [openai_train_status])
openai_status_refresh_btn.click(get_training_status, [], [
openai_train_status, add_to_models_btn])
add_to_models_btn.click(add_to_models, [], [
model_select_dropdown, openai_train_status], show_progress=True)
openai_cancel_all_jobs_btn.click(
cancel_all_jobs, [], [openai_train_status], show_progress=True)
# Advanced
temperature_slider.input(
set_temperature, [current_model, temperature_slider], None, show_progress=False)
top_p_slider.input(set_top_p, [current_model, top_p_slider], None, show_progress=False)
n_choices_slider.input(
set_n_choices, [current_model, n_choices_slider], None, show_progress=False)
stop_sequence_txt.input(
set_stop_sequence, [current_model, stop_sequence_txt], None, show_progress=False)
max_context_length_slider.input(
set_token_upper_limit, [current_model, max_context_length_slider], None, show_progress=False)
max_generation_slider.input(
set_max_tokens, [current_model, max_generation_slider], None, show_progress=False)
presence_penalty_slider.input(
set_presence_penalty, [current_model, presence_penalty_slider], None, show_progress=False)
frequency_penalty_slider.input(
set_frequency_penalty, [current_model, frequency_penalty_slider], None, show_progress=False)
logit_bias_txt.input(
set_logit_bias, [current_model, logit_bias_txt], None, show_progress=False)
user_identifier_txt.input(set_user_identifier, [
current_model, user_identifier_txt], None, show_progress=False)
default_btn.click(
reset_default, [], [apihostTxt, proxyTxt, status_display], show_progress=True
)
# changeAPIURLBtn.click(
# change_api_host,
# [apihostTxt],
# [status_display],
# show_progress=True,
# )
# changeProxyBtn.click(
# change_proxy,
# [proxyTxt],
# [status_display],
# show_progress=True,
# )
# checkUpdateBtn.click(fn=None, js='manualCheckUpdate')
# Invisible elements
updateChuanhuBtn.click(
update_chuanhu,
[user_name],
[status_display],
show_progress=True,
)
rebootChuanhuBtn.click(
reboot_chuanhu,
[],
[],
show_progress=True,
js='rebootingChuanhu'
)
changeSingleSessionBtn.click(
fn=lambda value: gr.Checkbox(value=value),
inputs=[single_turn_checkbox],
outputs=[single_turn_checkbox],
js='(a)=>{return bgChangeSingleSession(a);}'
)
changeOnlineSearchBtn.click(
fn=lambda value: gr.Checkbox(value=value),
inputs=[use_websearch_checkbox],
outputs=[use_websearch_checkbox],
js='(a)=>{return bgChangeOnlineSearch(a);}'
)
historySelectBtn.click( # This is an experimental feature... Not actually used.
fn=load_chat_history,
inputs=[current_model, historySelectList],
outputs=[saveFileName, systemPromptTxt, chatbot, single_turn_checkbox, temperature_slider, top_p_slider, n_choices_slider, stop_sequence_txt, max_context_length_slider, max_generation_slider, presence_penalty_slider, frequency_penalty_slider, logit_bias_txt, user_identifier_txt, use_streaming_checkbox, downloadHistoryJSONBtn, downloadHistoryMarkdownBtn],
js='(a,b)=>{return bgSelectHistory(a,b);}'
)
# Defaults: start a local server reachable directly via IP, without creating a public share link
demo.title = i18n("川虎Chat 🚀")
if __name__ == "__main__":
reload_javascript()
setup_wizard()
demo.queue().launch(
allowed_paths=["web_assets"],
blocked_paths=["config.json", "files", "models", "lora", "modules", "history"],
server_name=server_name,
server_port=server_port,
share=share,
auth=auth_from_conf if authflag else None,
favicon_path="./web_assets/favicon.ico",
inbrowser=autobrowser and not dockerflag,  # never auto-open a browser when running in Docker
)
================================================
FILE: Dockerfile
================================================
FROM python:3.10-slim-buster AS builder
# Install build essentials, Rust, and additional dependencies
RUN apt-get update \
&& apt-get install -y build-essential curl cmake pkg-config libssl-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Add Cargo to PATH
ENV PATH="/root/.cargo/bin:${PATH}"
# Upgrade pip
RUN pip install --upgrade pip
COPY requirements.txt .
COPY requirements_advanced.txt .
# Install Python packages
RUN pip install --user --no-cache-dir -r requirements.txt
# Uncomment the following line if you want to install advanced requirements
# RUN pip install --user --no-cache-dir -r requirements_advanced.txt
FROM python:3.10-slim-buster
LABEL maintainer="iskoldt"
# Copy Rust and Cargo from builder
COPY --from=builder /root/.cargo /root/.cargo
COPY --from=builder /root/.rustup /root/.rustup
# Copy Python packages from builder
COPY --from=builder /root/.local /root/.local
# Set up environment
ENV PATH=/root/.local/bin:/root/.cargo/bin:$PATH
ENV RUSTUP_HOME=/root/.rustup
ENV CARGO_HOME=/root/.cargo
COPY . /app
WORKDIR /app
ENV dockerrun=yes
EXPOSE 7860
# Shell form is required here: in exec form, "2>&1", "|" and "tee" would be passed
# to the Python process as literal arguments instead of being interpreted by a shell.
CMD python3 -u ChuanhuChatbot.py 2>&1 | tee /var/log/application.log
================================================
FILE: LICENSE
================================================
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
================================================
FILE: README.md
================================================
<div align="right">
<!-- Languages: -->
简体中文 | <a title="English" href="./readme/README_en.md">English</a> | <a title="Japanese" href="./readme/README_ja.md">日本語</a> | <a title="Russian" href="./readme/README_ru.md">Russian</a> | <a title="Korean" href="./readme/README_ko.md">한국어</a>
</div>
<h1 align="center">川虎 Chat 🐯 Chuanhu Chat</h1>
<div align="center">
<a href="https://github.com/GaiZhenBiao/ChuanhuChatGPT">
<img src="https://github.com/GaiZhenbiao/ChuanhuChatGPT/assets/70903329/aca3a7ec-4f1d-4667-890c-a6f47bf08f63" alt="Logo" height="156">
</a>
<p align="center">
<h3>A lightweight and user-friendly web UI with plenty of extra features for ChatGPT and many other LLMs</h3>
<p align="center">
<a href="https://github.com/GaiZhenbiao/ChuanhuChatGPT/blob/main/LICENSE">
<img alt="Tests Passing" src="https://img.shields.io/github/license/GaiZhenbiao/ChuanhuChatGPT" />
</a>
<a href="https://gradio.app/">
<img alt="GitHub Contributors" src="https://img.shields.io/badge/Base-Gradio-fb7d1a?style=flat" />
</a>
<a href="https://t.me/tkdifferent">
<img alt="GitHub pull requests" src="https://img.shields.io/badge/Telegram-Group-blue.svg?logo=telegram" />
</a>
<p>
Supports DeepSeek R1 & GPT-4 · File-based Q&A · Local LLM deployment · Web search · Agent assistant · Fine-tuning
</p>
<a href="https://www.bilibili.com/video/BV1mo4y1r7eE"><strong>Video Tutorial</strong></a>
·
<a href="https://www.bilibili.com/video/BV1184y1w7aP"><strong>2.0 Intro Video</strong></a>
||
<a href="https://huggingface.co/spaces/JohnSmith9982/ChuanhuChatGPT"><strong>Online Demo</strong></a>
·
<a href="https://huggingface.co/login?next=%2Fspaces%2FJohnSmith9982%2FChuanhuChatGPT%3Fduplicate%3Dtrue"><strong>One-Click Deploy</strong></a>
</p>
</p>
</div>
> 📢 New: GPT-5 is now supported (GPT-5, GPT-5-mini, and GPT-5-nano; 400k context window, up to 128k output tokens).
[](https://github.com/GaiZhenbiao/ChuanhuChatGPT/assets/51039745/0eee1598-c2fd-41c6-bda9-7b059a3ce6e7?autoplay=1)
## Table of Contents
| [Supported Models](#supported-models) | [Tips](#tips) | [Installation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程) | [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) | [Buy the Author a Coke 🥤](#donations) | [Join the Telegram Group](https://t.me/tkdifferent) |
| --- | --- | --- | --- | --- | --- |
## ✨ Major Update 5.0!

<sup>New!</sup> A brand-new user interface! So polished it hardly looks like Gradio, with frosted-glass effects and all!
<sup>New!</sup> Adapted for mobile (including notch/punch-hole full-screen phones), with a much clearer visual hierarchy.
<sup>New!</sup> Chat history has moved to the left side for easier access, and now supports searching (with regex), deleting, and renaming.
<sup>New!</sup> The LLM can now name chat histories automatically (enable it in the settings or the config file).
<sup>New!</sup> Chuanhu Chat can now be installed as a PWA for a more native experience! Works in Chrome/Edge/Safari and other browsers.
<sup>New!</sup> Icons adapted to every platform, easier on the eyes.
<sup>New!</sup> Supports fine-tuning GPT 3.5!
## Supported Models
| API-based Models | Notes | Locally Deployed Models | Notes |
| :---: | --- | :---: | --- |
| [ChatGPT (GPT-5, GPT-4, GPT-4o, o1)](https://chat.openai.com) | Supports fine-tuning gpt-3.5 | [ChatGLM](https://github.com/THUDM/ChatGLM-6B) ([ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)) ([ChatGLM3](https://huggingface.co/THUDM/chatglm3-6b)) ||
| [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) | | [LLaMA](https://github.com/facebookresearch/llama) | Supports LoRA models |
| [Google Gemini Pro](https://ai.google.dev/gemini-api/docs/api-key?hl=zh-cn) | | [StableLM](https://github.com/Stability-AI/StableLM) ||
| [iFLYTEK Spark](https://xinghuo.xfyun.cn) | | [MOSS](https://github.com/OpenLMLab/MOSS) ||
| [Inspur Yuan 1.0](https://air.inspur.com/home) | | [Qwen](https://github.com/QwenLM/Qwen/tree/main) ||
| [MiniMax](https://api.minimax.chat/) || [DeepSeek](https://platform.deepseek.com) ||
| [XMChat](https://github.com/MILVLG/xmchat) | No streaming support |||
| [Midjourney](https://www.midjourney.com/) | No streaming support |||
| [Claude](https://www.anthropic.com/) | ✨ Claude 3 Opus and Sonnet are now supported; Haiku will be supported as soon as it is released |||
| DALL·E 3 ||||
## Tips
### 💪 Power Features
- **Chuanhu Assistant**: Similar to AutoGPT, solves your problems fully automatically;
- **Web Search**: Is ChatGPT's training data too old? Give your LLM wings with internet access;
- **Knowledge Base**: Let ChatGPT speed-read for you! Answers questions based on your files.
- **Local LLM Deployment**: One-click deployment to get your own large language model.
- **GPT 3.5 Fine-tuning**: Supports fine-tuning GPT 3.5 to make ChatGPT more personalized.
- **[Custom Models](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E8%87%AA%E5%AE%9A%E4%B9%89%E6%A8%A1%E5%9E%8B-Custom-Models)**: Flexibly define custom models, for example to connect to a local inference service.
### 🤖 System Prompt
- Setting preconditions via the System Prompt works very well for role-playing;
- Chuanhu Chat ships with prompt templates: click `Load Prompt Template`, pick a template collection, then choose the prompt you want below.
### 💬 Basic Conversation
- If an answer is unsatisfactory, use the `Regenerate` button to try again, or simply `Delete this round`;
- The input box supports line breaks: press <kbd>Shift</kbd> + <kbd>Enter</kbd>;
- Press the <kbd>↑</kbd> <kbd>↓</kbd> arrow keys in the input box to quickly cycle through your send history;
- Creating a new conversation every time is tedious, so try the `Single-turn Dialogue` feature;
- The small buttons next to an answer bubble not only let you `Copy` it in one click, but also `View the Markdown source`;
- Specify an answer language so ChatGPT always replies in that language.
### 📜 Chat History
- Conversation history is saved automatically, so you will never lose an answer after asking;
- Per-user history isolation: nobody but you can see it;
- Rename histories so they are easy to find later;
- <sup>New!</sup> Magically auto-name histories: the LLM reads the conversation and names the history for you!
- <sup>New!</sup> Search histories, with regular expression support!
### 🖼️ A Small-and-Beautiful Experience
- Our home-grown Small-and-Beautiful theme gives you a compact, polished experience;
- Automatic light/dark switching keeps you comfortable from morning to night;
- Flawless rendering of LaTeX / tables / code blocks, with syntax highlighting;
- <sup>New!</sup> Non-linear animations and frosted-glass effects, so polished it hardly looks like Gradio!
- <sup>New!</sup> Adapted for Windows / macOS / Linux / iOS / Android, from icons to full-screen layouts, for the best fit on every device!
- <sup>New!</sup> Install it as a PWA for a more native experience!
### 👨‍💻 Geek Features
- <sup>New!</sup> Supports fine-tuning gpt-3.5!
- Plenty of tunable LLM parameters;
- Supports changing the api-host;
- Supports custom proxies;
- Supports load balancing across multiple API keys.
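Multi-key load balancing and a custom api-host are both configured in `config.json`. A minimal sketch, using keys that appear in `config_example.json` (the key values and URL are placeholders):

```json
{
    "multi_api_key": true,
    "api_key_list": [
        "sk-xxxxxxxxxxxxxxxxxxxxxxxx1",
        "sk-xxxxxxxxxxxxxxxxxxxxxxxx2"
    ],
    "openai_api_base": "https://api.openai.com"
}
```

With `multi_api_key` enabled, requests rotate through the keys in `api_key_list` instead of using a single `openai_api_key`.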
### ⚒️ Deployment
- Deploy to a server: set `"server_name": "0.0.0.0", "server_port": <your port>,` in `config.json`.
- Get a public share link: set `"share": true,` in `config.json`. Note that the program must be running for the public link to stay accessible.
- Use on Hugging Face: it is recommended to **Duplicate the Space** (top-right corner) before use, so the app is likely to respond faster.
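Put together, a server deployment might look like the following fragment of `config.json` (the port number 7860, Gradio's default, is illustrative; set `"share": true` instead if you want a public link):

```json
{
    "server_name": "0.0.0.0",
    "server_port": 7860,
    "share": false
}
```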
## Quick Start
Run the following commands in a terminal:
```shell
git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git
cd ChuanhuChatGPT
pip install -r requirements.txt
```
Then make a copy of `config_example.json` in the project folder, rename it to `config.json`, and fill in your `API-Key` and other settings.
```shell
python ChuanhuChatbot.py
```
A browser window will open automatically, and you can start chatting with ChatGPT or other models in **Chuanhu Chat**.
> **Note**
>
> For detailed installation and usage instructions, see [this project's wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程).
## Troubleshooting
Before digging into an issue, first try **manually pulling the latest changes of this project<sup>1</sup>** and **updating the dependencies<sup>2</sup>**, then retry. The steps:
1. Click the `Download ZIP` button on the web page to download the latest code and extract it over your copy, or
```shell
git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f
```
2. Reinstall the dependencies (this project may have introduced new ones)
```
pip install -r requirements.txt
```
This resolves the problem in many cases.
If the issue persists, consult this page: [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题)
It lists **almost every** problem you may run into, including how to configure a proxy and what to do when something goes wrong, so **please read it carefully**.
## Learn More
For more information, check out our [wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki):
- [Want to contribute?](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南)
- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志)
- [License for derivative work](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可)
- [How to cite this project?](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目)
## Starchart
[](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date)
## Contributors
<a href="https://github.com/GaiZhenbiao/ChuanhuChatGPT/graphs/contributors">
<img src="https://contrib.rocks/image?repo=GaiZhenbiao/ChuanhuChatGPT" />
</a>
## Donations
🐯 If you find this software helpful, feel free to buy the author a Coke or a coffee~
Contact the author: send me a private message on [my bilibili account](https://space.bilibili.com/29125536).
<a href="https://www.buymeacoffee.com/ChuanhuChat" ><img src="https://img.buymeacoffee.com/button-api/?text=Buy me a coffee&emoji=&slug=ChuanhuChat&button_colour=219d53&font_colour=ffffff&font_family=Poppins&outline_colour=ffffff&coffee_colour=FFDD00" alt="Buy Me A Coffee" width="250"></a>
<img width="250" alt="image" src="https://user-images.githubusercontent.com/51039745/226920291-e8ec0b0a-400f-4c20-ac13-dafac0c3aeeb.JPG">
================================================
FILE: config_example.json
================================================
{
// 各配置具体说明,见 [https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#配置-configjson]
//== API 配置 ==
"openai_api_key": "", // 你的 OpenAI API Key,一般必填,若空缺则需在图形界面中填入API Key
"deepseek_api_key": "", // 你的 DeepSeek API Key,用于 DeepSeek Chat 和 Reasoner(R1) 对话模型
"google_genai_api_key": "", // 你的 Google Gemini API Key ,用于 Google Gemini 对话模型
"google_genai_api_host": "generativelanguage.googleapis.com", // 你的 Google Gemini API Host 地址,一般无需更改
"xmchat_api_key": "", // 你的 xmchat API Key,用于 XMChat 对话模型
"minimax_api_key": "", // 你的 MiniMax API Key,用于 MiniMax 对话模型
"minimax_group_id": "", // 你的 MiniMax Group ID,用于 MiniMax 对话模型
"midjourney_proxy_api_base": "https://xxx/mj", // 你的 https://github.com/novicezk/midjourney-proxy 代理地址
"midjourney_proxy_api_secret": "", // 你的 MidJourney Proxy API Secret,用于鉴权访问 api,可选
"midjourney_discord_proxy_url": "", // 你的 MidJourney Discord Proxy URL,用于对生成对图进行反代,可选
"midjourney_temp_folder": "./tmp", // 你的 MidJourney 临时文件夹,用于存放生成的图片,填空则关闭自动下载切图(直接显示MJ的四宫格图)
"spark_appid": "", // 你的 讯飞星火大模型 API AppID,用于讯飞星火大模型对话模型
"spark_api_key": "", // 你的 讯飞星火大模型 API Key,用于讯飞星火大模型对话模型
"spark_api_secret": "", // 你的 讯飞星火大模型 API Secret,用于讯飞星火大模型对话模型
"claude_api_secret":"",// 你的 Claude API Secret,用于 Claude 对话模型
"ernie_api_key": "",// 你的文心一言在百度云中的API Key,用于文心一言对话模型
"ernie_secret_key": "",// 你的文心一言在百度云中的Secret Key,用于文心一言对话模型
"ollama_host": "", // 你的 Ollama Host,用于 Ollama 对话模型
"huggingface_auth_token": "", // 你的 Hugging Face API Token,用于访问有限制的模型
"groq_api_key": "", // 你的 Groq API Key,用于 Groq 对话模型(https://console.groq.com/)
//== Azure ==
"openai_api_type": "openai", // 可选项:azure, openai
"azure_openai_api_key": "", // 你的 Azure OpenAI API Key,用于 Azure OpenAI 对话模型
"azure_openai_api_base_url": "", // 你的 Azure Base URL
"azure_openai_api_version": "2023-05-15", // 你的 Azure OpenAI API 版本
"azure_deployment_name": "", // 你的 Azure OpenAI Chat 模型 Deployment 名称
"azure_embedding_deployment_name": "", // 你的 Azure OpenAI Embedding 模型 Deployment 名称
"azure_embedding_model_name": "text-embedding-ada-002", // 你的 Azure OpenAI Embedding 模型名称
//== Basic settings ==
"language": "auto", // UI language; options: "auto", "zh_CN", "en_US", "ja_JP", "ko_KR", "sv_SE", "ru_RU", "vi_VN"
"users": [], // User list, [["username1", "password1"], ["username2", "password2"], ...]
"admin_list": [], // Admin list, ["username1", "username2", ...]; only admins can restart the service
"local_embedding": false, // Whether to build the index locally
"hide_history_when_not_logged_in": false, // Whether to hide chat history when not logged in
"check_update": true, // Whether to check for updates
"default_model": "GPT3.5 Turbo", // Default model
"chat_name_method_index": 2, // How chats are named. 0: by date and time; 1: by the first question; 2: auto-summarized by the model
"bot_avatar": "default", // Bot avatar; a local or web image link, or "none" (no avatar)
"user_avatar": "default", // User avatar; a local or web image link, or "none" (no avatar)
//== API usage ==
"show_api_billing": false, // Whether to show OpenAI API usage (requires sensitive_id)
"sensitive_id": "", // The Sensitive ID of your OpenAI account, used to query API usage
"usage_limit": 120, // Monthly limit of this OpenAI API Key in USD, used to compute the percentage and the display cap
"legacy_api_usage": false, // Whether to use the legacy API usage endpoint (OpenAI has shut it down, but third-party APIs may still support it)
//== Chuanhu Assistant settings ==
"GOOGLE_CSE_ID": "", // Google Custom Search Engine ID, for Chuanhu Assistant Pro mode; see https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search
"GOOGLE_API_KEY": "", // Google API Key, for Chuanhu Assistant Pro mode
"WOLFRAM_ALPHA_APPID": "", // Wolfram Alpha API Key, for Chuanhu Assistant Pro mode; see https://products.wolframalpha.com/api/
"SERPAPI_API_KEY": "", // SerpAPI API Key, for Chuanhu Assistant Pro mode; see https://serpapi.com/
//== Document processing and display ==
"latex_option": "default", // LaTeX rendering strategy; options: "default", "strict", "all", or "disabled"
"advance_docs": {
"pdf": {
"two_column": false, // Whether to treat PDFs as two-column
"formula_ocr": true // Whether to use OCR to recognize formulas in PDFs
}
},
//== Advanced settings ==
// Whether to rotate among multiple API Keys
"multi_api_key": false,
"hide_my_key": false, // Set this to true to hide the API key input box in the UI
// "available_models": ["GPT3.5 Turbo", "GPT4 Turbo", "GPT4 Vision"], // Available model list; overrides the default list of available models
// "extra_models": ["model name 3", "model name 4", ...], // Extra models, appended after the available model list
// "extra_model_metadata": {
// "GPT-3.5 Turbo Keldos": {
// "model_name": "gpt-3.5-turbo",
// "description": "GPT-3.5 Turbo is a large language model trained by OpenAI. It is the latest version of the GPT series of models, and is known for its ability to generate human-like text.",
// "model_type": "OpenAI",
// "multimodal": false,
// "api_host": "https://www.example.com",
// "token_limit": 4096,
// "max_generation": 4096,
// },
// }
// "api_key_list": [
// "sk-xxxxxxxxxxxxxxxxxxxxxxxx1",
// "sk-xxxxxxxxxxxxxxxxxxxxxxxx2",
// "sk-xxxxxxxxxxxxxxxxxxxxxxxx3"
// ],
// "rename_model": "GPT-4o-mini", //指定默认命名模型
// 自定义OpenAI API Base
// "openai_api_base": "https://api.openai.com",
// 自定义使用代理(请替换代理URL)
// "https_proxy": "http://127.0.0.1:1079",
// "http_proxy": "http://127.0.0.1:1079",
// 自定义端口、自定义ip(请替换对应内容)
// "server_name": "0.0.0.0",
// "server_port": 7860,
// 如果要share到gradio,设置为true
// "share": false,
//如果不想自动打开浏览器,设置为false
//"autobrowser": false
}
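The `//` comments above make config_example.json invalid strict JSON; the project parses it with the `commentjson` library. As an illustration only (not the repo's actual loader), a stdlib-only sketch has to strip `//` comments without touching `//` inside string values such as URLs:

```python
import json
import re

def load_jsonc(text: str):
    # Match string literals first so that "//" inside them (e.g. in
    # "https://xxx/mj") is preserved; a real "//" and everything after
    # it up to the end of the line is dropped.
    stripped = re.sub(
        r'("(?:\\.|[^"\\])*")|//[^\n]*',
        lambda m: m.group(1) or "",
        text,
    )
    return json.loads(stripped)

sample = '''{
"midjourney_proxy_api_base": "https://xxx/mj", // proxy address
"usage_limit": 120 // monthly limit in USD
}'''
config = load_jsonc(sample)
```

This sketch handles `//` line comments only, which is all config_example.json uses.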
================================================
FILE: configs/ds_config_chatbot.json
================================================
{
"fp16": {
"enabled": false
},
"bf16": {
"enabled": true
},
"comms_logger": {
"enabled": false,
"verbose": false,
"prof_all": false,
"debug": false
},
"steps_per_print": 20000000000000000,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": false
}
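As a quick sanity check (not part of the repository), note that DeepSpeed treats `fp16` and `bf16` as mutually exclusive; after loading the config you can assert that at most one of them is enabled:

```python
import json

DS_CONFIG = '''{
"fp16": {"enabled": false},
"bf16": {"enabled": true},
"train_micro_batch_size_per_gpu": 1
}'''

cfg = json.loads(DS_CONFIG)
# DeepSpeed rejects configs where both precisions are enabled at once.
assert not (cfg["fp16"]["enabled"] and cfg["bf16"]["enabled"])
```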
================================================
FILE: locale/en_US.json
================================================
{
"API Key 列表": "API Key List",
"Azure OpenAI Chat 模型 Deployment 名称": "Azure OpenAI Chat Model Deployment Name",
"Azure OpenAI Embedding 模型 Deployment 名称": "Azure OpenAI Embedding Model Deployment Name",
"Azure OpenAI Embedding 模型名称": "Azure OpenAI Embedding Model Name",
"HTTP 代理": "HTTP Proxy",
"LaTeX 公式渲染策略": "LaTeX formula rendering strategy",
"MidJourney Discord Proxy URL(用于对生成对图进行反代,可选)": "MidJourney Discord Proxy URL (used to reverse the generated image, optional)",
"MidJourney Proxy API Secret(用于鉴权访问 api,可选)": "MidJourney Proxy API Secret (used for authentication access api, optional)",
"SerpAPI API Key(获取方式请看 https://serpapi.com/)": "SerpAPI API Key (see https://serpapi.com/ for how to get it)",
"Wolfram Alpha API Key(获取方式请看 https://products.wolframalpha.com/api/)": "Wolfram Alpha API Key (see https://products.wolframalpha.com/api/ for how to get it)",
"你的 MidJourney 临时文件夹,用于存放生成的图片,填空则关闭自动下载切图(直接显示MJ的四宫格图)": "Your MidJourney temporary folder, used to store the generated images, leave blank to turn off the automatic download of the cut image (display the four-grid image of MJ directly)",
"可用模型列表": "Available model list",
"可选的本地模型为:": "The optional local models are:",
"如果不设置,将无法使用GPT模型和知识库在线索引功能。如果不设置此选项,您必须每次手动输入API Key。如果不设置,将自动启用本地编制索引的功能,可与本地模型配合使用。请问要设置默认 OpenAI API Key 吗?": "If not set, you will not be able to use the GPT model and the knowledge base online indexing function. If this option is not set, you must manually enter the API Key each time. If not set, the function of indexing locally will be automatically enabled, which can be used with local models. Do you want to set the default OpenAI API Key?",
"川虎助理使用的模型": "The model used by Chuanhu Assistant",
"是否不展示对话历史": "Do not show conversation history",
"是否启用检查更新": "Enable check for update",
"是否启用检查更新?如果设置,软件启动时会自动检查更新。": "Enable check for update? If set, the software will automatically check for updates when it starts.",
"是否在本地编制知识库索引?如果是,可以在使用本地模型时离线使用知识库,否则使用OpenAI服务来编制索引(需要OpenAI API Key)。请确保你的电脑有至少16GB内存。本地索引模型需要从互联网下载。": "Do you want to index the knowledge base locally? If so, you can use the knowledge base offline when using the local model, otherwise use the OpenAI service to index (requires OpenAI API Key). Make sure your computer has at least 16GB of memory. The local index model needs to be downloaded from the Internet.",
"是否指定可用模型列表?如果设置,将只会在 UI 中显示指定的模型。默认展示所有模型。可用的模型有:": "Specify the available model list? If set, only the specified models will be displayed in the UI. All models are displayed by default. The available models are:",
"是否更改默认模型?如果设置,软件启动时会自动加载该模型,无需在 UI 中手动选择。目前的默认模型为 gpt-3.5-turbo。可选的在线模型有:": "Change the default model? If set, the software will automatically load the model when it starts, and there is no need to manually select it in the UI. The current default model is gpt-3.5-turbo. The optional online models are:",
"是否添加模型到列表?例如,训练好的GPT模型可以添加到列表中。可以在UI中自动添加模型到列表。": "Add model to list? For example, the trained GPT model can be added to the list. You can automatically add models to the list in the UI.",
"是否设置 Azure OpenAI?如果设置,软件启动时会自动加载该API Key,无需在 UI 中手动输入。如果不设置,将无法使用 Azure OpenAI 模型。": "Set the default Azure OpenAI API Key? If set, the API Key will be automatically loaded when the software starts, and there is no need to manually enter it in the UI. If not set, the Azure OpenAI model will not be available.",
"是否设置 Midjourney ?如果设置,软件启动时会自动加载该API Key,无需在 UI 中手动输入。如果不设置,将无法使用 Midjourney 模型。": "Set the default Midjourney API Key? If set, the API Key will be automatically loaded when the software starts, and there is no need to manually enter it in the UI. If not set, the Midjourney model will not be available.",
"是否设置Cloude API?如果设置,软件启动时会自动加载该API Key,无需在 UI 中手动输入。如果不设置,将无法使用 Cloude 模型。": "Set the default Cloude API Key? If set, the API Key will be automatically loaded when the software starts, and there is no need to manually enter it in the UI. If not set, the Cloude model will not be available.",
"是否设置多 API Key 切换?如果设置,将在多个API Key之间切换使用。": "Set multiple API Key switching? If set, it will switch between multiple API Keys.",
"是否设置川虎助理?如果不设置,仍可设置川虎助理。如果设置,可以使用川虎助理Pro模式。": "Set Chuanhu Assistant? If not set, Chuanhu Assistant can still be set. If set, you can use Chuanhu Assistant Pro mode.",
"是否设置文心一言?如果设置,软件启动时会自动加载该API Key,无需在 UI 中手动输入。如果不设置,将无法使用 文心一言 模型。": "Set the default ERNIE Bot API Key? If set, the API Key will be automatically loaded when the software starts, and there is no need to manually enter it in the UI. If not set, the ERNIE Bot model will not be available.",
"是否设置文档处理与显示?可选的 LaTeX 公式渲染策略有:\"default\", \"strict\", \"all\"或者\"disabled\"。": "Set document processing and display? The optional LaTeX formula rendering strategies are: \"default\", \"strict\", \"all\" or \"disabled\".",
"是否设置未登录情况下是否不展示对话历史?如果设置,未登录情况下将不展示对话历史。": "Set whether to show conversation history when not logged in? If set, the conversation history will not be displayed when not logged in.",
"是否设置机器人头像和用户头像?可填写本地或网络图片链接,或者\"none\"(不显示头像)。": "Set the bot avatar and user avatar? You can fill in the local or network picture link, or \"none\" (do not display the avatar).",
"是否设置讯飞星火?如果设置,软件启动时会自动加载该API Key,无需在 UI 中手动输入。如果不设置,将无法使用 讯飞星火 模型。请注意不要搞混App ID和API Secret。": "Set the default Spark API Key? If set, the API Key will be automatically loaded when the software starts, and there is no need to manually enter it in the UI. If not set, the Spark model will not be available. Please be careful not to confuse App ID and API Secret.",
"是否设置默认 Google AI Studio API 密钥?如果设置,软件启动时会自动加载该API Key,无需在 UI 中手动输入。如果不设置,可以在软件启动后手动输入 API Key。": "Set the default Google Palm API Key? If set, the API Key will be automatically loaded when the software starts, and there is no need to manually enter it in the UI. If not set, you can manually enter the API Key after the software starts.",
"是否设置默认 HTTP 代理?这可以透过代理使用OpenAI API。": "Set the default HTTP proxy? This can use the OpenAI API through the proxy.",
"是否设置默认 MiniMax API 密钥和 Group ID?如果设置,软件启动时会自动加载该API Key,无需在 UI 中手动输入。如果不设置,将无法使用 MiniMax 模型。": "Set the default MiniMax API Key and Group ID? If set, the API Key will be automatically loaded when the software starts, and there is no need to manually enter it in the UI. If not set, the MiniMax model will not be available.",
"是否设置默认 OpenAI API Base?如果你在使用第三方API或者CloudFlare Workers等来中转OpenAI API,可以在这里设置。": "Set the default OpenAI API Base? If you are using a third-party API or CloudFlare Workers to transfer the OpenAI API, you can set it here.",
"是否设置默认 OpenAI API Key?如果设置,软件启动时会自动加载该API Key,无需在 UI 中手动输入。如果不设置,可以在软件启动后手动输入 API Key。": "Set the default OpenAI API Key? If set, the API Key will be automatically loaded when the software starts, and there is no need to manually enter it in the UI. If not set, you can manually enter the API Key after the software starts.",
"是否设置默认 XMChat API 密钥?如果设置,软件启动时会自动加载该API Key,无需在 UI 中手动输入。如果不设置,可以在软件启动后手动输入 API Key。": "Set the default XMChat API Key? If set, the API Key will be automatically loaded when the software starts, and there is no need to manually enter it in the UI. If not set, you can manually enter the API Key after the software starts.",
"是否选择自动命名对话历史的方式?": "Do you want to choose the way to automatically name the conversation history?",
"是否通过gradio分享?": "Share via gradio?",
"是否通过gradio分享?可以通过公网访问。": "Share via gradio? Can be accessed through the public network.",
"是否配置运行地址和端口?(不建议设置)": "Configure the running address and port? (Not recommended)",
"是否隐藏API Key输入框": "Hide API Key input box",
"是否隐藏API Key输入框?如果设置,将不会在 UI 中显示API Key输入框。": "Hide API Key input box? If set, the API Key input box will not be displayed in the UI.",
"服务器地址,例如设置为 0.0.0.0 则可以通过公网访问(如果你用公网IP)": "Server address, for example, set to 0.0.0。0 can be accessed through the public network (if you use a public network IP)",
"服务器端口": "Server port",
"未登录情况下是否不展示对话历史": "Do not show conversation history when not logged in",
"未设置用户名/密码情况下是否不展示对话历史?": "Do not show conversation history when username/password is not set?",
"本地编制索引": "Local indexing",
"机器人头像": "Bot avatar",
"用户头像": "User avatar",
"由于下面的原因,Google 拒绝返回 Gemini 的回答:\n\n": "For the following reasons, Google refuses to return Gemini's response:\n\n",
"百度云中的文心一言 API Key": "Baidu Cloud's ERNIE Bot API Key",
"百度云中的文心一言 Secret Key": "Baidu Cloud's ERNIE Bot Secret Key",
"自动命名对话历史的方式(0: 使用日期时间命名;1: 使用第一条提问命名,2: 使用模型自动总结。)": "The way to automatically name the conversation history (0: name by date and time; 1: name by first question, 2: name by model auto summary.)",
"讯飞星火 API Key": "Spark API Key",
"讯飞星火 API Secret": "Spark API Secret",
"讯飞星火 App ID": "Spark App ID",
"谷歌API Key(获取方式请看 https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search)": "Google API Key (see https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search for how to get it)",
"谷歌搜索引擎ID(获取方式请看 https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search)": "Google search engine ID (see https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search for how to get it)",
"输入的不是数字,将使用默认值。": "The input is not a number, the default value will be used.",
"额外模型列表": "Extra model list",
"默认模型": "Default model",
"获取资源错误": "Error retrieving resources.",
"该模型不支持多模态输入": "This model does not accept multi-modal input.",
" 中。": ".",
" 为: ": " as: ",
" 吗?": " ?",
"# ⚠️ 务必谨慎更改 ⚠️": "# ⚠️ Caution: Changes require care. ⚠️",
"**发送消息** 或 **提交key** 以显示额度": "**Send message** or **Submit key** to display credit",
"**本月使用金额** ": "**Monthly usage** ",
"**获取API使用情况失败**": "**Failed to get API usage**",
"**获取API使用情况失败**,sensitive_id错误或已过期": "**Failed to get API usage**, wrong or expired sensitive_id",
"**获取API使用情况失败**,需在填写`config.json`中正确填写sensitive_id": "**Failed to get API usage**, correct sensitive_id needed in `config.json`",
"== API 配置 ==": "== API Configuration ==",
"== 基础配置 ==": "== Basic Settings ==",
"== 高级配置 ==": "== Advanced Settings ==",
"API key为空,请检查是否输入正确。": "API key is empty, check whether it is entered correctly.",
"API密钥更改为了": "The API key is changed to",
"IP地址信息正在获取中,请稍候...": "IP address information is being retrieved, please wait...",
"JSON解析错误,收到的内容: ": "JSON parsing error, received content: ",
"SSL错误,无法获取对话。": "SSL error, unable to get dialogue.",
"Token 计数: ": "Token Count: ",
"☹️发生了错误:": "☹️Error: ",
"⚠️ 为保证API-Key安全,请在配置文件`config.json`中修改网络设置": "⚠️ To ensure the security of API-Key, please modify the network settings in the configuration file `config.json`.",
"⚠️请先删除知识库中的历史文件,再尝试上传!": "⚠️ Please clear the files in the knowledge base before trying to upload new files!",
"。": ".",
"。你仍然可以使用聊天功能。": ". You can still use the chat function.",
"上传": "Upload",
"上传了": "Uploaded",
"上传到 OpenAI 后自动填充": "Automatically filled after uploading to OpenAI",
"上传到OpenAI": "Upload to OpenAI",
"上传文件": "Upload files",
"不支持的文件: ": "Unsupported file:",
"中。": ".",
"中,包含了可用设置项及其简要说明。请查看 wiki 获取更多信息:": " contains available settings and brief descriptions. Please check the wiki for more information:",
"仅供查看": "For viewing only",
"从Prompt模板中加载": "Load from Prompt Template",
"从列表中加载对话": "Load dialog from list",
"代理地址": "Proxy address",
"代理错误,无法获取对话。": "Proxy error, unable to get dialogue.",
"你没有权限访问 GPT4,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)": "You do not have permission to access GPT-4, [learn more](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)",
"你没有选择任何对话历史": "You have not selected any conversation history.",
"你的": "Your ",
"你真的要删除 ": "Are you sure you want to delete ",
"你设置了 ": "You set ",
"你选择了不设置 ": "You chose not to set ",
"你选择了不设置用户账户。": "You chose not to set user account.",
"使用在线搜索": "Use online search",
"停止符,用英文逗号隔开...": "Type in stop token here, separated by comma...",
"关于": "About",
"关闭": "Close",
"准备数据集": "Prepare Dataset",
"切换亮暗色主题": "Switch light/dark theme",
"删除对话历史成功": "Successfully deleted conversation history.",
"删除这轮问答": "Delete this round of Q&A",
"刷新状态": "Refresh Status",
"剩余配额不足,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98#you-exceeded-your-current-quota-please-check-your-plan-and-billing-details)": "Insufficient remaining quota, [learn more](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98#you-exceeded-your-current-quota-please-check-your-plan-and-billing-details)",
"加载Prompt模板": "Load Prompt Template",
"单轮对话": "Single-turn",
"历史记录(JSON)": "History file (JSON)",
"参数": "Parameters",
"双栏pdf": "Two-column pdf",
"取消": "Cancel",
"取消所有任务": "Cancel All Tasks",
"可选,用于区分不同的模型": "Optional, used to distinguish different models",
"启用的工具:": "Enabled tools: ",
"在": "in",
"在工具箱中管理知识库文件": "Manage knowledge base files in the toolbox",
"在线搜索": "Web search",
"在这里输入": "Type in here",
"在这里输入System Prompt...": "Type in System Prompt here...",
"多账号模式已开启,无需输入key,可直接开始对话": "Multi-account mode is enabled, no need to enter key, you can start the dialogue directly",
"好": "OK",
"实时传输回答": "Stream output",
"对话": "Dialogue",
"对话历史": "Conversation history",
"对话历史记录": "Dialog History",
"对话命名方式": "History naming method",
"导出为 Markdown": "Export as Markdown",
"川虎Chat": "Chuanhu Chat",
"川虎Chat 🚀": "Chuanhu Chat 🚀",
"工具箱": "Toolbox",
"已经被删除啦": "It has been deleted.",
"开始实时传输回答……": "Start streaming output...",
"开始训练": "Start Training",
"微调": "Fine-tuning",
"总结": "Summarize",
"总结完成": "Summary completed.",
"您使用的就是最新版!": "You are using the latest version!",
"您的IP区域:": "Your IP region: ",
"您的IP区域:未知。": "Your IP region: Unknown.",
"您输入的 API 密钥为:": "The API key you entered is:",
"找到了缓存的索引文件,加载中……": "Found cached index file, loading...",
"拓展": "Extensions",
"搜索(支持正则)...": "Search (supports regex)...",
"数据集预览": "Dataset Preview",
"文件ID": "File ID",
"新对话 ": "New Chat ",
"新建对话保留Prompt": "Retain Prompt For New Chat",
"是否设置 HTTP 代理?[Y/N]:": "Do you want to set up an HTTP proxy? [Y/N]:",
"是否设置 OpenAI API 密钥?[Y/N]:": "Have you set the OpenAI API key? [Y/N]:",
"是否设置用户账户?设置后,用户需要登陆才可访问。输入 Yes(y) 或 No(n),默认No:": "Set user account? After setting, users need to log in to access. Enter Yes(y) or No(n), default No: ",
"暂时未知": "Unknown",
"更新": "Update",
"更新失败,请尝试[手动更新](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)": "Update failed, please try [manually updating](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)",
"更新成功,请重启本程序": "Updated successfully, please restart this program",
"未命名对话历史记录": "Unnamed Dialog History",
"未设置代理...": "No proxy...",
"本月使用金额": "Monthly usage",
"构建索引中……": "Building index...",
"查看[使用介绍](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35)": "View the [usage guide](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35) for more details",
"根据日期时间": "By date and time",
"模型": "Model",
"模型名称后缀": "Model Name Suffix",
"模型自动总结(消耗tokens)": "Auto summary by LLM (Consume tokens)",
"模型设置为了:": "Model is set to: ",
"正在尝试更新...": "Trying to update...",
"正在尝试重启...": "Trying to restart...",
"正在获取IP地址信息,请稍候...": "Getting IP address information, please wait...",
"正在进行首次设置,请按照提示进行配置,配置将会被保存在": "First-time setup is in progress, please follow the prompts to configure, and the configuration will be saved in",
"没有找到任何支持的文档。": "No supported documents found.",
"添加训练好的模型到模型列表": "Add trained model to the model list",
"状态": "Status",
"现在开始设置其他在线模型的API Key": "Start setting the API Key for other online models",
"现在开始进行交互式配置。碰到不知道该怎么办的设置项时,请直接按回车键跳过,程序会自动选择合适的默认值。": "Starting interactive configuration now. When you encounter a setting that you don't know what to do, just press the Enter key to skip, and the program will automatically select the appropriate default value.",
"现在开始进行交互式配置:": "Interactive configuration will now begin:",
"现在开始进行软件功能设置": "Start setting the software function now",
"生成内容总结中……": "Generating content summary...",
"用于定位滥用行为": "Used to locate abuse",
"用户标识符": "User identifier",
"由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536)、[明昭MZhao](https://space.bilibili.com/24807452) 和 [Keldos](https://github.com/Keldos-Li) 开发<br />访问川虎Chat的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本": "Developed by Bilibili [土川虎虎虎](https://space.bilibili.com/29125536), [明昭MZhao](https://space.bilibili.com/24807452) and [Keldos](https://github.com/Keldos-Li)\n\nDownload latest code from [GitHub](https://github.com/GaiZhenbiao/ChuanhuChatGPT)",
"由于下面的原因,Google 拒绝返回 Gemini 的回答:\\n\\n": "Google has refused to return Gemini's response due to the following reasons:",
"知识库": "Knowledge base",
"知识库文件": "Knowledge base files",
"立即重启": "Restart now",
"第一条提问": "By first question",
"索引已保存至本地!": "Index saved locally!",
"索引构建失败!": "Index build failed!",
"索引构建完成": "Indexing complete.",
"索引构建完成!": "Indexing completed!",
"网络": "Network",
"获取API使用情况失败:": "Failed to get API usage:",
"获取IP地理位置失败。原因:": "Failed to get IP location. Reason: ",
"获取对话时发生错误,请查看后台日志": "Error occurred when getting dialogue, check the background log",
"覆盖gradio.oauth /logout路由": "Overrided the gradio.oauth/logout route",
"训练": "Training",
"训练状态": "Training Status",
"训练轮数(Epochs)": "Training Epochs",
"设置": "Settings",
"设置保存文件名": "Set save file name",
"设置完成。现在请重启本程序。": "Setup completed. Please restart this program now.",
"设置文件名: 默认为.json,可选为.md": "Set file name: default is .json, optional is .md",
"识别公式": "formula OCR",
"详情": "Details",
"请先输入用户名,输入空行结束添加用户:": "Please enter the username first, press Enter to add the user: ",
"请先选择Ollama后端模型\\n\\n": "Please select the Ollama backend model first.",
"请查看 config_example.json,配置 Azure OpenAI": "Please review config_example.json to configure Azure OpenAI",
"请检查网络连接,或者API-Key是否有效。": "Check the network connection or whether the API-Key is valid.",
"请输入 ": "Please enter ",
"请输入 HTTP 代理地址:": "Please enter the HTTP proxy address:",
"请输入 OpenAI API 密钥:": "Please enter your OpenAI API key:",
"请输入密码:": "Please enter the password: ",
"请输入对话内容。": "Enter the content of the conversation.",
"请输入有效的文件名,不要包含以下特殊字符:": "Please enter a valid file name, do not include the following special characters: ",
"读取超时,无法获取对话。": "Read timed out, unable to get dialogue.",
"账单信息不适用": "Billing information is not applicable",
"跳过设置 HTTP 代理。": "Skip setting up HTTP proxy.",
"跳过设置 OpenAI API 密钥。": "Skip setting up OpenAI API key.",
"输入 Yes(y) 或 No(n),默认No:": "Enter Yes(y) or No(n), default No: ",
"连接超时,无法获取对话。": "Connection timed out, unable to get dialogue.",
"退出用户": "Log out user.",
"选择LoRA模型": "Select LoRA Model",
"选择Prompt模板集合文件": "Select Prompt Template Collection File",
"选择回复语言(针对搜索&索引功能)": "Select reply language (for search & index)",
"选择数据集": "Select Dataset",
"选择模型": "Select Model",
"配置已保存在 config.json 中。": "The configuration has been saved in config.json.",
"释放文件以上传": "Drop files to upload",
"重命名该对话": "Rename this chat",
"重新生成": "Regenerate",
"高级": "Advanced",
",本次对话累计消耗了 ": ", total cost: ",
",请使用 .pdf, .docx, .pptx, .epub, .xlsx 等文档。": "Please use .pdf, .docx, .pptx, .epub, .xlsx, etc. documents.",
",输入空行结束:": ", press Enter to end: ",
",默认为 ": ", default is ",
":": ": ",
"💾 保存对话": "💾 Save Dialog",
"📝 导出为 Markdown": "📝 Export as Markdown",
"🔄 切换API地址": "🔄 Switch API Address",
"🔄 刷新": "🔄 Refresh",
"🔄 检查更新...": "🔄 Check for Update...",
"🔄 设置代理地址": "🔄 Set Proxy Address",
"🔄 重新生成": "🔄 Regeneration",
"🔙 恢复默认网络设置": "🔙 Reset Network Settings",
"🗑️ 删除最新对话": "🗑️ Delete latest dialog",
"🗑️ 删除最旧对话": "🗑️ Delete oldest dialog",
"🧹 新的对话": "🧹 New Dialogue",
"gpt3.5turbo_description": "GPT-3.5 Turbo is a text-only large language model developed by OpenAI. It is based on the GPT-3 model and has been fine-tuned on a large amount of data. The latest version of GPT-3.5 Turbo has been optimized for performance and accuracy, supporting a context window of up to 16k tokens and a maximum response length of 4096 tokens. This model always uses the latest version of GPT-3.5 Turbo that is available.",
"gpt3.5turbo_instruct_description": "GPT3.5 Turbo Instruct is a text completion model developed by OpenAI. It has similar capabilities as GPT-3 era models. Compatible with legacy Completions endpoint and not Chat Completions. This model has a context window of 4096 tokens.",
"gpt3.5turbo_16k_description": "Legacy model of GPT-3.5 Turbo with a context window of 16k tokens.",
"gpt4_description": "GPT-4 is a text-only large language model developed by OpenAI. It has a context window of 8192 tokens, and a maximum response length of 4096 tokens. This model always uses the latest version of GPT-4 that is available. It's recommended to use GPT-4 Turbo for better performance, better speed and lower cost.",
"gpt4_32k_description": "GPT-4 32k is a text-only large language model developed by OpenAI. It has a context window of 32k tokens, and a maximum response length of 4096 tokens. This model was never rolled out widely in favor of GPT-4 Turbo.",
"gpt4turbo_description": "GPT-4 Turbo is a multimodal large language model developed by OpenAI. It offers state-of-the-art performance on a wide range of natural language processing tasks, including text generation, translation, summarization, visual question answering, and more. GPT-4 Turbo has a context window of up to 128k tokens and a maximum response length of 4096 tokens. This model always uses the latest version of GPT-4 Turbo that is available.",
"claude3_haiku_description": "Claude3 Haiku is a multimodal large language model developed by Anthropic. It's the fastest and most compact model in the Claude 3 model family, designed for near-instant responsiveness and seamless AI experiences that mimic human interactions. Claude3 Haiku has a context window of up to 200k tokens and a maximum response length of 4096 tokens. This model always uses the latest version of Claude3 Haiku that is available.",
"claude3_sonnet_description": "Claude3 Sonnet is a multimodal large language model developed by Anthropic. It's the most balanced model between intelligence and speed in the Claude 3 model family, designed for enterprise workloads and scaled AI deployments. Claude3 Sonnet has a context window of up to 200k tokens and a maximum response length of 4096 tokens. This model always uses the latest version of Claude3 Sonnet that is available.",
"claude3_opus_description": "Claude3 Opus is a multimodal large language model developed by Anthropic. It's the most intelligent and largest model in the Claude 3 model family, delivering state-of-the-art performance on highly complex tasks and demonstrating fluency and human-like understanding. Claude3 Opus has a context window of up to 200k tokens and a maximum response length of 4096 tokens. This model always uses the latest version of Claude3 Opus that is available.",
"groq_llama3_8b_description": "LLaMA 3 8B with [Groq](https://console.groq.com/), the impressively fast language model inferencing service.",
"groq_llama3_70b_description": "LLaMA 3 70B with [Groq](https://console.groq.com/), the impressively fast language model inferencing service.",
"groq_mixtral_8x7b_description": "Mixtral 8x7B with [Groq](https://console.groq.com/), the impressively fast language model inferencing service.",
"groq_gemma_7b_description": "Gemma 7B with [Groq](https://console.groq.com/), the impressively fast language model inferencing service.",
"chuanhu_description": "An agent that can use multiple tools to solve complex problems.",
"gpt_default_slogan": "How can I help you today?",
"claude_default_slogan": "What can l help you with?",
"chuanhu_slogan": "What can Chuanhu do for you today?",
"chuanhu_question_1": "What's the weather in Hangzhou today?",
"chuanhu_question_2": "Any new releases from Apple?",
"chuanhu_question_3": "Current prices of graphics cards?",
"chuanhu_question_4": "Any new trends on TikTok?",
"gpt4o_description": "OpenAI's most advanced, multimodal flagship model that’s cheaper and faster than GPT-4 Turbo.",
"gpt4omini_description": "OpenAI's affordable and intelligent small model for fast, lightweight tasks.",
"gpt5_description": "The best model for coding and agentic tasks across domains. 400,000-token context window and up to 128,000 output tokens.",
"gpt5mini_description": "A faster, more cost-efficient version of GPT-5 for well-defined tasks. 400,000-token context window and up to 128,000 output tokens.",
"gpt5nano_description": "Fastest, most cost-efficient version of GPT-5. 400,000-token context window and up to 128,000 output tokens.",
"o1_description": "The o1 series of large language models are trained with reinforcement learning to perform complex reasoning. o1 models think before they answer, producing a long internal chain of thought before responding to the user.",
"no_permission_to_update_description": "You do not have permission to update. Please contact the administrator. The administrator's configuration method is to add the username to the admin_list in the configuration file config.json."
}
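The keys in this file are the original Chinese strings passed to `i18n(...)` in the source code, and the values are the English translations. The project's actual lookup lives in modules/webui_locale.py; a hypothetical minimal version (class name and API are illustrative, not the repo's) would fall back to the key itself when a translation is missing:

```python
class I18n:
    def __init__(self, table: dict):
        self.table = table

    def __call__(self, key: str) -> str:
        # Missing entries fall back to the source (Chinese) string,
        # so untranslated UI text still renders instead of breaking.
        return self.table.get(key, key)

i18n = I18n({"上传文件": "Upload files", "模型": "Model"})
```

Usage: `i18n("上传文件")` returns "Upload files", while an unknown key is returned unchanged.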
================================================
FILE: locale/extract_locale.py
================================================
import asyncio
import logging
import os
import re
import sys
import aiohttp
import commentjson
import commentjson as json
asyncio.set_event_loop_policy(asyncio.DefaultEventLoopPolicy())
with open("config.json", "r", encoding="utf-8") as f:
config = commentjson.load(f)
api_key = config["openai_api_key"]
url = config["openai_api_base"] + "/v1/chat/completions" if "openai_api_base" in config else "https://api.openai.com/v1/chat/completions"
def get_current_strings():
pattern = r'i18n\s*\(\s*["\']([^"\']*(?:\)[^"\']*)?)["\']\s*\)'
# Load the .py files
contents = ""
for dirpath, dirnames, filenames in os.walk("."):
for filename in filenames:
if filename.endswith(".py"):
filepath = os.path.join(dirpath, filename)
with open(filepath, 'r', encoding='utf-8') as f:
contents += f.read()
# Matching with regular expressions
matches = re.findall(pattern, contents, re.DOTALL)
data = {match.strip('()"'): '' for match in matches}
fixed_data = {} # fix some keys
for key, value in data.items():
if "](" in key and key.count("(") != key.count(")"):
fixed_data[key+")"] = value
else:
fixed_data[key] = value
return fixed_data
def get_locale_strings(filename):
try:
with open(filename, "r", encoding="utf-8") as f:
locale_strs = json.load(f)
except FileNotFoundError:
locale_strs = {}
return locale_strs
def sort_strings(existing_translations):
# Sort the merged data
sorted_translations = {}
# Add entries with (NOT USED) in their values
for key, value in sorted(existing_translations.items(), key=lambda x: x[0]):
if "(🔴NOT USED)" in value:
sorted_translations[key] = value
# Add entries with empty values
for key, value in sorted(existing_translations.items(), key=lambda x: x[0]):
if value == "":
sorted_translations[key] = value
# Add the rest of the entries
for key, value in sorted(existing_translations.items(), key=lambda x: x[0]):
if value != "" and "(NOT USED)" not in value:
sorted_translations[key] = value
return sorted_translations
async def auto_translate(text, language):
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {api_key}",
}
payload = {
"model": "gpt-3.5-turbo",
"temperature": 0,
"messages": [
{
"role": "system",
"content": f"You are a translation program;\nYour job is to translate user input into {language};\nThe content you are translating is a string in the App;\nDo not explain emoji;\nIf the input is only an emoji, simply return the original emoji;\nPlease ensure that the translation results are concise and easy to understand."
},
{"role": "user", "content": text}
],
}
async with aiohttp.ClientSession() as session:
async with session.post(url, headers=headers, json=payload) as response:
data = await response.json()
return data["choices"][0]["message"]["content"]
async def main(auto=False):
current_strs = get_current_strings()
locale_files = []
# Walk all .json files under the locale directory
for dirpath, dirnames, filenames in os.walk("locale"):
for filename in filenames:
if filename.endswith(".json"):
locale_files.append(os.path.join(dirpath, filename))
for locale_filename in locale_files:
if "zh_CN" in locale_filename:
continue
try:
locale_strs = get_locale_strings(locale_filename)
except json.decoder.JSONDecodeError:
import traceback
traceback.print_exc()
logging.error(f"Error decoding {locale_filename}")
continue
# Add new keys
new_keys = []
for key in current_strs:
if key not in locale_strs:
new_keys.append(key)
locale_strs[key] = ""
print(f"{locale_filename[7:-5]}'s new str: {len(new_keys)}")
# Add (NOT USED) to invalid keys
for key in locale_strs:
if key not in current_strs:
locale_strs[key] = "(🔴NOT USED)" + locale_strs[key]
print(f"{locale_filename[7:-5]}'s invalid str: {len(locale_strs) - len(current_strs)}")
locale_strs = sort_strings(locale_strs)
if auto:
tasks = []
non_translated_keys = []
for key in locale_strs:
if locale_strs[key] == "":
non_translated_keys.append(key)
tasks.append(auto_translate(key, locale_filename[7:-5]))
results = await asyncio.gather(*tasks)
for key, result in zip(non_translated_keys, results):
locale_strs[key] = "(🟡REVIEW NEEDED)" + result
print(f"{locale_filename[7:-5]}'s auto translated str: {len(non_translated_keys)}")
with open(locale_filename, 'w', encoding='utf-8') as f:
json.dump(locale_strs, f, ensure_ascii=False, indent=4)
if __name__ == "__main__":
auto = False
if len(sys.argv) > 1 and sys.argv[1] == "--auto":
auto = True
asyncio.run(main(auto))
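The merge policy that `main()` applies to each locale file can be sketched in isolation. `merge_locale` below is a hypothetical helper written only for illustration (it is not part of `extract_locale.py`): new source strings get an empty translation, and translations whose source string no longer exists are prefixed with "(🔴NOT USED)".

```python
# Illustrative stand-alone sketch of the merge policy used in main() above.
def merge_locale(current_keys, locale_strs):
    merged = dict(locale_strs)
    # New keys appear with an empty translation, to be filled in later.
    for key in current_keys:
        merged.setdefault(key, "")
    # Keys that vanished from the source are kept but flagged for cleanup.
    for key, value in locale_strs.items():
        if key not in current_keys:
            merged[key] = "(🔴NOT USED)" + value
    return merged

print(merge_locale({"你好"}, {"你好": "Hello", "再见": "Goodbye"}))
# → {'你好': 'Hello', '再见': '(🔴NOT USED)Goodbye'}
```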
================================================
FILE: locale/ja_JP.json
================================================
{
"获取资源错误": "リソースの取得エラー",
"该模型不支持多模态输入": "このモデルはマルチモーダル入力に対応していません。",
" 中。": "中。",
" 为: ": "対:",
" 吗?": " を削除してもよろしいですか?",
"# ⚠️ 务必谨慎更改 ⚠️": "# ⚠️ 変更を慎重に ⚠️",
"**发送消息** 或 **提交key** 以显示额度": "**メッセージを送信** または **キーを送信** して、クレジットを表示します",
"**本月使用金额** ": "**今月の使用料金** ",
"**获取API使用情况失败**": "**API使用状況の取得に失敗しました**",
"**获取API使用情况失败**,sensitive_id错误或已过期": "**API使用状況の取得に失敗しました**、sensitive_idが間違っているか、期限切れです",
"**获取API使用情况失败**,需在填写`config.json`中正确填写sensitive_id": "**API使用状況の取得に失敗しました**、`config.json`に正しい`sensitive_id`を入力する必要があります",
"== API 配置 ==": "== API設定 ==",
"== 基础配置 ==": "== Basic Configuration ==",
"== 高级配置 ==": "== Advanced Settings ==",
"API key为空,请检查是否输入正确。": "APIキーが入力されていません。正しく入力されているか確認してください。",
"API密钥更改为了": "APIキーが変更されました",
"IP地址信息正在获取中,请稍候...": "IPアドレス情報を取得中です。お待ちください...",
"JSON解析错误,收到的内容: ": "JSON解析エラー、受信内容: ",
"SSL错误,无法获取对话。": "SSLエラー、会話を取得できません。",
"Token 计数: ": "Token数: ",
"☹️发生了错误:": "エラーが発生しました: ",
"⚠️ 为保证API-Key安全,请在配置文件`config.json`中修改网络设置": "⚠️ APIキーの安全性を確保するために、`config.json`ファイルでネットワーク設定を変更してください。",
"⚠️请先删除知识库中的历史文件,再尝试上传!": "⚠️ ナレッジベースの履歴ファイルを削除してから、アップロードを試してください!",
"。": "。",
"。你仍然可以使用聊天功能。": "。あなたはまだチャット機能を使用できます。",
"上传": "アップロード",
"上传了": "アップロードしました。",
"上传到 OpenAI 后自动填充": "OpenAIへのアップロード後、自動的に入力されます",
"上传到OpenAI": "OpenAIへのアップロード",
"上传文件": "ファイルをアップロード",
"不支持的文件: ": "サポートされていないファイル:",
"中。": "ちゅう。",
"中,包含了可用设置项及其简要说明。请查看 wiki 获取更多信息:": "使用者名またはパスワードが正しくありません。再試行してください。",
"仅供查看": "閲覧専用",
"从Prompt模板中加载": "Promptテンプレートから読込",
"从列表中加载对话": "リストから会話を読込",
"代理地址": "プロキシアドレス",
"代理错误,无法获取对话。": "プロキシエラー、会話を取得できません。",
"你没有权限访问 GPT4,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)": "GPT-4にアクセス権がありません、[詳細はこちら](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)",
"你没有选择任何对话历史": "あなたは何の会話履歴も選択していません。",
"你的": "あなたの",
"你真的要删除 ": "本当に ",
"你设置了 ": "設定した内容: ",
"你选择了不设置 ": "設定を選択していません。",
"你选择了不设置用户账户。": "You have chosen not to set up a user account.",
"使用在线搜索": "オンライン検索を使用",
"停止符,用英文逗号隔开...": "英語のカンマで区切りにしてください。...",
"关于": "について",
"关闭": "閉じる",
"准备数据集": "データセットの準備",
"切换亮暗色主题": "テーマの明暗切替",
"删除对话历史成功": "削除した会話の履歴",
"删除这轮问答": "この質疑応答を削除",
"刷新状态": "ステータスを更新",
"剩余配额不足,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98#you-exceeded-your-current-quota-please-check-your-plan-and-billing-details)": "剩余配额不足,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98#you-exceeded-your-current-quota-please-check-your-plan-and-billing-details)",
"加载Prompt模板": "Promptテンプレートを読込",
"单轮对话": "単発会話",
"历史记录(JSON)": "履歴ファイル(JSON)",
"参数": "調整",
"双栏pdf": "2カラムpdf",
"取消": "キャンセル",
"取消所有任务": "すべてのタスクをキャンセル",
"可选,用于区分不同的模型": "オプション、異なるモデルを区別するために使用",
"启用的工具:": "有効なツール:",
"在": "In",
"在工具箱中管理知识库文件": "ツールボックスでナレッジベースファイルの管理を行う",
"在线搜索": "オンライン検索",
"在这里输入": "ここに入力",
"在这里输入System Prompt...": "System Promptを入力してください...",
"多账号模式已开启,无需输入key,可直接开始对话": "複数アカウントモードがオンになっています。キーを入力する必要はありません。会話を開始できます",
"好": "はい",
"实时传输回答": "ストリーム出力",
"对话": "会話",
"对话历史": "対話履歴",
"对话历史记录": "会話履歴",
"对话命名方式": "会話の命名方法",
"导出为 Markdown": "Markdownでエクスポート",
"川虎Chat": "川虎Chat",
"川虎Chat 🚀": "川虎Chat 🚀",
"工具箱": "ツールボックス",
"已经被删除啦": "削除されました。",
"开始实时传输回答……": "ストリーム出力開始……",
"开始训练": "トレーニングを開始",
"微调": "ファインチューニング",
"总结": "要約する",
"总结完成": "完了",
"您使用的就是最新版!": "最新バージョンを使用しています!",
"您的IP区域:": "あなたのIPアドレス地域:",
"您的IP区域:未知。": "あなたのIPアドレス地域:不明",
"您输入的 API 密钥为:": "入力されたAPIキーは:",
"找到了缓存的索引文件,加载中……": "キャッシュされたインデックスファイルが見つかりました、読み込んでいます...",
"拓展": "拡張",
"搜索(支持正则)...": "検索(正規表現をサポート)...",
"数据集预览": "データセットのプレビュー",
"文件ID": "ファイルID",
"新对话 ": "新しい会話 ",
"新建对话保留Prompt": "新しい会話を作るたびに、このプロンプトが維持しますか。",
"是否设置 HTTP 代理?[Y/N]:": "HTTPプロキシを設定しますか?[Y/N]:",
"是否设置 OpenAI API 密钥?[Y/N]:": "OpenAI APIのキーを設定しますか?[Y/N]:",
"是否设置用户账户?设置后,用户需要登陆才可访问。输入 Yes(y) 或 No(n),默认No:": "ユーザーアカウントを設定しますか?アカウントを設定すると、ユーザーはログインしてアクセスする必要があります。Yes(y) または No(n) を入力してください。デフォルトはNoです:",
"暂时未知": "しばらく不明である",
"更新": "アップデート",
"更新失败,请尝试[手动更新](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)": "更新に失敗しました、[手動での更新](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)をお試しください。",
"更新成功,请重启本程序": "更新が成功しました、このプログラムを再起動してください",
"未命名对话历史记录": "名無しの会話履歴",
"未设置代理...": "代理が設定されていません...",
"本月使用金额": "今月の使用料金",
"构建索引中……": "インデックスを構築中...",
"查看[使用介绍](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35)": "[使用ガイド](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35)を表示",
"根据日期时间": "日付と時刻に基づいて",
"模型": "LLMモデル",
"模型名称后缀": "モデル名のサフィックス",
"模型自动总结(消耗tokens)": "モデルによる自動要約(トークン消費)",
"模型设置为了:": "LLMモデルを設定しました: ",
"正在尝试更新...": "更新を試みています...",
"正在尝试重启...": "再起動を試みています...",
"正在获取IP地址信息,请稍候...": "IPアドレス情報を取得しています、しばらくお待ちください...",
"正在进行首次设置,请按照提示进行配置,配置将会被保存在": "最初のセットアップ中です。指示に従って設定を行い、設定は保存されます。",
"没有找到任何支持的文档。": "サポートされているドキュメントが見つかりませんでした。",
"添加训练好的模型到模型列表": "トレーニング済みモデルをモデルリストに追加",
"状态": "ステータス",
"现在开始设置其他在线模型的API Key": "他のオンラインモデルのAPIキーを設定します。",
"现在开始进行交互式配置。碰到不知道该怎么办的设置项时,请直接按回车键跳过,程序会自动选择合适的默认值。": "インタラクティブな構成が始まりました。わからない設定がある場合は、Enterキーを押してスキップしてください。プログラムが適切なデフォルト値を自動で選択します。",
"现在开始进行交互式配置:": "インタラクティブな設定が始まります:",
"现在开始进行软件功能设置": "ソフトウェア機能の設定を開始します",
"生成内容总结中……": "コンテンツ概要を生成しています...",
"用于定位滥用行为": "不正行為を特定できるため",
"用户标识符": "ユーザー識別子",
"由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536)、[明昭MZhao](https://space.bilibili.com/24807452) 和 [Keldos](https://github.com/Keldos-Li) 开发<br />访问川虎Chat的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本": "開発:Bilibili [土川虎虎虎](https://space.bilibili.com/29125536) と [明昭MZhao](https://space.bilibili.com/24807452) と [Keldos](https://github.com/Keldos-Li)\n\n最新コードは川虎Chatのサイトへ [GitHubプロジェクト](https://github.com/GaiZhenbiao/ChuanhuChatGPT)",
"由于下面的原因,Google 拒绝返回 Gemini 的回答:\\n\\n": "GoogleがGeminiの回答を返信しない理由:",
"知识库": "ファイル収納庫",
"知识库文件": "ナレッジベースファイル",
"立即重启": "今すぐ再起動",
"第一条提问": "最初の質問",
"索引已保存至本地!": "インデックスはローカルに保存されました!",
"索引构建失败!": "インデックスの作成に失敗しました!",
"索引构建完成": "索引の構築が完了しました。",
"索引构建完成!": "インデックスの構築が完了しました!",
"网络": "ネットワーク",
"获取API使用情况失败:": "API使用状況の取得に失敗しました:",
"获取IP地理位置失败。原因:": "IPアドレス地域の取得に失敗しました。理由:",
"获取对话时发生错误,请查看后台日志": "会話取得時にエラー発生、あとのログを確認してください",
"覆盖gradio.oauth /logout路由": "\"gradio.oauth /logout\" ルートをオーバーライドします。",
"训练": "トレーニング",
"训练状态": "トレーニングステータス",
"训练轮数(Epochs)": "トレーニングエポック数",
"设置": "設定",
"设置保存文件名": "保存ファイル名を設定",
"设置完成。现在请重启本程序。": "設定完了。今度はアプリを再起動してください。",
"设置文件名: 默认为.json,可选为.md": "ファイル名を設定: デフォルトは.json、.mdを選択できます",
"识别公式": "formula OCR",
"详情": "詳細",
"请先输入用户名,输入空行结束添加用户:": "ユーザー名を入力してください。ユーザーの追加は空行で終了します。",
"请先选择Ollama后端模型\\n\\n": "Ollamaのバックエンドモデルを選択してください。",
"请查看 config_example.json,配置 Azure OpenAI": "Azure OpenAIの設定については、config_example.jsonをご覧ください",
"请检查网络连接,或者API-Key是否有效。": "ネットワーク接続を確認するか、APIキーが有効かどうかを確認してください。",
"请输入 ": "入力してください",
"请输入 HTTP 代理地址:": "HTTPプロキシアドレスを入力してください:",
"请输入 OpenAI API 密钥:": "OpenAI APIキーを入力してください:",
"请输入密码:": "パスワードを入力してください。",
"请输入对话内容。": "会話内容を入力してください。",
"请输入有效的文件名,不要包含以下特殊字符:": "有効なファイル名を入力してください。以下の特殊文字は使用しないでください:",
"读取超时,无法获取对话。": "読み込みタイムアウト、会話を取得できません。",
"账单信息不适用": "課金情報は対象外です",
"跳过设置 HTTP 代理。": "Skip setting up HTTP proxy.",
"跳过设置 OpenAI API 密钥。": "OpenAI APIキーの設定をスキップします。",
"输入 Yes(y) 或 No(n),默认No:": "Yes(y)またはNo(n)を入力してください、デフォルトはNoです:",
"连接超时,无法获取对话。": "接続タイムアウト、会話を取得できません。",
"退出用户": "ユーザーをログアウトします。",
"选择LoRA模型": "LoRAモデルを選択",
"选择Prompt模板集合文件": "Promptテンプレートコレクションを選択",
"选择回复语言(针对搜索&索引功能)": "回答言語を選択(検索とインデックス機能に対して)",
"选择数据集": "データセットの選択",
"选择模型": "LLMモデルを選択",
"配置已保存在 config.json 中。": "Config.json に設定が保存されました。",
"释放文件以上传": "ファイルをアップロードするには、ここでドロップしてください",
"重命名该对话": "会話の名前を変更",
"重新生成": "再生成",
"高级": "Advanced",
",本次对话累计消耗了 ": ", 今の会話で消費合計 ",
",请使用 .pdf, .docx, .pptx, .epub, .xlsx 等文档。": ".pdf、.docx、.pptx、.epub、.xlsxなどのドキュメントを使用してください。",
",输入空行结束:": "、空行で終了します:",
",默认为 ": "デフォルトです",
":": ":",
"💾 保存对话": "💾 会話を保存",
"📝 导出为 Markdown": "📝 Markdownにエクスポート",
"🔄 切换API地址": "🔄 APIアドレスを切り替え",
"🔄 刷新": "🔄 更新",
"🔄 检查更新...": "🔄 アップデートをチェック...",
"🔄 设置代理地址": "🔄 プロキシアドレスを設定",
"🔄 重新生成": "🔄 再生成",
"🔙 恢复默认网络设置": "🔙 ネットワーク設定のリセット",
"🗑️ 删除最新对话": "🗑️ 最新の会話削除",
"🗑️ 删除最旧对话": "🗑️ 最古の会話削除",
"🧹 新的对话": "🧹 新しい会話",
"gpt5_description": "ドメイン横断のコーディングとエージェントタスクに最適なモデル。40万トークンのコンテキスト、最大12.8万トークン出力に対応。",
"gpt5mini_description": "明確に定義されたタスク向けの、より高速かつ高コスト効率なGPT-5のバージョン。40万トークンのコンテキスト、最大12.8万トークン出力に対応。",
"gpt5nano_description": "最速で最もコスト効率の高いGPT-5のバージョン。40万トークンのコンテキスト、最大12.8万トークン出力に対応。",
"o1_description": "o1シリーズの大規模言語モデルは、複雑な推論を行うために強化学習で訓練されています。o1モデルは回答する前に考え、ユーザーに応答する前に長い内部思考の連鎖を生成します。",
"no_permission_to_update_description": "アップデートの権限がありません。 管理者に連絡してください。 管理者の設定は、設定ファイルconfig.jsonのadmin_listにユーザー名を追加することで行います。"
}
================================================
FILE: locale/ko_KR.json
================================================
{
"获取资源错误": "Error fetching resources",
"该模型不支持多模态输入": "이 모델은 다중 모달 입력을 지원하지 않습니다.",
" 中。": "가운데입니다.",
" 为: ": "되다",
" 吗?": " 을(를) 삭제하시겠습니까?",
"# ⚠️ 务必谨慎更改 ⚠️": "# ⚠️ 주의: 변경시 주의하세요. ⚠️",
"**发送消息** 或 **提交key** 以显示额度": "**메세지를 전송** 하거나 **Key를 입력**하여 크레딧 표시",
"**本月使用金额** ": "**이번 달 사용금액** ",
"**获取API使用情况失败**": "**API 사용량 가져오기 실패**",
"**获取API使用情况失败**,sensitive_id错误或已过期": "**API 사용량 가져오기 실패**. sensitive_id가 잘못되었거나 만료되었습니다",
"**获取API使用情况失败**,需在填写`config.json`中正确填写sensitive_id": "**API 사용량 가져오기 실패**. `config.json`에 올바른 `sensitive_id`를 입력해야 합니다",
"== API 配置 ==": "== API 설정 ==",
"== 基础配置 ==": "== Basic Settings ==",
"== 高级配置 ==": "== Advanced Settings ==",
"API key为空,请检查是否输入正确。": "API 키가 비어 있습니다. 올바르게 입력되었는지 확인하십세요.",
"API密钥更改为了": "API 키가 변경되었습니다.",
"IP地址信息正在获取中,请稍候...": "IP 주소 정보를 가져오는 중입니다. 잠시 기다려주세요...",
"JSON解析错误,收到的内容: ": "JSON 파싱 에러, 응답: ",
"SSL错误,无法获取对话。": "SSL 에러, 대화를 가져올 수 없습니다.",
"Token 计数: ": "토큰 수: ",
"☹️发生了错误:": "☹️에러: ",
"⚠️ 为保证API-Key安全,请在配置文件`config.json`中修改网络设置": "⚠️ API-Key의 안전을 보장하기 위해 네트워크 설정을 `config.json` 구성 파일에서 수정해주세요.",
"⚠️请先删除知识库中的历史文件,再尝试上传!": "⚠️ 먼저 지식 라이브러리에서 기록 파일을 삭제한 후 다시 업로드하세요!",
"。": "。",
"。你仍然可以使用聊天功能。": ". 채팅 기능을 계속 사용할 수 있습니다.",
"上传": "업로드",
"上传了": "업로드완료.",
"上传到 OpenAI 后自动填充": "OpenAI로 업로드한 후 자동으로 채워집니다",
"上传到OpenAI": "OpenAI로 업로드",
"上传文件": "파일 업로드",
"不支持的文件: ": "지원되지 않는 파일:",
"中。": "중요합니다.",
"中,包含了可用设置项及其简要说明。请查看 wiki 获取更多信息:": "중에는 사용 가능한 설정 옵션과 간단한 설명이 포함되어 있습니다. 자세한 정보는 위키를 확인해주세요.",
"仅供查看": "읽기 전용",
"从Prompt模板中加载": "프롬프트 템플릿에서 불러오기",
"从列表中加载对话": "리스트에서 대화 불러오기",
"代理地址": "프록시 주소",
"代理错误,无法获取对话。": "프록시 에러, 대화를 가져올 수 없습니다.",
"你没有权限访问 GPT4,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)": "GPT-4에 접근 권한이 없습니다. [자세히 알아보기](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)",
"你没有选择任何对话历史": "대화 기록을 선택하지 않았습니다.",
"你的": "당신의",
"你真的要删除 ": "정말로 ",
"你设置了 ": "설정되었습니다.",
"你选择了不设置 ": "설정을 하지 않았습니다",
"你选择了不设置用户账户。": "사용자 계정을 설정하지 않았습니다.",
"使用在线搜索": "온라인 검색 사용",
"停止符,用英文逗号隔开...": "여기에 정지 토큰 입력, ','로 구분됨...",
"关于": "관련",
"关闭": "닫기",
"准备数据集": "데이터셋 준비",
"切换亮暗色主题": "라이트/다크 테마 전환",
"删除对话历史成功": "대화 기록이 성공적으로 삭제되었습니다.",
"删除这轮问答": "이 라운드의 질문과 답변 삭제",
"刷新状态": "상태 새로 고침",
"剩余配额不足,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98#you-exceeded-your-current-quota-please-check-your-plan-and-billing-details)": "남은 할당량이 부족합니다. [자세한 내용](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98#you-exceeded-your-current-quota-please-check-your-plan-and-billing-details)을 확인하세요.",
"加载Prompt模板": "프롬프트 템플릿 불러오기",
"单轮对话": "단일 대화",
"历史记录(JSON)": "기록 파일 (JSON)",
"参数": "파라미터들",
"双栏pdf": "2-column pdf",
"取消": "취소",
"取消所有任务": "모든 작업 취소",
"可选,用于区分不同的模型": "선택 사항, 다른 모델을 구분하는 데 사용",
"启用的工具:": "활성화된 도구: ",
"在": "올",
"在工具箱中管理知识库文件": "지식 라이브러리 파일을 도구 상자에서 관리",
"在线搜索": "온라인 검색",
"在这里输入": "여기에 입력하세요",
"在这里输入System Prompt...": "여기에 시스템 프롬프트를 입력하세요...",
"多账号模式已开启,无需输入key,可直接开始对话": "다중 계정 모드가 활성화되어 있으므로 키를 입력할 필요가 없이 바로 대화를 시작할 수 있습니다",
"好": "예",
"实时传输回答": "실시간 전송",
"对话": "대화",
"对话历史": "대화 내역",
"对话历史记录": "대화 기록",
"对话命名方式": "대화 이름 설정",
"导出为 Markdown": "Markdown으로 내보내기",
"川虎Chat": "Chuanhu Chat",
"川虎Chat 🚀": "Chuanhu Chat 🚀",
"工具箱": "도구 상자",
"已经被删除啦": "이미 삭제되었습니다.",
"开始实时传输回答……": "실시간 응답 출력 시작...",
"开始训练": "훈련 시작",
"微调": "파인튜닝",
"总结": "요약",
"总结完成": "작업 완료",
"您使用的就是最新版!": "최신 버전을 사용하고 있습니다!",
"您的IP区域:": "당신의 IP 지역: ",
"您的IP区域:未知。": "IP 지역: 알 수 없음.",
"您输入的 API 密钥为:": "당신이 입력한 API 키는:",
"找到了缓存的索引文件,加载中……": "캐시된 인덱스 파일을 찾았습니다. 로딩 중...",
"拓展": "확장",
"搜索(支持正则)...": "검색 (정규식 지원)...",
"数据集预览": "데이터셋 미리보기",
"文件ID": "파일 ID",
"新对话 ": "새 대화 ",
"新建对话保留Prompt": "새 대화 생성, 프롬프트 유지하기",
"是否设置 HTTP 代理?[Y/N]:": "HTTP 프록시를 설정하시겠습니까? [Y/N]: ",
"是否设置 OpenAI API 密钥?[Y/N]:": "OpenAI API 키를 설정하시겠습니까? [Y/N]:",
"是否设置用户账户?设置后,用户需要登陆才可访问。输入 Yes(y) 或 No(n),默认No:": "사용자 계정을 설정하시겠습니까? 계정을 설정하면 사용자는 로그인해야만 접속할 수 있습니다. Yes(y) 또는 No(n)을 입력하세요. 기본값은 No입니다:",
"暂时未知": "알 수 없음",
"更新": "업데이트",
"更新失败,请尝试[手动更新](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)": "업데이트 실패, [수동 업데이트](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)를 시도하십시오",
"更新成功,请重启本程序": "업데이트 성공, 이 프로그램을 재시작 해주세요",
"未命名对话历史记录": "이름없는 대화 기록",
"未设置代理...": "프록시가 설정되지 않았습니다...",
"本月使用金额": "이번 달 사용금액",
"构建索引中……": "인덱스 작성 중...",
"查看[使用介绍](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35)": "[사용 가이드](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35) 보기",
"根据日期时间": "날짜 및 시간 기준",
"模型": "LLM 모델",
"模型名称后缀": "모델 이름 접미사",
"模型自动总结(消耗tokens)": "모델에 의한 자동 요약 (토큰 소비)",
"模型设置为了:": "설정된 모델: ",
"正在尝试更新...": "업데이트를 시도 중...",
"正在尝试重启...": "재시작을 시도 중...",
"正在获取IP地址信息,请稍候...": "IP 주소 정보를 가져오는 중입니다. 잠시만 기다려주세요...",
"正在进行首次设置,请按照提示进行配置,配置将会被保存在": "첫 설정 중입니다. 안내에 따라 구성하십시오. 설정은 저장됩니다.",
"没有找到任何支持的文档。": "지원되는 문서를 찾을 수 없습니다.",
"添加训练好的模型到模型列表": "훈련된 모델을 모델 목록에 추가",
"状态": "상태",
"现在开始设置其他在线模型的API Key": "다른 온라인 모델의 API 키를 설정하세요.",
"现在开始进行交互式配置。碰到不知道该怎么办的设置项时,请直接按回车键跳过,程序会自动选择合适的默认值。": "인터랙티브 설정이 시작되었습니다. 설정 항목을 모르는 경우 바로 Enter 키를 눌러 기본값을 자동으로 선택합니다.",
"现在开始进行交互式配置:": "대화형 설정이 시작됩니다:",
"现在开始进行软件功能设置": "소프트웨어 기능 설정을 시작합니다.",
"生成内容总结中……": "콘텐츠 요약 생성중...",
"用于定位滥用行为": "악용 사례 파악에 활용됨",
"用户标识符": "사용자 식별자",
"由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536)、[明昭MZhao](https://space.bilibili.com/24807452) 和 [Keldos](https://github.com/Keldos-Li) 开发<br />访问川虎Chat的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本": "제작: Bilibili [土川虎虎虎](https://space.bilibili.com/29125536), [明昭MZhao](https://space.bilibili.com/24807452), [Keldos](https://github.com/Keldos-Li)\n\n최신 코드 다운로드: [GitHub](https://github.com/GaiZhenbiao/ChuanhuChatGPT)",
"由于下面的原因,Google 拒绝返回 Gemini 的回答:\\n\\n": "Google이 Gemini의 답변을 반환하는 것을 거부하는 이유에는 다음과 같은 이유가 있습니다:",
"知识库": "knowledge base",
"知识库文件": "knowledge base 파일",
"立即重启": "지금 재시작",
"第一条提问": "첫 번째 질문",
"索引已保存至本地!": "로컬에 인덱스가 저장되었습니다!",
"索引构建失败!": "인덱스 빌드 실패했습니다!",
"索引构建完成": "인덱스 구축이 완료되었습니다.",
"索引构建完成!": "인덱스가 구축되었습니다!",
"网络": "네트워크",
"获取API使用情况失败:": "API 사용량 가져오기 실패:",
"获取IP地理位置失败。原因:": "다음과 같은 이유로 IP 위치를 가져올 수 없습니다. 이유: ",
"获取对话时发生错误,请查看后台日志": "대화를 가져오는 중 에러가 발생했습니다. 백그라운드 로그를 확인하세요",
"覆盖gradio.oauth /logout路由": "gradio.oauth/logout 경로를 덮어쓰세요.",
"训练": "학습",
"训练状态": "학습 상태",
"训练轮数(Epochs)": "학습 Epochs",
"设置": "설정",
"设置保存文件名": "저장 파일명 설정",
"设置完成。现在请重启本程序。": "앱 설정이 완료되었습니다. 이제 앱을 다시 시작해주세요.",
"设置文件名: 默认为.json,可选为.md": "파일 이름 설정: 기본값: .json, 선택: .md",
"识别公式": "formula OCR",
"详情": "상세",
"请先输入用户名,输入空行结束添加用户:": "사용자 이름을 먼저 입력하고 사용자 추가를 완료하려면 빈 줄을 입력하세요:",
"请先选择Ollama后端模型\\n\\n": "Ollama 후단 모델을 먼저 선택하십시오.",
"请查看 config_example.json,配置 Azure OpenAI": "Azure OpenAI 설정을 확인하세요",
"请检查网络连接,或者API-Key是否有效。": "네트워크 연결 또는 API키가 유효한지 확인하세요",
"请输入 ": "입력하십시오",
"请输入 HTTP 代理地址:": "HTTP 프록시 주소를 입력하세요.",
"请输入 OpenAI API 密钥:": "OpenAI API 키를 입력하십시오:",
"请输入密码:": "비밀번호를 입력하십시오:",
"请输入对话内容。": "대화 내용을 입력하세요.",
"请输入有效的文件名,不要包含以下特殊字符:": "유효한 파일 이름을 입력하세요. 다음 특수 문자를 포함하지 마세요: ",
"读取超时,无法获取对话。": "읽기 시간 초과, 대화를 가져올 수 없습니다.",
"账单信息不适用": "청구 정보를 가져올 수 없습니다",
"跳过设置 HTTP 代理。": "HTTP 프록시 설정을 건너뛰세요.",
"跳过设置 OpenAI API 密钥。": "OpenAI API 키 설정을 건너뛸까요.",
"输入 Yes(y) 或 No(n),默认No:": "예(y)나 아니오(n)를 입력하십시오. 기본값은 아니오입니다.",
"连接超时,无法获取对话。": "연결 시간 초과, 대화를 가져올 수 없습니다.",
"退出用户": "사용자 로그 아웃",
"选择LoRA模型": "LoRA 모델 선택",
"选择Prompt模板集合文件": "프롬프트 콜렉션 파일 선택",
"选择回复语言(针对搜索&索引功能)": "답장 언어 선택 (검색 & 인덱스용)",
"选择数据集": "데이터셋 선택",
"选择模型": "모델 선택",
"配置已保存在 config.json 中。": "구성은 config.json 파일에 저장되어 있습니다.",
"释放文件以上传": "파일을 놓아 업로드",
"重命名该对话": "대화 이름 변경",
"重新生成": "재생성",
"高级": "고급",
",本次对话累计消耗了 ": ",이 대화의 전체 비용은 ",
",请使用 .pdf, .docx, .pptx, .epub, .xlsx 等文档。": ".pdf, .docx, .pptx, .epub, .xlsx 등의 문서를 사용해주세요.",
",输入空行结束:": "입력하려면 빈 줄을 입력하십시오.",
",默认为 ": "기본값:",
":": "원하시는 내용이 없습니다.",
"💾 保存对话": "💾 대화 저장",
"📝 导出为 Markdown": "📝 Markdown으로 내보내기",
"🔄 切换API地址": "🔄 API 주소 변경",
"🔄 刷新": "🔄 새로고침",
"🔄 检查更新...": "🔄 업데이트 확인...",
"🔄 设置代理地址": "🔄 프록시 주소 설정",
"🔄 重新生成": "🔄 재생성",
"🔙 恢复默认网络设置": "🔙 네트워크 설정 초기화",
"🗑️ 删除最新对话": "🗑️ 최신 대화 삭제",
"🗑️ 删除最旧对话": "🗑️ 가장 오래된 대화 삭제",
"🧹 新的对话": "🧹 새로운 대화",
"gpt5_description": "도메인 전반의 코딩 및 에이전트 작업에 최적화된 최고 성능 모델. 400,000 토큰 컨텍스트와 최대 128,000 토큰 출력 지원.",
"gpt5mini_description": "명확히 정의된 작업을 위한 더 빠르고 비용 효율적인 GPT-5 버전. 400,000 토큰 컨텍스트와 최대 128,000 토큰 출력 지원.",
"gpt5nano_description": "가장 빠르고 비용 효율이 가장 높은 GPT-5 버전. 400,000 토큰 컨텍스트와 최대 128,000 토큰 출력 지원.",
"no_permission_to_update_description": "업데이트할 수 있는 권한이 없습니다. 관리자에게 문의하세요. 관리자는 구성 파일 config.json의 admin_list에 사용자 아이디를 추가하여 구성합니다."
}
================================================
FILE: locale/ru_RU.json
================================================
{
"获取资源错误": "Ошибка при получении ресурса",
"该模型不支持多模态输入": "Эта модель не поддерживает многомодальный ввод",
" 中。": "в центре.",
" 为: ": "Язык:",
" 吗?": " ?",
"# ⚠️ 务必谨慎更改 ⚠️": "# ⚠️ ВНИМАНИЕ: ИЗМЕНЯЙТЕ ОСТОРОЖНО ⚠️",
"**发送消息** 或 **提交key** 以显示额度": "**Отправить сообщение** или **отправить ключ** для отображения лимита",
"**本月使用金额** ": "**Использовано средств в этом месяце**",
"**获取API使用情况失败**": "**Не удалось получить информацию об использовании API**",
"**获取API使用情况失败**,sensitive_id错误或已过期": "**Не удалось получить информацию об использовании API**, ошибка sensitive_id или истек срок действия",
"**获取API使用情况失败**,需在填写`config.json`中正确填写sensitive_id": "**Не удалось получить информацию об использовании API**, необходимо правильно заполнить sensitive_id в `config.json`",
"== API 配置 ==": "== Настройка API ==",
"== 基础配置 ==": "== Basic settings ==",
"== 高级配置 ==": "== Advanced settings ==",
"API key为空,请检查是否输入正确。": "Пустой API-Key, пожалуйста, проверьте правильность ввода.",
"API密钥更改为了": "Ключ API изменен на",
"IP地址信息正在获取中,请稍候...": "Информация об IP-адресе загружается, пожалуйста, подождите...",
"JSON解析错误,收到的内容: ": "Ошибка анализа JSON, полученный контент:",
"SSL错误,无法获取对话。": "Ошибка SSL, не удалось получить диалог.",
"Token 计数: ": "Использованно токенов: ",
"☹️发生了错误:": "☹️ Произошла ошибка:",
"⚠️ 为保证API-Key安全,请在配置文件`config.json`中修改网络设置": "⚠️ Для обеспечения безопасности API-Key, измените настройки сети в файле конфигурации `config.json`",
"⚠️请先删除知识库中的历史文件,再尝试上传!": "⚠️ Сначала удалите исторические файлы из базы знаний, а затем попробуйте загрузить!",
"。": "。",
"。你仍然可以使用聊天功能。": ". Вы все равно можете использовать функцию чата.",
"上传": "Загрузить",
"上传了": "Загрузка завершена.",
"上传到 OpenAI 后自动填充": "Автоматическое заполнение после загрузки в OpenAI",
"上传到OpenAI": "Загрузить в OpenAI",
"上传文件": "Загрузить файл",
"不支持的文件: ": "Неподдерживаемый файл:",
"中。": "Центр.",
"中,包含了可用设置项及其简要说明。请查看 wiki 获取更多信息:": "На стороне клиента API, после того как на клиенте будет создан HELP_BASE64_JS событие, нужно отправить идентификатор события на сервер для обработки.",
"仅供查看": "Только для просмотра",
"从Prompt模板中加载": "Загрузить из шаблона Prompt",
"从列表中加载对话": "Загрузить диалог из списка",
"代理地址": "Адрес прокси",
"代理错误,无法获取对话。": "Ошибка прокси, не удалось получить диалог.",
"你没有权限访问 GPT4,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)": "У вас нет доступа к GPT4, [подробнее](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)",
"你没有选择任何对话历史": "Вы не выбрали никакой истории переписки",
"你的": "Your",
"你真的要删除 ": "Вы уверены, что хотите удалить ",
"你设置了 ": "Вы установили.",
"你选择了不设置 ": "Вы выбрали не устанавливать",
"你选择了不设置用户账户。": "Вы выбрали не создавать учетную запись пользователя.",
"使用在线搜索": "Использовать онлайн-поиск",
"停止符,用英文逗号隔开...": "Разделительные символы, разделенные запятой...",
"关于": "О программе",
"关闭": "Закрыть",
"准备数据集": "Подготовка набора данных",
"切换亮暗色主题": "Переключить светлую/темную тему",
"删除对话历史成功": "Успешно удалена история переписки.",
"删除这轮问答": "Удалить этот раунд вопросов и ответов",
"刷新状态": "Обновить статус",
"剩余配额不足,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98#you-exceeded-your-current-quota-please-check-your-plan-and-billing-details)": "剩余配额不足,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98#you-exceeded-your-current-quota-please-check-your-plan-and-billing-details)",
"加载Prompt模板": "Загрузить шаблон Prompt",
"单轮对话": "Одиночный диалог",
"历史记录(JSON)": "Файл истории (JSON)",
"参数": "Параметры",
"双栏pdf": "Двухколоночный PDF",
"取消": "Отмена",
"取消所有任务": "Отменить все задачи",
"可选,用于区分不同的模型": "Необязательно, используется для различения разных моделей",
"启用的工具:": "Включенные инструменты:",
"在": "в",
"在工具箱中管理知识库文件": "Управление файлами базы знаний в инструментах",
"在线搜索": "Онлайн-поиск",
"在这里输入": "Введите здесь",
"在这里输入System Prompt...": "Введите здесь системное подсказку...",
"多账号模式已开启,无需输入key,可直接开始对话": "Режим множественных аккаунтов включен, не требуется ввод ключа, можно сразу начать диалог",
"好": "Хорошо",
"实时传输回答": "Передача ответа в реальном времени",
"对话": "Диалог",
"对话历史": "Диалоговая история",
"对话历史记录": "История диалога",
"对话命名方式": "Способ названия диалога",
"导出为 Markdown": "Экспортировать в Markdown",
"川虎Chat": "Chuanhu Чат",
"川虎Chat 🚀": "Chuanhu Чат 🚀",
"工具箱": "Инструменты",
"已经被删除啦": "Уже удалено.",
"开始实时传输回答……": "Начните трансляцию ответов в режиме реального времени...",
"开始训练": "Начать обучение",
"微调": "Своя модель",
"总结": "Подведение итога",
"总结完成": "Готово",
"您使用的就是最新版!": "Вы используете последнюю версию!",
"您的IP区域:": "Ваша IP-зона:",
"您的IP区域:未知。": "Ваша IP-зона: неизвестно.",
"您输入的 API 密钥为:": "Ваш API ключ:",
"找到了缓存的索引文件,加载中……": "Индексный файл кэша найден, загрузка…",
"拓展": "Расширенные настройки",
"搜索(支持正则)...": "Поиск (поддержка регулярности)...",
"数据集预览": "Предпросмотр набора данных",
"文件ID": "Идентификатор файла",
"新对话 ": "Новый диалог ",
"新建对话保留Prompt": "Создать диалог с сохранением подсказки",
"是否设置 HTTP 代理?[Y/N]:": "Нужно установить HTTP-прокси? [Д/Н]:",
"是否设置 OpenAI API 密钥?[Y/N]:": "Установить ключ API OpenAI? [Д/Н]:",
"是否设置用户账户?设置后,用户需要登陆才可访问。输入 Yes(y) 或 No(n),默认No:": "Вы хотите установить учетную запись пользователя? После установки пользователь должен войти в систему, чтобы получить доступ. Введите Да(д) или Нет(н), по умолчанию Нет:",
"暂时未知": "Временно неизвестно",
"更新": "Обновить",
"更新失败,请尝试[手动更新](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)": "Обновление не удалось, пожалуйста, попробуйте обновить вручную",
"更新成功,请重启本程序": "Обновление успешно, пожалуйста, перезапустите программу",
"未命名对话历史记录": "Безымянная история диалога",
"未设置代理...": "Прокси не настроен...",
"本月使用金额": "Использовано средств в этом месяце",
"构建索引中……": "Построение индекса…",
"查看[使用介绍](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35)": "[Здесь](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35) можно ознакомиться с инструкцией по использованию",
"根据日期时间": "По дате и времени",
"模型": "Модель",
"模型名称后缀": "Суффикс имени модели",
"模型自动总结(消耗tokens)": "Автоматическое подведение итогов модели (потребление токенов)",
"模型设置为了:": "Модель настроена на:",
"正在尝试更新...": "Попытка обновления...",
"正在尝试重启...": "Попытка перезапуска...",
"正在获取IP地址信息,请稍候...": "Получение информации об IP-адресе, пожалуйста, подождите...",
"正在进行首次设置,请按照提示进行配置,配置将会被保存在": "Выполняется первоначальная настройка, следуйте подсказкам для настройки, результаты будут сохранены.",
"没有找到任何支持的文档。": "Документация не найдена.",
"添加训练好的模型到模型列表": "Добавить обученную модель в список моделей",
"状态": "Статус",
"现在开始设置其他在线模型的API Key": "Укажите ключ API для других онлайн моделей.",
"现在开始进行交互式配置。碰到不知道该怎么办的设置项时,请直接按回车键跳过,程序会自动选择合适的默认值。": "Проводится интерактивная настройка. Для пропуска непонятных параметров просто нажмите Enter, программа автоматически выберет соответствующее значение по умолчанию.",
"现在开始进行交互式配置:": "Теперь начнется интерактивная настройка:",
"现在开始进行软件功能设置": "Настройка функций программы начата.",
"生成内容总结中……": "Создание сводки контента...",
"用于定位滥用行为": "Используется для выявления злоупотреблений",
"用户标识符": "Идентификатор пользователя",
"由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536)、[明昭MZhao](https://space.bilibili.com/24807452) 和 [Keldos](https://github.com/Keldos-Li) 开发<br />访问川虎Chat的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本": "Разработано [土川虎虎虎](https://space.bilibili.com/29125536), [明昭MZhao](https://space.bilibili.com/24807452) и [Keldos](https://github.com/Keldos-Li).<br />посетите [GitHub Project](https://github.com/GaiZhenbiao/ChuanhuChatGPT) чата Chuanhu, чтобы загрузить последнюю версию скрипта",
"由于下面的原因,Google 拒绝返回 Gemini 的回答:\\n\\n": "Из-за указанных причин Google отказывается возвращать ответ от Gemini:",
"知识库": "База знаний",
"知识库文件": "Файл базы знаний",
"立即重启": "Перезапустить сейчас",
"第一条提问": "Первый вопрос",
"索引已保存至本地!": "Индекс сохранен локально!",
"索引构建失败!": "Индексация не удалась!",
"索引构建完成": "Индексирование завершено.",
"索引构建完成!": "Индекс построен!",
"网络": "Параметры сети",
"获取API使用情况失败:": "Не удалось получитьAPIинформацию об использовании:",
"获取IP地理位置失败。原因:": "Не удалось получить географическое положение IP. Причина:",
"获取对话时发生错误,请查看后台日志": "Возникла ошибка при получении диалога, пожалуйста, проверьте журналы",
"覆盖gradio.oauth /logout路由": "Перепишите маршрут gradio.oauth/logout.",
"训练": "Обучение",
"训练状态": "Статус обучения",
"训练轮数(Epochs)": "Количество эпох обучения",
"设置": "Настройки",
"设置保存文件名": "Установить имя сохраняемого файла",
"设置完成。现在请重启本程序。": "Настройки завершены. Пожалуйста, перезапустите приложение.",
"设置文件名: 默认为.json,可选为.md": "Установить имя файла: по умолчанию .json, можно выбрать .md",
"识别公式": "Распознавание формул",
"详情": "Подробности",
"请先输入用户名,输入空行结束添加用户:": "Пожалуйста, введите имя пользователя. Для завершения добавления пользователя оставьте строку пустой.",
"请先选择Ollama后端模型\\n\\n": "Пожалуйста, выберите модель Ollama для бэкэнда.",
"请查看 config_example.json,配置 Azure OpenAI": "Пожалуйста, просмотрите config_example.json для настройки Azure OpenAI",
"请检查网络连接,或者API-Key是否有效。": "Проверьте подключение к сети или действительность API-Key.",
"请输入 ": "Введите",
"请输入 HTTP 代理地址:": "Введите адрес HTTP-прокси:",
"请输入 OpenAI API 密钥:": "Введите ключ API OpenAI:",
"请输入密码:": "Введите пароль:",
"请输入对话内容。": "Пожалуйста, введите содержание диалога.",
"请输入有效的文件名,不要包含以下特殊字符:": "Введите действительное имя файла, не содержащее следующих специальных символов: ",
"读取超时,无法获取对话。": "Тайм-аут чтения, не удалось получить диалог.",
"账单信息不适用": "Информация о счете не применима",
"跳过设置 HTTP 代理。": "Пропустить настройку HTTP-прокси.",
"跳过设置 OpenAI API 密钥。": "Пропустить настройку ключа API OpenAI.",
"输入 Yes(y) 或 No(n),默认No:": "Введите Да(д) или Нет(н), по умолчанию Нет:",
"连接超时,无法获取对话。": "Тайм-аут подключения, не удалось получить диалог.",
"退出用户": "Пользователь вышел",
"选择LoRA模型": "Выберите модель LoRA",
"选择Prompt模板集合文件": "Выберите файл с набором шаблонов Prompt",
"选择回复语言(针对搜索&索引功能)": "Выберите язык ответа (для функций поиска и индексации)",
"选择数据集": "Выберите набор данных",
"选择模型": "Выберите модель",
"配置已保存在 config.json 中。": "Конфигурация сохранена в файле config.json.",
"释放文件以上传": "Отпустите файл для загрузки",
"重命名该对话": "Переименовать этот диалог",
"重新生成": "Пересоздать",
"高级": "Расширенные настройки",
",本次对话累计消耗了 ": ", Общая стоимость этого диалога составляет ",
",请使用 .pdf, .docx, .pptx, .epub, .xlsx 等文档。": "Пожалуйста, используйте файлы .pdf, .docx, .pptx, .epub, .xlsx и т. д.",
",输入空行结束:": "Введите пустую строку, чтобы завершить:",
",默认为 ": "По умолчанию",
":": ":",
"💾 保存对话": "💾 Сохранить диалог",
"📝 导出为 Markdown": "📝 Экспортировать в Markdown",
"🔄 切换API地址": "🔄 Переключить адрес API",
"🔄 刷新": "🔄 Обновить",
"🔄 检查更新...": "🔄 Проверить обновления...",
"🔄 设置代理地址": "🔄 Установить адрес прокси",
"🔄 重新生成": "🔄 Пересоздать",
"🔙 恢复默认网络设置": "🔙 Восстановить настройки сети по умолчанию",
"🗑️ 删除最新对话": "🗑️ Удалить последний диалог",
"🗑️ 删除最旧对话": "🗑️ Удалить старейший диалог",
"🧹 新的对话": "🧹 Новый диалог",
"gpt5_description": "Лучшая модель для кодинга и агентных задач в разных доменах. Контекст 400 000 токенов и до 128 000 токенов на вывод.",
"gpt5mini_description": "Более быстрая и экономичная версия GPT-5 для четко определенных задач. Контекст 400 000 токенов и до 128 000 токенов на вывод.",
"gpt5nano_description": "Самая быстрая и наименее затратная версия GPT-5. Контекст 400 000 токенов и до 128 000 токенов на вывод.",
"no_permission_to_update_description": "У вас нет разрешения на обновление. Пожалуйста, свяжитесь с администратором. Администратор настраивается путем добавления имени пользователя в список admin_list в файле config.json."
}
================================================
FILE: locale/sv_SE.json
================================================
{
"获取资源错误": "Fel vid hämtning av resurser",
"该模型不支持多模态输入": "Den här modellen stöder inte multitmodal inmatning.",
" 中。": "Mitten.",
" 为: ": "För:",
" 吗?": " ?",
"# ⚠️ 务必谨慎更改 ⚠️": "# ⚠️ Var försiktig med ändringar. ⚠️",
"**发送消息** 或 **提交key** 以显示额度": "**Skicka meddelande** eller **Skicka in nyckel** för att visa kredit",
"**本月使用金额** ": "**Månadens användning** ",
"**获取API使用情况失败**": "**Misslyckades med att hämta API-användning**",
"**获取API使用情况失败**,sensitive_id错误或已过期": "**Misslyckades med att hämta API-användning**, felaktig eller utgången sensitive_id",
"**获取API使用情况失败**,需在填写`config.json`中正确填写sensitive_id": "**Misslyckades med att hämta API-användning**, korrekt sensitive_id behövs i `config.json`",
"== API 配置 ==": "== API-inställningar ==",
"== 基础配置 ==": "== Grundläggande konfiguration ==",
"== 高级配置 ==": "== Avancerade inställningar ==",
"API key为空,请检查是否输入正确。": "API-nyckeln är tom, kontrollera om den är korrekt inmatad.",
"API密钥更改为了": "API-nyckeln har ändrats till",
"IP地址信息正在获取中,请稍候...": "IP-adressinformation hämtas, vänligen vänta...",
"JSON解析错误,收到的内容: ": "JSON-tolkningsfel, mottaget innehåll: ",
"SSL错误,无法获取对话。": "SSL-fel, kunde inte hämta dialogen.",
"Token 计数: ": "Tokenräkning: ",
"☹️发生了错误:": "☹️Fel: ",
"⚠️ 为保证API-Key安全,请在配置文件`config.json`中修改网络设置": "⚠️ För att säkerställa säkerheten för API-nyckeln, vänligen ändra nätverksinställningarna i konfigurationsfilen `config.json`.",
"⚠️请先删除知识库中的历史文件,再尝试上传!": "⚠️ Ta bort historikfilen i kunskapsbanken innan du försöker ladda upp!",
"。": "。",
"。你仍然可以使用聊天功能。": ". Du kan fortfarande använda chattfunktionen.",
"上传": "Ladda upp",
"上传了": "Uppladdad",
"上传到 OpenAI 后自动填充": "Automatiskt ifylld efter uppladdning till OpenAI",
"上传到OpenAI": "Ladda upp till OpenAI",
"上传文件": "ladda upp fil",
"不支持的文件: ": "Ogiltig fil:",
"中。": "Mellan.",
"中,包含了可用设置项及其简要说明。请查看 wiki 获取更多信息:": "I, innehåller tillgängliga inställningsalternativ och deras korta beskrivningar. Besök wikin för mer information:",
"仅供查看": "Endast för visning",
"从Prompt模板中加载": "Ladda från Prompt-mall",
"从列表中加载对话": "Ladda dialog från lista",
"代理地址": "Proxyadress",
"代理错误,无法获取对话。": "Proxyfel, kunde inte hämta dialogen.",
"你没有权限访问 GPT4,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)": "Du har inte behörighet att komma åt GPT-4, [läs mer](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)",
"你没有选择任何对话历史": "Du har inte valt någon konversationshistorik.",
"你的": "Din",
"你真的要删除 ": "Är du säker på att du vill ta bort ",
"你设置了 ": "Du har ställt in.",
"你选择了不设置 ": "Du har valt att inte ställa in",
"你选择了不设置用户账户。": "Du har valt att inte skapa ett användarkonto.",
"使用在线搜索": "Använd online-sökning",
"停止符,用英文逗号隔开...": "Skriv in stopptecken här, separerade med kommatecken...",
"关于": "om",
"关闭": "Stäng",
"准备数据集": "Förbered dataset",
"切换亮暗色主题": "Byt ljus/mörk tema",
"删除对话历史成功": "Raderade konversationens historik.",
"删除这轮问答": "Ta bort denna omgång av Q&A",
"刷新状态": "Uppdatera status",
"剩余配额不足,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98#you-exceeded-your-current-quota-please-check-your-plan-and-billing-details)": "Återstående kvot är otillräcklig, [läs mer](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%C3%84mnen)",
"加载Prompt模板": "Ladda Prompt-mall",
"单轮对话": "Enkel dialog",
"历史记录(JSON)": "Historikfil (JSON)",
"参数": "Parametrar",
"双栏pdf": "Två-kolumns pdf",
"取消": "Avbryt",
"取消所有任务": "Avbryt alla uppgifter",
"可选,用于区分不同的模型": "Valfritt, används för att särskilja olika modeller",
"启用的工具:": "Aktiverade verktyg: ",
"在": "på",
"在工具箱中管理知识库文件": "hantera kunskapsbankfiler i verktygslådan",
"在线搜索": "onlinesökning",
"在这里输入": "Skriv in här",
"在这里输入System Prompt...": "Skriv in System Prompt här...",
"多账号模式已开启,无需输入key,可直接开始对话": "Flerkontoläge är aktiverat, ingen nyckel behövs, du kan starta dialogen direkt",
"好": "OK",
"实时传输回答": "Strömmande utdata",
"对话": "konversation",
"对话历史": "Dialoghistorik",
"对话历史记录": "Dialoghistorik",
"对话命名方式": "Dialognamn",
"导出为 Markdown": "Exportera som Markdown",
"川虎Chat": "Chuanhu Chat",
"川虎Chat 🚀": "Chuanhu Chat 🚀",
"工具箱": "verktygslåda",
"已经被删除啦": "Har raderats.",
"开始实时传输回答……": "Börjar strömma utdata...",
"开始训练": "Börja träning",
"微调": "Finjustering",
"总结": "Sammanfatta",
"总结完成": "Slutfört sammanfattningen.",
"您使用的就是最新版!": "Du använder den senaste versionen!",
"您的IP区域:": "Din IP-region: ",
"您的IP区域:未知。": "Din IP-region: Okänd.",
"您输入的 API 密钥为:": "Den API-nyckel du angav är:",
"找到了缓存的索引文件,加载中……": "Hittade cachad indexfil, laddar...",
"拓展": "utvidgning",
"搜索(支持正则)...": "Sök (stöd för reguljära uttryck)...",
"数据集预览": "Datasetförhandsvisning",
"文件ID": "Fil-ID",
"新对话 ": "Ny dialog ",
"新建对话保留Prompt": "Skapa ny konversation med bevarad Prompt",
"是否设置 HTTP 代理?[Y/N]:": "Vill du ställa in en HTTP-proxy? [J/N]:",
"是否设置 OpenAI API 密钥?[Y/N]:": "Vill du ställa in OpenAI API-nyckeln? [J/N]:",
"是否设置用户账户?设置后,用户需要登陆才可访问。输入 Yes(y) 或 No(n),默认No:": "Vill du skapa ett användarkonto? Användaren måste logga in för att få åtkomst. Ange Ja(j) eller Nej(n), standard är Nej:",
"暂时未知": "Okänd",
"更新": "Uppdatera",
"更新失败,请尝试[手动更新](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)": "Uppdateringen misslyckades, prova att [uppdatera manuellt](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)",
"更新成功,请重启本程序": "Uppdateringen lyckades, starta om programmet",
"未命名对话历史记录": "Namnlös dialoghistorik",
"未设置代理...": "Ingen proxy inställd...",
"本月使用金额": "Månadens användning",
"构建索引中……": "Bygger index ...",
"查看[使用介绍](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35)": "Se [användarguiden](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35) för mer information",
"根据日期时间": "Enligt datum och tid",
"模型": "Modell",
"模型名称后缀": "Modellnamnstillägg",
"模型自动总结(消耗tokens)": "Modellens automatiska sammanfattning (förbrukar tokens)",
"模型设置为了:": "Modellen är inställd på: ",
"正在尝试更新...": "Försöker uppdatera...",
"正在尝试重启...": "Försöker starta om...",
"正在获取IP地址信息,请稍候...": "Hämtar IP-adressinformation, vänta...",
"正在进行首次设置,请按照提示进行配置,配置将会被保存在": "Du håller på med första inställningen, följ anvisningarna för att konfigurera. Konfigurationen sparas i",
"没有找到任何支持的文档。": "Inga dokument som stöds hittades.",
"添加训练好的模型到模型列表": "Lägg till tränad modell i modellistan",
"状态": "Status",
"现在开始设置其他在线模型的API Key": "Ställ nu in API-nycklar för andra onlinemodeller.",
"现在开始进行交互式配置。碰到不知道该怎么办的设置项时,请直接按回车键跳过,程序会自动选择合适的默认值。": "Interaktiv konfiguration påbörjas nu. Om du stöter på en inställning som du inte vet hur du ska hantera, tryck bara på Retur-tangenten för att hoppa över den. Programmet kommer automatiskt välja lämpligt standardvärde.",
"现在开始进行交互式配置:": "Interaktiv konfiguration påbörjas nu:",
"现在开始进行软件功能设置": "Nu börjar du konfigurera programfunktionaliteten.",
"生成内容总结中……": "Genererar innehållssammanfattning...",
"用于定位滥用行为": "Används för att lokalisera missbruk",
"用户标识符": "Användar-ID",
"由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536)、[明昭MZhao](https://space.bilibili.com/24807452) 和 [Keldos](https://github.com/Keldos-Li) 开发<br />访问川虎Chat的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本": "Utvecklad av Bilibili [土川虎虎虎](https://space.bilibili.com/29125536), [明昭MZhao](https://space.bilibili.com/24807452) och [Keldos](https://github.com/Keldos-Li)\n\nLadda ner senaste koden från [GitHub](https://github.com/GaiZhenbiao/ChuanhuChatGPT)",
"由于下面的原因,Google 拒绝返回 Gemini 的回答:\\n\\n": "Google vägrade att returnera Geminis svar av följande skäl:\\n\\n",
"知识库": "kunskapsbank",
"知识库文件": "kunskapsbankfil",
"立即重启": "Starta om nu",
"第一条提问": "Första frågan",
"索引已保存至本地!": "Index har sparats lokalt!",
"索引构建失败!": "Indexeringen misslyckades!",
"索引构建完成": "Indexeringen är klar",
"索引构建完成!": "Indexeringen är klar!",
"网络": "nätverk",
"获取API使用情况失败:": "Misslyckades med att hämta API-användning:",
"获取IP地理位置失败。原因:": "Misslyckades med att hämta IP-plats. Orsak: ",
"获取对话时发生错误,请查看后台日志": "Ett fel uppstod när dialogen hämtades, kontrollera bakgrundsloggen",
"覆盖gradio.oauth /logout路由": "Skriver över rutten gradio.oauth /logout.",
"训练": "träning",
"训练状态": "Träningsstatus",
"训练轮数(Epochs)": "Träningsomgångar (Epochs)",
"设置": "inställningar",
"设置保存文件名": "Ställ in sparfilnamn",
"设置完成。现在请重启本程序。": "Inställningarna är klara. Vänligen starta om programmet nu.",
"设置文件名: 默认为.json,可选为.md": "Ställ in filnamn: standard är .json, valfritt är .md",
"识别公式": "Formel OCR",
"详情": "Detaljer",
"请先输入用户名,输入空行结束添加用户:": "Ange först användarnamn; en tom rad avslutar tillägget av användare:",
"请先选择Ollama后端模型\\n\\n": "Vänligen välj först Ollama-backendmodellen\\n\\n",
"请查看 config_example.json,配置 Azure OpenAI": "Vänligen granska config_example.json för att konfigurera Azure OpenAI",
"请检查网络连接,或者API-Key是否有效。": "Kontrollera nätverksanslutningen eller om API-nyckeln är giltig.",
"请输入 ": "Ange ",
"请输入 HTTP 代理地址:": "Ange HTTP-proxyadressen:",
"请输入 OpenAI API 密钥:": "Ange OpenAI API-nyckel:",
"请输入密码:": "Ange lösenord:",
"请输入对话内容。": "Ange dialoginnehåll.",
"请输入有效的文件名,不要包含以下特殊字符:": "Ange ett giltigt filnamn, använd inte följande specialtecken: ",
"读取超时,无法获取对话。": "Läsningen tog för lång tid, kunde inte hämta dialogen.",
"账单信息不适用": "Faktureringsinformation är inte tillämplig",
"跳过设置 HTTP 代理。": "Hoppa över inställning av HTTP-proxy.",
"跳过设置 OpenAI API 密钥。": "Hoppa över att ange OpenAI API-nyckel.",
"输入 Yes(y) 或 No(n),默认No:": "Ange Ja(j) eller Nej(n), standard är Nej:",
"连接超时,无法获取对话。": "Anslutningen tog för lång tid, kunde inte hämta dialogen.",
"退出用户": "Logga ut användaren",
"选择LoRA模型": "Välj LoRA Modell",
"选择Prompt模板集合文件": "Välj Prompt-mall Samlingsfil",
"选择回复语言(针对搜索&索引功能)": "Välj svarspråk (för sök- och indexfunktion)",
"选择数据集": "Välj dataset",
"选择模型": "Välj Modell",
"配置已保存在 config.json 中。": "Inställningarna har sparats i config.json.",
"释放文件以上传": "Släpp filen för att ladda upp",
"重命名该对话": "Byt namn på dialogen",
"重新生成": "Återgenerera",
"高级": "Avancerat",
",本次对话累计消耗了 ": ", total kostnad för denna dialog är ",
",请使用 .pdf, .docx, .pptx, .epub, .xlsx 等文档。": ", använd .pdf, .docx, .pptx, .epub, .xlsx eller liknande dokument.",
",输入空行结束:": ", ange en tom rad för att avsluta:",
",默认为 ": ", standard är ",
":": ":",
"💾 保存对话": "💾 Spara Dialog",
"📝 导出为 Markdown": "📝 Exportera som Markdown",
"🔄 切换API地址": "🔄 Byt API-adress",
"🔄 刷新": "🔄 Uppdatera",
"🔄 检查更新...": "🔄 Sök efter uppdateringar...",
"🔄 设置代理地址": "🔄 Ställ in Proxyadress",
"🔄 重新生成": "🔄 Regenerera",
"🔙 恢复默认网络设置": "🔙 Återställ standardnätverksinställningar",
"🗑️ 删除最新对话": "🗑️ Ta bort senaste dialogen",
"🗑️ 删除最旧对话": "🗑️ Ta bort äldsta dialogen",
"🧹 新的对话": "🧹 Ny Dialog",
"gpt5_description": "Den bästa modellen för kodning och agentuppgifter över domäner. 400 000 tokens kontextfönster och upp till 128 000 tokens utdata.",
"gpt5mini_description": "En snabbare och mer kostnadseffektiv version av GPT‑5 för väldefinierade uppgifter. 400 000 tokens kontextfönster och upp till 128 000 tokens utdata.",
"gpt5nano_description": "Den snabbaste och mest kostnadseffektiva versionen av GPT‑5. 400 000 tokens kontextfönster och upp till 128 000 tokens utdata.",
"no_permission_to_update_description": "Du har inte behörighet att uppdatera. Vänligen kontakta administratören. Administratören konfigureras genom att lägga till användarnamnet i admin_list i konfigurationsfilen config.json."
}
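These locale files map zh_CN source strings to translated UI text, and a string missing from a locale file should fall back to the original. A minimal sketch of such a lookup in Python (the helper names `load_locale` and `i18n` are illustrative only, not the actual API of `modules/webui_locale.py`):

```python
import json

def load_locale(path):
    # Each locale file is a flat JSON object mapping a zh_CN
    # source string to its translation.
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def i18n(msg, table):
    # Missing keys fall back to the untranslated source string,
    # so an incomplete locale file degrades gracefully.
    return table.get(msg, msg)

table = {"关闭": "Stäng", "你设置了 ": "Du har ställt in "}
print(i18n("关闭", table))       # → Stäng
print(i18n("未知字符串", table))  # falls back to the key itself
```

Note that several keys are sentence fragments with significant leading or trailing whitespace (e.g. `"你设置了 "`, `",默认为 "`); because the UI concatenates these fragments, translations must preserve that whitespace exactly.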
================================================
FILE: locale/vi_VN.json
================================================
{
"获取资源错误": "Lỗi khi lấy tài nguyên",
"该模型不支持多模态输入": "Mô hình này không hỗ trợ đầu vào đa phương tiện",
" 中。": ".",
" 为: ": " là: ",
" 吗?": " ?",
"# ⚠️ 务必谨慎更改 ⚠️": "# ⚠️ Hãy hết sức thận trọng khi thay đổi ⚠️",
"**发送消息** 或 **提交key** 以显示额度": "**Gửi tin nhắn** hoặc **Gửi khóa(key)** để hiển thị số dư",
"**本月使用金额** ": "**Số tiền sử dụng trong tháng** ",
"**获取API使用情况失败**": "**Lỗi khi lấy thông tin sử dụng API**",
"**获取API使用情况失败**,sensitive_id错误或已过期": "**Lỗi khi lấy thông tin sử dụng API**, sensitive_id sai hoặc đã hết hạn",
"**获取API使用情况失败**,需在填写`config.json`中正确填写sensitive_id": "**Lỗi khi lấy thông tin sử dụng API**, cần điền đúng sensitive_id trong tệp `config.json`",
"== API 配置 ==": "== Cấu hình API ==",
"== 基础配置 ==": "== Cấu hình cơ bản ==",
"== 高级配置 ==": "== Cấu hình Nâng cao ==",
"API key为空,请检查是否输入正确。": "Khóa API trống, vui lòng kiểm tra xem đã nhập đúng chưa.",
"API密钥更改为了": "Khóa API đã được thay đổi thành",
"IP地址信息正在获取中,请稍候...": "Đang lấy thông tin địa chỉ IP, vui lòng chờ...",
"JSON解析错误,收到的内容: ": "Lỗi phân tích JSON, nội dung nhận được: ",
"SSL错误,无法获取对话。": "Lỗi SSL, không thể nhận cuộc trò chuyện.",
"Token 计数: ": "Số lượng Token: ",
"☹️发生了错误:": "☹️Lỗi: ",
"⚠️ 为保证API-Key安全,请在配置文件`config.json`中修改网络设置": "⚠️ Để đảm bảo an toàn cho API-Key, vui lòng chỉnh sửa cài đặt mạng trong tệp cấu hình `config.json`.",
"⚠️请先删除知识库中的历史文件,再尝试上传!": "⚠️ Vui lòng xóa tệp lịch sử trong cơ sở kiến thức trước khi tải lên!",
"。": "。",
"。你仍然可以使用聊天功能。": ". Bạn vẫn có thể sử dụng chức năng trò chuyện.",
"上传": "Tải lên",
"上传了": "Đã tải lên",
"上传到 OpenAI 后自动填充": "Tự động điền sau khi tải lên OpenAI",
"上传到OpenAI": "Tải lên OpenAI",
"上传文件": "Tải lên tệp",
"不支持的文件: ": "Tệp không được hỗ trợ: ",
"中。": ".",
"中,包含了可用设置项及其简要说明。请查看 wiki 获取更多信息:": "Trong đó chứa các mục cài đặt có sẵn và mô tả ngắn gọn của chúng. Vui lòng xem wiki để biết thêm thông tin:",
"仅供查看": "Chỉ xem",
"从Prompt模板中加载": "Tải từ mẫu Prompt",
"从列表中加载对话": "Tải cuộc trò chuyện từ danh sách",
"代理地址": "Địa chỉ proxy",
"代理错误,无法获取对话。": "Lỗi proxy, không thể nhận cuộc trò chuyện.",
"你没有权限访问 GPT4,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)": "Bạn không có quyền truy cập GPT-4, [tìm hiểu thêm](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/843)",
"你没有选择任何对话历史": "Bạn chưa chọn bất kỳ lịch sử trò chuyện nào.",
"你的": "Của bạn",
"你真的要删除 ": "Bạn có chắc chắn muốn xóa ",
"你设置了 ": "Bạn đã thiết lập ",
"你选择了不设置 ": "Bạn đã chọn không thiết lập ",
"你选择了不设置用户账户。": "Bạn đã chọn không thiết lập tài khoản người dùng.",
"使用在线搜索": "Sử dụng tìm kiếm trực tuyến",
"停止符,用英文逗号隔开...": "Nhập dấu dừng, cách nhau bằng dấu phẩy...",
"关于": "Về",
"关闭": "Đóng",
"准备数据集": "Chuẩn bị tập dữ liệu",
"切换亮暗色主题": "Chuyển đổi chủ đề sáng/tối",
"删除对话历史成功": "Xóa lịch sử cuộc trò chuyện thành công.",
"删除这轮问答": "Xóa cuộc trò chuyện này",
"刷新状态": "Làm mới tình trạng",
"剩余配额不足,[进一步了解](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98#you-exceeded-your-current-quota-please-check-your-plan-and-billing-details)": "Hạn mức còn lại không đủ, [tìm hiểu thêm](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98#you-exceeded-your-current-quota-please-check-your-plan-and-billing-details)",
"加载Prompt模板": "Tải mẫu Prompt",
"单轮对话": "Cuộc trò chuyện một lượt",
"历史记录(JSON)": "Tệp lịch sử (JSON)",
"参数": "Tham số",
"双栏pdf": "PDF hai cột",
"取消": "Hủy",
"取消所有任务": "Hủy tất cả các nhiệm vụ",
"可选,用于区分不同的模型": "Tùy chọn, sử dụng để phân biệt các mô hình khác nhau",
"启用的工具:": "Công cụ đã bật: ",
"在": "trong",
"在工具箱中管理知识库文件": "Quản lý tệp cơ sở kiến thức trong hộp công cụ",
"在线搜索": "Tìm kiếm trực tuyến",
"在这里输入": "Nhập vào đây",
"在这里输入System Prompt...": "Nhập System Prompt ở đây...",
"多账号模式已开启,无需输入key,可直接开始对话": "Chế độ nhiều tài khoản đã được bật, không cần nhập key, bạn có thể bắt đầu cuộc trò chuyện trực tiếp",
"好": "OK",
"实时传输回答": "Truyền đầu ra trực tiếp",
"对话": "Cuộc trò chuyện",
"对话历史": "Lịch sử cuộc trò chuyện",
"对话历史记录": "Lịch sử Cuộc trò chuyện",
"对话命名方式": "Phương thức đặt tên lịch sử trò chuyện",
"导出为 Markdown": "Xuất ra Markdown",
"川虎Chat": "Chuanhu Chat",
"川虎Chat 🚀": "Chuanhu Chat 🚀",
"工具箱": "Hộp công cụ",
"已经被删除啦": "Đã bị xóa rồi.",
"开始实时传输回答……": "Bắt đầu truyền đầu ra trực tiếp...",
"开始训练": "Bắt đầu đào tạo",
"微调": "Tinh chỉnh",
"总结": "Tóm tắt",
"总结完成": "Hoàn thành tóm tắt",
"您使用的就是最新版!": "Bạn đang sử dụng phiên bản mới nhất!",
"您的IP区域:": "Khu vực IP của bạn: ",
"您的IP区域:未知。": "Khu vực IP của bạn: Không xác định.",
"您输入的 API 密钥为:": "Khóa API bạn đã nhập là:",
"找到了缓存的索引文件,加载中……": "Tìm thấy tập tin chỉ mục của bộ nhớ cache, đang tải...",
"拓展": "Mở rộng",
"搜索(支持正则)...": "Tìm kiếm (hỗ trợ regex)...",
"数据集预览": "Xem trước tập dữ liệu",
"文件ID": "ID Tệp",
"新对话 ": "Cuộc trò chuyện mới ",
"新建对话保留Prompt": "Tạo Cuộc trò chuyện mới và giữ Prompt nguyên vẹn",
"是否设置 HTTP 代理?[Y/N]:": "Bạn có muốn thiết lập proxy HTTP không? [C/K]:",
"是否设置 OpenAI API 密钥?[Y/N]:": "Bạn có muốn cài đặt mã khóa API của OpenAI không? [Y/N]:",
"是否设置用户账户?设置后,用户需要登陆才可访问。输入 Yes(y) 或 No(n),默认No:": "Bạn có muốn thiết lập tài khoản người dùng không? Sau khi thiết lập, người dùng cần đăng nhập để truy cập. Nhập Yes(y) hoặc No(n), mặc định là No:",
"暂时未知": "Tạm thời chưa xác định",
"更新": "Cập nhật",
"更新失败,请尝试[手动更新](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)": "Cập nhật thất bại, vui lòng thử [cập nhật thủ công](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#手动更新)",
"更新成功,请重启本程序": "Cập nhật thành công, vui lòng khởi động lại chương trình này",
"未命名对话历史记录": "Lịch sử Cuộc trò chuyện không đặt tên",
"未设置代理...": "Không có proxy...",
"本月使用金额": "Số tiền sử dụng trong tháng",
"构建索引中……": "Xây dựng chỉ mục...",
"查看[使用介绍](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35)": "Xem [hướng dẫn sử dụng](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#微调-gpt-35) để biết thêm chi tiết",
"根据日期时间": "Theo ngày và giờ",
"模型": "Mô hình",
"模型名称后缀": "Hậu tố Tên Mô hình",
"模型自动总结(消耗tokens)": "Tự động tóm tắt bằng LLM (Tiêu thụ token)",
"模型设置为了:": "Mô hình đã được đặt thành: ",
"正在尝试更新...": "Đang cố gắng cập nhật...",
"正在尝试重启...": "Đang cố gắng khởi động lại...",
"正在获取IP地址信息,请稍候...": "Đang lấy thông tin địa chỉ IP, vui lòng đợi...",
"正在进行首次设置,请按照提示进行配置,配置将会被保存在": "Đang thiết lập lần đầu, vui lòng làm theo hướng dẫn để cấu hình, cài đặt sẽ được lưu vào",
"没有找到任何支持的文档。": "Không tìm thấy tài liệu hỗ trợ nào.",
"添加训练好的模型到模型列表": "Thêm mô hình đã đào tạo vào danh sách mô hình",
"状态": "Tình trạng",
"现在开始设置其他在线模型的API Key": "Bây giờ bắt đầu thiết lập API Key cho các mô hình trực tuyến khác",
"现在开始进行交互式配置。碰到不知道该怎么办的设置项时,请直接按回车键跳过,程序会自动选择合适的默认值。": "Bắt đầu cấu hình tương tác ngay bây giờ. Khi gặp các mục cài đặt không biết phải làm gì, hãy nhấn Enter để bỏ qua, chương trình sẽ tự động chọn giá trị mặc định phù hợp.",
"现在开始进行交互式配置:": "Bắt đầu cấu hình tương tác ngay bây giờ:",
"现在开始进行软件功能设置": "Bắt đầu cài đặt chức năng phần mềm ngay bây giờ.",
"生成内容总结中……": "Đang tạo tóm tắt nội dung...",
"用于定位滥用行为": "Sử dụng để xác định hành vi lạm dụng",
"用户标识符": "Định danh người dùng",
"由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536)、[明昭MZhao](https://space.bilibili.com/24807452) 和 [Keldos](https://github.com/Keldos-Li) 开发<br />访问川虎Chat的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本": "Phát triển bởi Bilibili [土川虎虎虎](https://space.bilibili.com/29125536), [明昭MZhao](https://space.bilibili.com/24807452) và [Keldos](https://github.com/Keldos-Li)\n\nTải mã nguồn mới nhất từ [GitHub](https://github.com/GaiZhenbiao/ChuanhuChatGPT)",
"由于下面的原因,Google 拒绝返回 Gemini 的回答:\\n\\n": "Vì lý do dưới đây, Google từ chối trả về câu trả lời của Gemini:\\n\\n",
"知识库": "Cơ sở kiến thức",
"知识库文件": "Tệp cơ sở kiến thức",
"立即重启": "Khởi động lại ngay",
"第一条提问": "Câu hỏi đầu tiên",
"索引已保存至本地!": "Chỉ mục đã được lưu cục bộ!",
"索引构建失败!": "Xây dựng chỉ mục thất bại!",
"索引构建完成": "Xây dựng chỉ mục hoàn tất",
"索引构建完成!": "Xây dựng chỉ mục đã hoàn thành!",
"网络": "Mạng",
"获取API使用情况失败:": "Lỗi khi lấy thông tin sử dụng API:",
"获取IP地理位置失败。原因:": "Không thể lấy vị trí địa lý của IP. Nguyên nhân: ",
"获取对话时发生错误,请查看后台日志": "Xảy ra lỗi khi nhận cuộc trò chuyện, kiểm tra nhật ký nền",
"覆盖gradio.oauth /logout路由": "Ghi đè tuyến đường gradio.oauth / logout.",
"训练": "Đào tạo",
"训练状态": "Tình trạng đào tạo",
"训练轮数(Epochs)": "Số lượt đào tạo (Epochs)",
"设置": "Cài đặt",
"设置保存文件名": "Đặt tên tệp lưu",
"设置完成。现在请重启本程序。": "Đã thiết lập xong. Vui lòng khởi động lại chương trình này.",
"设置文件名: 默认为.json,可选为.md": "Đặt tên tệp: mặc định là .json, tùy chọn là .md",
"识别公式": "Nhận dạng công thức",
"详情": "Chi tiết",
"请先输入用户名,输入空行结束添加用户:": "Vui lòng nhập tên người dùng trước, nhập một dòng trống để kết thúc việc thêm người dùng:",
"请先选择Ollama后端模型\\n\\n": "Vui lòng chọn mô hình backend của Ollama trước\\n\\n",
"请查看 config_example.json,配置 Azure OpenAI": "Vui lòng xem tệp config_example.json để cấu hình Azure OpenAI",
"请检查网络连接,或者API-Key是否有效。": "Vui lòng kiểm tra kết nối mạng hoặc xem xét tính hợp lệ của API-Key.",
"请输入 ": "Vui lòng nhập ",
method set_single_turn (line 865) | def set_single_turn(self, new_single_turn):
method set_streaming (line 869) | def set_streaming(self, new_streaming):
method reset (line 873) | def reset(self, remain_system_prompt=False):
method delete_first_conversation (line 915) | def delete_first_conversation(self):
method delete_last_conversation (line 921) | def delete_last_conversation(self, chatbot):
method token_message (line 939) | def token_message(self, token_lst=None):
method rename_chat_history (line 952) | def rename_chat_history(self, filename):
method auto_name_chat_history (line 972) | def auto_name_chat_history(
method auto_save (line 984) | def auto_save(self, chatbot=None):
method export_markdown (line 988) | def export_markdown(self, filename, chatbot):
method upload_chat_history (line 995) | def upload_chat_history(self, new_history_file_content=None):
method load_chat_history (line 1023) | def load_chat_history(self, new_history_file_path=None):
method delete_chat_history (line 1132) | def delete_chat_history(self, filename):
method auto_load (line 1163) | def auto_load(self):
method new_auto_history_filename (line 1167) | def new_auto_history_filename(self):
method like (line 1170) | def like(self):
method dislike (line 1174) | def dislike(self):
method deinitialize (line 1178) | def deinitialize(self):
method clear_cuda_cache (line 1182) | def clear_cuda_cache(self):
method get_base64_image (line 1189) | def get_base64_image(self, image_path):
method get_image_type (line 1201) | def get_image_type(self, image_path):
class Base_Chat_Langchain_Client (line 1208) | class Base_Chat_Langchain_Client(BaseLLMModel):
method __init__ (line 1209) | def __init__(self, model_name, user_name=""):
method setup_model (line 1214) | def setup_model(self):
method _get_langchain_style_history (line 1218) | def _get_langchain_style_history(self):
method get_answer_at_once (line 1227) | def get_answer_at_once(self):
method get_answer_stream_iter (line 1235) | def get_answer_stream_iter(self):
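Judging by the listing, concrete clients subclass `BaseLLMModel` and override `get_answer_at_once` (blocking) and/or `get_answer_stream_iter` (a generator). A self-contained toy showing the shape of that contract — `ToyBaseModel` and `EchoModel` are hypothetical stand-ins, greatly simplified relative to the real base class:

```python
class ToyBaseModel:
    """Simplified stand-in for BaseLLMModel's answer interface."""

    def __init__(self, model_name):
        self.model_name = model_name
        self.history = []  # list of {"role": ..., "content": ...} messages

    def get_answer_at_once(self):
        # Subclasses return (answer_text, total_token_count).
        raise NotImplementedError

    def get_answer_stream_iter(self):
        # Default: fall back to the non-streaming path, yielding once.
        answer, _ = self.get_answer_at_once()
        yield answer

class EchoModel(ToyBaseModel):
    def get_answer_at_once(self):
        last_user = self.history[-1]["content"]
        answer = f"echo: {last_user}"
        return answer, len(answer.split())
```

A model that only implements `get_answer_at_once` still works with the streaming UI path, just without incremental output.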
FILE: modules/models/configuration_moss.py
class MossConfig (line 10) | class MossConfig(PretrainedConfig):
method __init__ (line 75) | def __init__(
FILE: modules/models/inspurai.py
class Example (line 17) | class Example:
method __init__ (line 20) | def __init__(self, inp, out):
method get_input (line 25) | def get_input(self):
method get_output (line 29) | def get_output(self):
method get_id (line 33) | def get_id(self):
method as_dict (line 37) | def as_dict(self):
class Yuan (line 45) | class Yuan:
method __init__ (line 50) | def __init__(self,
method set_account (line 84) | def set_account(self, api_key):
method add_example (line 88) | def add_example(self, ex):
method delete_example (line 94) | def delete_example(self, id):
method get_example (line 99) | def get_example(self, id):
method get_all_examples (line 103) | def get_all_examples(self):
method get_prime_text (line 107) | def get_prime_text(self):
method get_engine (line 112) | def get_engine(self):
method get_temperature (line 116) | def get_temperature(self):
method get_max_tokens (line 120) | def get_max_tokens(self):
method craft_query (line 124) | def craft_query(self, prompt):
method format_example (line 133) | def format_example(self, ex):
method response (line 139) | def response(self,
method del_special_chars (line 163) | def del_special_chars(self, msg):
method submit_API (line 169) | def submit_API(self, prompt, trun=[]):
class YuanAPI (line 209) | class YuanAPI:
method __init__ (line 216) | def __init__(self, user, phone):
method code_md5 (line 221) | def code_md5(str):
method rest_get (line 229) | def rest_get(url, header, timeout, show_error=False):
method header_generation (line 239) | def header_generation(self):
method submit_request (line 246) | def submit_request(self, query, temperature, topP, topK, max_tokens, e...
method reply_request (line 265) | def reply_request(self, requestId, cycle_count=5):
class Yuan_Client (line 281) | class Yuan_Client(BaseLLMModel):
method __init__ (line 283) | def __init__(self, model_name, api_key, user_name="", system_prompt=No...
method set_text_prefix (line 292) | def set_text_prefix(self, option, value):
method get_answer_at_once (line 298) | def get_answer_at_once(self):
FILE: modules/models/midjourney.py
class Midjourney_Client (line 23) | class Midjourney_Client(XMChat):
class FetchDataPack (line 25) | class FetchDataPack:
method __init__ (line 38) | def __init__(self, action, prefix_content, task_id, timeout=900):
method __init__ (line 46) | def __init__(self, model_name, api_key, user_name=""):
method use_mj_self_proxy_url (line 69) | def use_mj_self_proxy_url(self, img_url):
method split_image (line 78) | def split_image(self, image_url):
method auth_mj (line 98) | def auth_mj(self):
method request_mj (line 105) | def request_mj(self, path: str, action: str, data: str, retries=3):
method fetch_status (line 133) | def fetch_status(self, fetch_data: FetchDataPack):
method handle_file_upload (line 205) | def handle_file_upload(self, files, chatbot, language):
method reset (line 220) | def reset(self, remain_system_prompt=False):
method get_answer_at_once (line 225) | def get_answer_at_once(self):
method get_answer_stream_iter (line 289) | def get_answer_stream_iter(self):
method get_help (line 355) | def get_help(self):
FILE: modules/models/minimax.py
class MiniMax_Client (line 14) | class MiniMax_Client(BaseLLMModel):
method __init__ (line 20) | def __init__(self, model_name, api_key, user_name="", system_prompt=No...
method get_answer_at_once (line 31) | def get_answer_at_once(self):
method get_answer_stream_iter (line 57) | def get_answer_stream_iter(self):
method _get_response (line 68) | def _get_response(self, stream=False):
method _decode_chat_response (line 130) | def _decode_chat_response(self, response):
FILE: modules/models/modeling_moss.py
function create_sinusoidal_positions (line 37) | def create_sinusoidal_positions(num_pos: int, dim: int) -> torch.Tensor:
function rotate_every_two (line 44) | def rotate_every_two(x: torch.Tensor) -> torch.Tensor:
function apply_rotary_pos_emb (line 52) | def apply_rotary_pos_emb(tensor: torch.Tensor, sin: torch.Tensor, cos: t...
class MossAttention (line 58) | class MossAttention(nn.Module):
method __init__ (line 59) | def __init__(self, config):
method _split_heads (line 89) | def _split_heads(self, x, n_head, dim_head, mp_num):
method _merge_heads (line 94) | def _merge_heads(self, tensor, num_attention_heads, attn_head_size):
method _attn (line 107) | def _attn(
method forward (line 148) | def forward(
class MossMLP (line 227) | class MossMLP(nn.Module):
method __init__ (line 228) | def __init__(self, intermediate_size, config): # in MLP: intermediate...
method forward (line 238) | def forward(self, hidden_states: Optional[torch.FloatTensor]) -> torch...
class MossBlock (line 247) | class MossBlock(nn.Module):
method __init__ (line 248) | def __init__(self, config):
method forward (line 255) | def forward(
class MossPreTrainedModel (line 290) | class MossPreTrainedModel(PreTrainedModel):
method __init__ (line 301) | def __init__(self, *inputs, **kwargs):
method _init_weights (line 304) | def _init_weights(self, module):
method _set_gradient_checkpointing (line 320) | def _set_gradient_checkpointing(self, module, value=False):
class MossModel (line 390) | class MossModel(MossPreTrainedModel):
method __init__ (line 391) | def __init__(self, config):
method get_input_embeddings (line 407) | def get_input_embeddings(self):
method set_input_embeddings (line 410) | def set_input_embeddings(self, new_embeddings):
method forward (line 419) | def forward(
class MossForCausalLM (line 583) | class MossForCausalLM(MossPreTrainedModel):
method __init__ (line 586) | def __init__(self, config):
method get_output_embeddings (line 594) | def get_output_embeddings(self):
method set_output_embeddings (line 597) | def set_output_embeddings(self, new_embeddings):
method prepare_inputs_for_generation (line 600) | def prepare_inputs_for_generation(self, input_ids, past_key_values=Non...
method forward (line 633) | def forward(
method _reorder_cache (line 700) | def _reorder_cache(
FILE: modules/models/models.py
function get_model (line 17) | def get_model(
FILE: modules/models/spark.py
class Ws_Param (line 21) | class Ws_Param(object):
method __init__ (line 24) | def __init__(self, APPID, APIKey, APISecret, Spark_url):
method create_url (line 33) | def create_url(self):
class Spark_Client (line 67) | class Spark_Client(BaseLLMModel):
method __init__ (line 68) | def __init__(self, model_name, appid, api_key, api_secret, user_name="...
method on_error (line 79) | def on_error(self, ws, error):
method on_close (line 83) | def on_close(self, ws, one, two):
method on_open (line 87) | def on_open(self, ws):
method run (line 90) | def run(self, ws, *args):
method on_message (line 97) | def on_message(self, ws, message):
method gen_params (line 100) | def gen_params(self):
method get_answer_stream_iter (line 118) | def get_answer_stream_iter(self):
FILE: modules/models/tokenization_moss.py
function bytes_to_unicode (line 50) | def bytes_to_unicode():
function get_pairs (line 74) | def get_pairs(word):
class MossTokenizer (line 88) | class MossTokenizer(PreTrainedTokenizer):
method __init__ (line 132) | def __init__(
method vocab_size (line 178) | def vocab_size(self):
method get_vocab (line 181) | def get_vocab(self):
method bpe (line 184) | def bpe(self, token):
method build_inputs_with_special_tokens (line 226) | def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=No...
method _tokenize (line 239) | def _tokenize(self, text):
method _convert_token_to_id (line 249) | def _convert_token_to_id(self, token):
method _convert_id_to_token (line 253) | def _convert_id_to_token(self, index):
method convert_tokens_to_string (line 257) | def convert_tokens_to_string(self, tokens):
method save_vocabulary (line 263) | def save_vocabulary(self, save_directory: str, filename_prefix: Option...
method prepare_for_tokenization (line 292) | def prepare_for_tokenization(self, text, is_split_into_words=False, **...
method decode (line 298) | def decode(
method truncate (line 342) | def truncate(self, completion, truncate_before_pattern):
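`tokenization_moss.py` opens with the GPT-2-style `bytes_to_unicode` helper, which maps all 256 byte values to printable unicode characters so BPE can operate on reversible strings. A sketch of the standard construction (written from the well-known GPT-2 recipe, which this file presumably follows):

```python
def bytes_to_unicode():
    # Printable bytes map to themselves.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            # Non-printable bytes are shifted into a private printable range.
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, (chr(c) for c in cs)))
```

The mapping is a bijection, which is what makes byte-level BPE losslessly reversible.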
FILE: modules/overwrites.py
function postprocess (line 14) | def postprocess(
function postprocess_chat_messages (line 45) | def postprocess_chat_messages(
function init_with_class_name_as_elem_classes (line 70) | def init_with_class_name_as_elem_classes(original_func):
function multipart_internal_write (line 87) | def multipart_internal_write(self, data: bytes, length: int) -> int:
function patch_gradio (line 488) | def patch_gradio():
FILE: modules/pdf_func.py
function prepare_table_config (line 6) | def prepare_table_config(crop_page):
function get_text_outside_table (line 28) | def get_text_outside_table(crop_page):
function get_title_with_cropped_page (line 51) | def get_title_with_cropped_page(first_page):
function get_column_cropped_pages (line 68) | def get_column_cropped_pages(pages, two_column=True):
function parse_pdf (line 81) | def parse_pdf(filename, two_column = True):
FILE: modules/repo.py
function run (line 25) | def run(
function run_pip (line 57) | def run_pip(command, desc=None, pref=None, live=default_command_live):
function commit_hash (line 71) | def commit_hash():
function commit_html (line 80) | def commit_html():
function tag_html (line 91) | def tag_html():
function repo_tag_html (line 112) | def repo_tag_html():
function versions_html (line 118) | def versions_html():
function version_time (line 130) | def version_time():
function get_current_branch (line 161) | def get_current_branch():
function get_latest_release (line 172) | def get_latest_release():
function get_tag_commit_hash (line 189) | def get_tag_commit_hash(tag):
function repo_need_stash (line 202) | def repo_need_stash():
function background_update (line 216) | def background_update():
FILE: modules/shared.py
function format_openai_host (line 6) | def format_openai_host(api_host: str):
class State (line 19) | class State:
method interrupt (line 29) | def interrupt(self):
method recover (line 32) | def recover(self):
method set_api_host (line 35) | def set_api_host(self, api_host: str):
method reset_api_host (line 40) | def reset_api_host(self):
method reset_all (line 49) | def reset_all(self):
method set_api_key_queue (line 53) | def set_api_key_queue(self, api_key_list):
method switching_api_key (line 59) | def switching_api_key(self, func):
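The `State` entry in `modules/shared.py` pairs `set_api_key_queue` with a `switching_api_key` decorator, suggesting round-robin rotation over multiple API keys. A minimal sketch of that idea, assuming a simple endless cycle over the key list (the repo's actual implementation may differ):

```python
from itertools import cycle

class KeyRotator:
    """Round-robin API-key rotation (sketch of the switching_api_key idea)."""

    def __init__(self):
        self.api_key_queue = None

    def set_api_key_queue(self, api_key_list):
        # Cycle endlessly over the configured keys.
        self.api_key_queue = cycle(api_key_list)

    def switching_api_key(self, func):
        # Wrap a call so each invocation uses the next key in the queue.
        def wrapped(*args, **kwargs):
            key = next(self.api_key_queue) if self.api_key_queue else None
            return func(key, *args, **kwargs)
        return wrapped
```

Rotating per call spreads rate limits across keys without the caller having to manage key state.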
FILE: modules/train_func.py
function excel_to_jsonl (line 17) | def excel_to_jsonl(filepath, preview=False):
function jsonl_save_to_disk (line 58) | def jsonl_save_to_disk(jsonl, filepath):
function estimate_cost (line 66) | def estimate_cost(ds):
function handle_dataset_selection (line 76) | def handle_dataset_selection(file_src):
function upload_to_openai (line 88) | def upload_to_openai(file_src):
function build_event_description (line 103) | def build_event_description(id, status, trained_tokens, name=i18n("暂时未知")):
function start_training (line 115) | def start_training(file_id, suffix, epochs):
function get_training_status (line 125) | def get_training_status():
function handle_dataset_clear (line 129) | def handle_dataset_clear():
function add_to_models (line 132) | def add_to_models():
function cancel_all_jobs (line 158) | def cancel_all_jobs():
FILE: modules/utils.py
class DataframeData (line 36) | class DataframeData(TypedDict):
function predict (line 41) | def predict(current_model, *args):
function billing_info (line 47) | def billing_info(current_model):
function set_key (line 51) | def set_key(current_model, *args):
function load_chat_history (line 55) | def load_chat_history(current_model, *args):
function delete_chat_history (line 59) | def delete_chat_history(current_model, *args):
function interrupt (line 63) | def interrupt(current_model, *args):
function reset (line 67) | def reset(current_model, *args):
function retry (line 71) | def retry(current_model, *args):
function delete_first_conversation (line 77) | def delete_first_conversation(current_model, *args):
function delete_last_conversation (line 81) | def delete_last_conversation(current_model, *args):
function set_system_prompt (line 85) | def set_system_prompt(current_model, *args):
function rename_chat_history (line 89) | def rename_chat_history(current_model, *args):
function auto_name_chat_history (line 93) | def auto_name_chat_history(current_model, *args):
function export_markdown (line 97) | def export_markdown(current_model, *args):
function upload_chat_history (line 101) | def upload_chat_history(current_model, *args):
function set_token_upper_limit (line 105) | def set_token_upper_limit(current_model, *args):
function set_temperature (line 109) | def set_temperature(current_model, *args):
function set_top_p (line 113) | def set_top_p(current_model, *args):
function set_n_choices (line 117) | def set_n_choices(current_model, *args):
function set_stop_sequence (line 121) | def set_stop_sequence(current_model, *args):
function set_max_tokens (line 125) | def set_max_tokens(current_model, *args):
function set_presence_penalty (line 129) | def set_presence_penalty(current_model, *args):
function set_frequency_penalty (line 133) | def set_frequency_penalty(current_model, *args):
function set_logit_bias (line 137) | def set_logit_bias(current_model, *args):
function set_user_identifier (line 141) | def set_user_identifier(current_model, *args):
function set_single_turn (line 145) | def set_single_turn(current_model, *args):
function set_streaming (line 148) | def set_streaming(current_model, *args):
function handle_file_upload (line 152) | def handle_file_upload(current_model, *args):
function handle_summarize_index (line 156) | def handle_summarize_index(current_model, *args):
function like (line 160) | def like(current_model, *args):
function dislike (line 164) | def dislike(current_model, *args):
function count_token (line 168) | def count_token(input_str):
function markdown_to_html_with_syntax_highlight (line 176) | def markdown_to_html_with_syntax_highlight(md_str): # deprecated
function normalize_markdown (line 198) | def normalize_markdown(md_text: str) -> str: # deprecated
function convert_mdtext (line 222) | def convert_mdtext(md_text): # deprecated
function remove_html_tags (line 246) | def remove_html_tags(chatbot):
function clip_rawtext (line 282) | def clip_rawtext(chat_message, need_escape=True):
function convert_bot_before_marked (line 308) | def convert_bot_before_marked(chat_message):
function convert_user_before_marked (line 333) | def convert_user_before_marked(chat_message):
function escape_markdown (line 340) | def escape_markdown(text):
function convert_asis (line 372) | def convert_asis(userinput): # deprecated
function detect_converted_mark (line 379) | def detect_converted_mark(userinput): # deprecated
function detect_language (line 389) | def detect_language(code): # deprecated
function construct_text (line 399) | def construct_text(role, text):
function construct_user (line 403) | def construct_user(text):
function construct_image (line 406) | def construct_image(path):
function construct_system (line 410) | def construct_system(text):
function construct_assistant (line 414) | def construct_assistant(text):
function save_file (line 418) | def save_file(filename, model):
function save_md_file (line 480) | def save_md_file(json_file_path):
function sorted_by_pinyin (line 492) | def sorted_by_pinyin(list):
function sorted_by_last_modified_time (line 496) | def sorted_by_last_modified_time(list, dir):
function get_file_names_by_type (line 502) | def get_file_names_by_type(dir, filetypes=[".json"]):
function get_file_names_by_pinyin (line 512) | def get_file_names_by_pinyin(dir, filetypes=[".json"]):
function get_file_names_dropdown_by_pinyin (line 520) | def get_file_names_dropdown_by_pinyin(dir, filetypes=[".json"]):
function get_file_names_by_last_modified_time (line 525) | def get_file_names_by_last_modified_time(dir, filetypes=[".json"]):
function get_history_names (line 533) | def get_history_names(user_name=""):
function get_first_history_name (line 548) | def get_first_history_name(user_name=""):
function get_history_list (line 553) | def get_history_list(user_name=""):
function init_history_list (line 558) | def init_history_list(user_name="", prepend=None):
function filter_history (line 567) | def filter_history(user_name, keyword):
function load_template (line 576) | def load_template(filename, mode=0):
function get_template_names (line 603) | def get_template_names():
function get_template_dropdown (line 608) | def get_template_dropdown():
function get_template_content (line 614) | def get_template_content(templates, selection, original_system_prompt):
function reset_textbox (line 622) | def reset_textbox():
function reset_default (line 627) | def reset_default():
function change_api_host (line 633) | def change_api_host(host):
function change_proxy (line 640) | def change_proxy(proxy):
function hide_middle_chars (line 648) | def hide_middle_chars(s):
function submit_key (line 660) | def submit_key(key):
function replace_today (line 667) | def replace_today(prompt):
function get_geoip (line 676) | def get_geoip():
function find_n (line 724) | def find_n(lst, max_num):
function start_outputing (line 738) | def start_outputing():
function end_outputing (line 743) | def end_outputing():
function cancel_outputing (line 750) | def cancel_outputing():
function transfer_input (line 755) | def transfer_input(inputs):
function update_chuanhu (line 767) | def update_chuanhu(username):
function add_source_numbers (line 788) | def add_source_numbers(lst, source_name="Source", use_source=True):
function add_details (line 798) | def add_details(lst):
function sheet_to_string (line 806) | def sheet_to_string(sheet, sheet_name=None):
function excel_to_string (line 818) | def excel_to_string(file_path):
function get_last_day_of_month (line 833) | def get_last_day_of_month(any_day):
function get_model_source (line 840) | def get_model_source(model_name, alternative_source):
function refresh_ui_elements_on_load (line 845) | def refresh_ui_elements_on_load(current_model, selected_model_name, user...
function toggle_like_btn_visibility (line 850) | def toggle_like_btn_visibility(selected_model_name):
function get_corresponding_file_type_by_model_name (line 857) | def get_corresponding_file_type_by_model_name(selected_model_name):
function new_auto_history_filename (line 868) | def new_auto_history_filename(username):
function get_history_filepath (line 882) | def get_history_filepath(username):
function beautify_err_msg (line 893) | def beautify_err_msg(err_msg):
function auth_from_conf (line 911) | def auth_from_conf(username, password):
function get_file_hash (line 935) | def get_file_hash(file_src=None, file_paths=None):
function myprint (line 949) | def myprint(**args):
function replace_special_symbols (line 953) | def replace_special_symbols(string, replace_string=" "):
class ConfigType (line 962) | class ConfigType(Enum):
class ConfigItem (line 970) | class ConfigItem:
method __init__ (line 971) | def __init__(self, key, name, default=None, type=ConfigType.String) ->...
function generate_prompt_string (line 978) | def generate_prompt_string(config_item):
function generate_result_string (line 1001) | def generate_result_string(config_item, config_value):
class SetupWizard (line 1012) | class SetupWizard:
method __init__ (line 1013) | def __init__(self, file_path="config.json") -> None:
method set (line 1044) | def set(self, config_items: List[ConfigItem], prompt: str):
method set_users (line 1104) | def set_users(self):
method __setitem__ (line 1121) | def __setitem__(self, setting_key: str, value):
method __getitem__ (line 1124) | def __getitem__(self, setting_key: str):
method save (line 1127) | def save(self):
function setup_wizard (line 1132) | def setup_wizard():
function reboot_chuanhu (line 1500) | def reboot_chuanhu():
function setPlaceholder (line 1506) | def setPlaceholder(model_name: str | None = "", model: BaseLLMModel | No...
function download_file (line 1570) | def download_file(path):
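Among the `modules/utils.py` helpers, `hide_middle_chars` by its name masks the middle of a secret (such as an API key) for display. A plausible sketch — the exact kept widths and mask character are assumptions, not taken from the repo:

```python
def hide_middle_chars(s, keep=4):
    """Mask all but the first and last `keep` characters (sketch)."""
    if s is None or len(s) <= 2 * keep:
        return s  # too short to mask meaningfully
    return s[:keep] + "*" * (len(s) - 2 * keep) + s[-keep:]
```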
FILE: modules/webui.py
function get_html (line 14) | def get_html(filename):
function webpath (line 21) | def webpath(fn):
function javascript_html (line 30) | def javascript_html():
function css_html (line 38) | def css_html():
function list_scripts (line 44) | def list_scripts(scriptdirname, extension):
function reload_javascript (line 54) | def reload_javascript():
FILE: modules/webui_locale.py
class I18nAuto (line 6) | class I18nAuto:
method __init__ (line 7) | def __init__(self):
method change_language (line 33) | def change_language(self, language):
method __call__ (line 46) | def __call__(self, key):
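`I18nAuto` in `modules/webui_locale.py` is a callable that returns the translation for a key, with the key itself as fallback — the pattern behind the `i18n(...)` calls seen elsewhere in the listing (e.g. in `train_func.py`). A minimal sketch using an in-memory mapping in place of the repo's `locale/*.json` files (file loading omitted):

```python
class I18nAuto:
    """Tiny i18n lookup: return the translation if present, else the key."""

    def __init__(self, mapping=None):
        # The real class loads locale/<lang>.json; here we take a dict directly.
        self.map = mapping or {}

    def change_language(self, mapping):
        self.map = mapping

    def __call__(self, key):
        # Fall back to the key so untranslated strings still render.
        return self.map.get(key, key)
```

Falling back to the key means adding a new UI string never breaks a locale that has not caught up yet.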
FILE: web_assets/javascript/ChuanhuChat.js
constant MAX_HISTORY_LENGTH (line 4) | const MAX_HISTORY_LENGTH = 32;
function addInit (line 51) | function addInit() {
function initialize (line 76) | function initialize() {
function gradioApp (line 145) | function gradioApp() {
function showConfirmationDialog (line 157) | function showConfirmationDialog(a, file, c) {
function selectHistory (line 167) | function selectHistory() {
function disableSendBtn (line 215) | function disableSendBtn() {
function checkModel (line 222) | function checkModel() {
function bindChatbotPlaceholderButtons (line 266) | function bindChatbotPlaceholderButtons() {
function toggleDarkMode (line 288) | function toggleDarkMode(isEnabled) {
function adjustDarkMode (line 299) | function adjustDarkMode() {
function btnToggleDarkMode (line 311) | function btnToggleDarkMode() {
function setScrollShadow (line 316) | function setScrollShadow() {
function setPopupBoxPosition (line 342) | function setPopupBoxPosition() {
function setChatbotHeight (line 358) | function setChatbotHeight() {
function setChatbotScroll (line 381) | function setChatbotScroll() {
function setAutocomplete (line 386) | function setAutocomplete() {
function clearChatbot (line 392) | function clearChatbot(a, b) {
function chatbotContentChanged (line 398) | function chatbotContentChanged(attempt = 1, force = false) {
function makeML (line 502) | function makeML(str) {
FILE: web_assets/javascript/chat-history.js
function saveHistoryHtml (line 6) | function saveHistoryHtml() {
function loadHistoryHtml (line 14) | function loadHistoryHtml() {
function clearHistoryHtml (line 79) | function clearHistoryHtml() {
FILE: web_assets/javascript/chat-list.js
function setChatListHeader (line 4) | function setChatListHeader() {
function setChatList (line 15) | function setChatList() {
function disableChatListClick (line 53) | function disableChatListClick() {
function enableChatListClick (line 63) | function enableChatListClick() {
function exportBtnCheck (line 78) | function exportBtnCheck() {
function saveChatHistory (line 85) | function saveChatHistory(a, b, c, d) {
function isValidFileName (line 104) | function isValidFileName(fileName) {
FILE: web_assets/javascript/fake-gradio.js
function newChatClick (line 5) | function newChatClick() {
function jsonDownloadClick (line 8) | function jsonDownloadClick() {
function mdDownloadClick (line 11) | function mdDownloadClick() {
function setUploader (line 16) | function setUploader() {
function transUpload (line 43) | function transUpload() {
function setCheckboxes (line 59) | function setCheckboxes() {
function bgChangeSingleSession (line 81) | function bgChangeSingleSession() {
function bgChangeOnlineSearch (line 86) | function bgChangeOnlineSearch() {
function updateCheckboxes (line 92) | function updateCheckboxes() {
function transEventListeners (line 98) | function transEventListeners(target, source, events) {
function bgSelectHistory (line 111) | function bgSelectHistory(a,b){
function bgRebootChuanhu (line 117) | function bgRebootChuanhu() {
FILE: web_assets/javascript/file-input.js
function setPasteUploader (line 7) | function setPasteUploader() {
function setDragUploader (line 34) | function setDragUploader() {
function upload_files (line 75) | async function upload_files(files) {
function draggingHint (line 105) | function draggingHint() {
FILE: web_assets/javascript/localization.js
function setLoclize (line 20) | function setLoclize() {
function i18n (line 36) | function i18n(msg) {
FILE: web_assets/javascript/message-button.js
function convertBotMessage (line 4) | function convertBotMessage(gradioButtonMsg) {
function addChuanhuButton (line 54) | function addChuanhuButton(botElement) {
function setLatestMessage (line 187) | function setLatestMessage() {
function addLatestMessageButtons (line 205) | function addLatestMessageButtons(botElement) {
function addGeneratingLoader (line 274) | function addGeneratingLoader(botElement) {
FILE: web_assets/javascript/updater.js
function setUpdater (line 5) | function setUpdater() {
function getLatestRelease (line 50) | async function getLatestRelease() {
function updateLatestVersion (line 83) | async function updateLatestVersion() {
function getUpdateInfo (line 177) | function getUpdateInfo() {
function bgUpdateChuanhu (line 184) | function bgUpdateChuanhu() {
function cancelUpdate (line 199) | function cancelUpdate() {
function openUpdateToast (line 202) | function openUpdateToast() {
function closeUpdateToast (line 207) | function closeUpdateToast() {
function manualCheckUpdate (line 215) | function manualCheckUpdate() {
function updateSuccessHtml (line 222) | function updateSuccessHtml() {
function noUpdate (line 229) | function noUpdate(message="") {
function noUpdateHtml (line 234) | function noUpdateHtml(message="") {
function getUpdateStatus (line 250) | function getUpdateStatus() {
function disableUpdateBtns (line 259) | function disableUpdateBtns() {
function enableUpdateBtns (line 265) | function enableUpdateBtns() {
function disableUpdateBtn_enableCancelBtn (line 271) | function disableUpdateBtn_enableCancelBtn() {
FILE: web_assets/javascript/user-info.js
function getUserInfo (line 8) | function getUserInfo() {
function showOrHideUserInfo (line 36) | function showOrHideUserInfo() {
FILE: web_assets/javascript/utils.js
function isImgUrl (line 3) | function isImgUrl(url) {
function escapeMarkdown (line 18) | function escapeMarkdown(text) {
function downloadHistory (line 56) | function downloadHistory(gradioUsername, historyname, format=".json") {
function downloadFile (line 66) | function downloadFile(fileUrl, filename = "", format = "", retryTimeout ...
function statusDisplayMessage (line 111) | function statusDisplayMessage(message) {
function bindFancyBox (line 116) | function bindFancyBox() {
function rebootingChuanhu (line 126) | function rebootingChuanhu() {
function restart_reload (line 150) | function restart_reload() {
function requestGet (line 166) | function requestGet(url, data, handler, errorHandler) {
FILE: web_assets/javascript/webui.js
function openSettingBox (line 2) | function openSettingBox() {
function openTrainingBox (line 11) | function openTrainingBox() {
function openChatMore (line 19) | function openChatMore() {
function closeChatMore (line 24) | function closeChatMore() {
function showMask (line 30) | function showMask(obj) {
function chatMoreBtnClick (line 63) | function chatMoreBtnClick() {
function closeBtnClick (line 71) | function closeBtnClick(obj) {
function closeBox (line 80) | function closeBox() {
function closeSide (line 89) | function closeSide(sideArea) {
function openSide (line 103) | function openSide(sideArea) {
function menuClick (line 116) | function menuClick() {
function toolboxClick (line 132) | function toolboxClick() {
function adjustSide (line 154) | function adjustSide() {
function adjustMask (line 198) | function adjustMask() {
function checkChatbotWidth (line 228) | function checkChatbotWidth() {
function checkChatMoreMask (line 254) | function checkChatMoreMask() {
function showKnowledgeBase (line 261) | function showKnowledgeBase(){
function letThisSparkle (line 276) | function letThisSparkle(element, sparkleTime = 3000) {
function switchToolBoxTab (line 281) | function switchToolBoxTab(tabIndex) {
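The symbol index above lists the helpers in web_assets/javascript/utils.js by signature only. As one concrete illustration, `isImgUrl` can be sketched from the content preview further down (a `data:image/` prefix check plus an extension regex); everything past those two checks is an assumption, not the repository's exact code:

```javascript
// Hedged sketch of isImgUrl from web_assets/javascript/utils.js.
// The data-URI check and the extension regex appear in the file's
// preview snippet; the final fallback via regex test is assumed.
function isImgUrl(url) {
    const imageExtensions = /\.(jpg|jpeg|png|gif|bmp|webp)$/i;
    if (url.startsWith('data:image/')) {
        return true;  // inline base64 images count as image URLs
    }
    return imageExtensions.test(url);  // otherwise decide by file extension
}
```

Under these assumptions, `isImgUrl("avatar.webp")` is true while `isImgUrl("notes.txt")` is false; the extension match is case-insensitive.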
Condensed preview — 106 files, each entry showing path, character count, and a content snippet; the full structured content runs to 880K chars.
[
{
"path": ".github/CONTRIBUTING.md",
"chars": 1403,
"preview": "# 如何做出贡献\n\n感谢您对 **川虎Chat** 的关注!感谢您投入时间为我们的项目做出贡献!\n\n在开始之前,您可以阅读我们的以下简短提示。更多信息您可以点击链接查阅。\n\n## GitHub 新手?\n\n以下是 GitHub 的一些资源,如"
},
{
"path": ".github/ISSUE_TEMPLATE/config.yml",
"chars": 148,
"preview": "blank_issues_enabled: \ncontact_links:\n - name: 讨论区\n url: https://github.com/GaiZhenbiao/ChuanhuChatGPT/discussions\n "
},
{
"path": ".github/ISSUE_TEMPLATE/feature-request.yml",
"chars": 884,
"preview": "name: 功能请求\ndescription: \"请求更多功能!\"\ntitle: \"[功能请求]: \"\nlabels: [\"feature request\"]\nbody:\n - type: markdown\n attributes:"
},
{
"path": ".github/ISSUE_TEMPLATE/report-bug.yml",
"chars": 2144,
"preview": "name: 报告BUG\ndescription: \"报告一个bug,且您确信这是bug而不是您的问题\"\ntitle: \"[Bug]: \"\nlabels: [\"bug\"]\nbody:\n - type: markdown\n attrib"
},
{
"path": ".github/ISSUE_TEMPLATE/report-docker.yml",
"chars": 2206,
"preview": "name: Docker部署错误\ndescription: \"报告使用 Docker 部署时的问题或错误\"\ntitle: \"[Docker]: \"\nlabels: [\"question\",\"docker deployment\"]\nbody:"
},
{
"path": ".github/ISSUE_TEMPLATE/report-localhost.yml",
"chars": 2407,
"preview": "name: 本地部署错误\ndescription: \"报告本地部署时的问题或错误(小白首选)\"\ntitle: \"[本地部署]: \"\nlabels: [\"question\",\"localhost deployment\"]\nbody:\n - "
},
{
"path": ".github/ISSUE_TEMPLATE/report-others.yml",
"chars": 1951,
"preview": "name: 其他错误\ndescription: \"报告其他问题(如 Hugging Face 中的 Space 等)\"\ntitle: \"[其他]: \"\nlabels: [\"question\"]\nbody:\n - type: markdow"
},
{
"path": ".github/ISSUE_TEMPLATE/report-server.yml",
"chars": 2193,
"preview": "name: 服务器部署错误\ndescription: \"报告在远程服务器上部署时的问题或错误\"\ntitle: \"[远程部署]: \"\nlabels: [\"question\",\"server deployment\"]\nbody:\n - typ"
},
{
"path": ".github/pull_request_template.md",
"chars": 1389,
"preview": "<!--\n这是一个拉取请求模板。本文段处于注释中,请您先查看本注释,在您提交时该段文字将不会显示。\nThis is a pull request template. This paragraph is in the comments, pl"
},
{
"path": ".github/workflows/Build_Docker.yml",
"chars": 1450,
"preview": "name: Build Docker when Push\n\non:\n push:\n branches:\n - \"main\"\n\njobs:\n docker:\n runs-on: ubuntu-latest\n s"
},
{
"path": ".github/workflows/Release_docker.yml",
"chars": 1645,
"preview": "name: Build and Push Docker when Release\n\non:\n release:\n types: [published]\n workflow_dispatch:\n\njobs:\n docker:\n "
},
{
"path": ".gitignore",
"chars": 2014,
"preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
},
{
"path": "CITATION.cff",
"chars": 691,
"preview": "cff-version: 1.2.0\ntitle: Chuanhu Chat\nmessage: >-\n If you use this software, please cite it using these\n metadata.\nty"
},
{
"path": "ChuanhuChatbot.py",
"chars": 42529,
"preview": "# -*- coding:utf-8 -*-\nimport logging\nlogging.basicConfig(\n level=logging.INFO,\n format=\"%(asctime)s [%(levelname)"
},
{
"path": "Dockerfile",
"chars": 1275,
"preview": "FROM python:3.10-slim-buster as builder\n\n# Install build essentials, Rust, and additional dependencies\nRUN apt-get updat"
},
{
"path": "LICENSE",
"chars": 35149,
"preview": " GNU GENERAL PUBLIC LICENSE\n Version 3, 29 June 2007\n\n Copyright (C) 2007 Free "
},
{
"path": "README.md",
"chars": 7595,
"preview": "<div align=\"right\">\n <!-- 语言: -->\n 简体中文 | <a title=\"English\" href=\"./readme/README_en.md\">English</a> | <a title=\"Japa"
},
{
"path": "config_example.json",
"chars": 5293,
"preview": "{\n // 各配置具体说明,见 [https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程#配置-configjson]\n\n //== API 配置 ==\n \"open"
},
{
"path": "configs/ds_config_chatbot.json",
"chars": 343,
"preview": "{\n \"fp16\": {\n \"enabled\": false\n },\n \"bf16\": {\n \"enabled\": true\n },\n \"comms_logger\": {\n "
},
{
"path": "locale/en_US.json",
"chars": 26037,
"preview": "{\n \"API Key 列表\": \"API Key List\",\n \"Azure OpenAI Chat 模型 Deployment 名称\": \"Azure OpenAI Chat Model Deployment Name\","
},
{
"path": "locale/extract_locale.py",
"chars": 5338,
"preview": "import asyncio\nimport logging\nimport os\nimport re\nimport sys\n\nimport aiohttp\nimport commentjson\nimport commentjson as js"
},
{
"path": "locale/ja_JP.json",
"chars": 9239,
"preview": "{\n \"获取资源错误\": \"リソースの取得エラー\",\n \"该模型不支持多模态输入\": \"このモデルはマルチモーダル入力に対応していません。\",\n \" 中。\": \"中。\",\n \" 为: \": \"対:\",\n \" 吗"
},
{
"path": "locale/ko_KR.json",
"chars": 9133,
"preview": "{\n \"获取资源错误\": \"Error fetching resources\",\n \"该模型不支持多模态输入\": \"이 모델은 다중 모달 입력을 지원하지 않습니다.\",\n \" 中。\": \"가운데입니다.\",\n \""
},
{
"path": "locale/ru_RU.json",
"chars": 12477,
"preview": "{\n \"获取资源错误\": \"Ошибка при получении ресурса\",\n \"该模型不支持多模态输入\": \"Эта модель не поддерживает многомодальный ввод\",\n "
},
{
"path": "locale/sv_SE.json",
"chars": 12039,
"preview": "{\n \"获取资源错误\": \"Fel vid hämtning av resurser\",\n \"该模型不支持多模态输入\": \"Den här modellen stöder inte multitmodal inmatning.\""
},
{
"path": "locale/vi_VN.json",
"chars": 12295,
"preview": "{\n \"获取资源错误\": \"Lỗi khi lấy tài nguyên\",\n \"该模型不支持多模态输入\": \"Mô hình này không hỗ trợ đầu vào đa phương tiện\",\n \" 中。"
},
{
"path": "locale/zh_CN.json",
"chars": 3045,
"preview": "{\n \"gpt3.5turbo_description\": \"GPT-3.5 Turbo 是由 OpenAI 开发的一款仅限文本的大型语言模型。它基于 GPT-3 模型,并已经在大量数据上进行了微调。最新版本的 GPT-3.5 Tur"
},
{
"path": "modules/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "modules/config.py",
"chars": 13225,
"preview": "from collections import defaultdict\nfrom contextlib import contextmanager\nimport os\nimport logging\nimport sys\nimport com"
},
{
"path": "modules/index_func.py",
"chars": 6235,
"preview": "import PyPDF2\nfrom langchain_community.embeddings.huggingface import HuggingFaceEmbeddings\nfrom langchain_community.vect"
},
{
"path": "modules/models/Azure.py",
"chars": 767,
"preview": "from langchain.chat_models import AzureChatOpenAI, ChatOpenAI\nimport os\n\nfrom .base_model import Base_Chat_Langchain_Cli"
},
{
"path": "modules/models/ChatGLM.py",
"chars": 3701,
"preview": "from __future__ import annotations\n\nimport logging\nimport os\nimport platform\n\nimport gc\nimport torch\nimport colorama\n\nfr"
},
{
"path": "modules/models/ChuanhuAgent.py",
"chars": 11908,
"preview": "import logging\nimport os\nfrom itertools import islice\nfrom threading import Thread\n\nimport gradio as gr\nimport requests\n"
},
{
"path": "modules/models/Claude.py",
"chars": 4320,
"preview": "from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT\nfrom ..presets import *\nfrom ..utils import *\n\nfrom .base_model"
},
{
"path": "modules/models/DALLE3.py",
"chars": 2908,
"preview": "import logging\nfrom .base_model import BaseLLMModel\nfrom .. import shared\nimport requests\nfrom ..presets import *\nfrom ."
},
{
"path": "modules/models/ERNIE.py",
"chars": 3365,
"preview": "from ..presets import *\nfrom ..utils import *\n\nfrom .base_model import BaseLLMModel\n\n\nclass ERNIE_Client(BaseLLMModel):\n"
},
{
"path": "modules/models/GoogleGemini.py",
"chars": 11138,
"preview": "import base64\nimport json\nimport logging\nimport os\nimport textwrap\nimport requests\nfrom typing import List, Dict, Any, G"
},
{
"path": "modules/models/GoogleGemma.py",
"chars": 3915,
"preview": "import logging\nfrom threading import Thread\n\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, "
},
{
"path": "modules/models/GooglePaLM.py",
"chars": 1197,
"preview": "from .base_model import BaseLLMModel\nimport google.generativeai as palm\n\n\nclass Google_PaLM_Client(BaseLLMModel):\n de"
},
{
"path": "modules/models/Groq.py",
"chars": 1710,
"preview": "import json\nimport logging\nimport textwrap\nimport uuid\n\nimport os\nfrom groq import Groq\nimport gradio as gr\nimport PIL\ni"
},
{
"path": "modules/models/LLaMA.py",
"chars": 3242,
"preview": "from __future__ import annotations\n\nimport json\nimport os\nfrom llama_cpp import Llama\n\nfrom ..index_func import *\nfrom ."
},
{
"path": "modules/models/MOSS.py",
"chars": 15354,
"preview": "# 代码主要来源于 https://github.com/OpenLMLab/MOSS/blob/main/moss_inference.py\n\nimport os\nimport torch\nimport warnings\nimport p"
},
{
"path": "modules/models/Ollama.py",
"chars": 1868,
"preview": "import json\nimport logging\nimport textwrap\nimport uuid\n\nfrom ollama import Client\n\nfrom modules.presets import i18n\n\nfro"
},
{
"path": "modules/models/OpenAIInstruct.py",
"chars": 898,
"preview": "from openai import OpenAI\n\nclient = OpenAI()\nfrom .base_model import BaseLLMModel\nfrom .. import shared\nfrom ..config im"
},
{
"path": "modules/models/OpenAIVision.py",
"chars": 13621,
"preview": "from __future__ import annotations\n\nimport json\nimport logging\nimport traceback\nimport base64\nfrom math import ceil\n\nimp"
},
{
"path": "modules/models/Qwen.py",
"chars": 2672,
"preview": "from transformers import AutoModelForCausalLM, AutoTokenizer\nimport os\nfrom transformers.generation import GenerationCon"
},
{
"path": "modules/models/StableLM.py",
"chars": 4108,
"preview": "import torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, StoppingCriteria, StoppingCriteriaL"
},
{
"path": "modules/models/XMChat.py",
"chars": 4862,
"preview": "from __future__ import annotations\n\nimport base64\nimport json\nimport logging\nimport os\nimport uuid\nfrom io import BytesI"
},
{
"path": "modules/models/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "modules/models/base_model.py",
"chars": 47943,
"preview": "from __future__ import annotations\n\nimport base64\nimport json\nimport time\nimport logging\nimport os\nimport shutil\nimport "
},
{
"path": "modules/models/configuration_moss.py",
"chars": 4985,
"preview": "\"\"\" Moss model configuration\"\"\"\n\nfrom transformers.utils import logging\nfrom transformers.configuration_utils import Pre"
},
{
"path": "modules/models/inspurai.py",
"chars": 12932,
"preview": "# 代码主要来源于 https://github.com/Shawn-Inspur/Yuan-1.0/blob/main/yuan_api/inspurai.py\n\nimport hashlib\nimport json\nimport os\n"
},
{
"path": "modules/models/midjourney.py",
"chars": 15704,
"preview": "import base64\nimport io\nimport json\nimport logging\nimport os\nimport pathlib\nimport tempfile\nimport time\nfrom datetime im"
},
{
"path": "modules/models/minimax.py",
"chars": 6033,
"preview": "import json\nimport os\n\nimport colorama\nimport requests\nimport logging\n\nfrom modules.models.base_model import BaseLLMMode"
},
{
"path": "modules/models/modeling_moss.py",
"chars": 30187,
"preview": "\"\"\" PyTorch Moss model.\"\"\"\n\nfrom typing import Optional, Tuple, Union\n\nimport torch\nimport torch.utils.checkpoint\nfrom t"
},
{
"path": "modules/models/models.py",
"chars": 9942,
"preview": "from __future__ import annotations\n\nimport logging\nimport os\n\nimport colorama\nimport commentjson as cjson\n\nfrom modules "
},
{
"path": "modules/models/spark.py",
"chars": 5115,
"preview": "import _thread as thread\nimport base64\nimport datetime\nimport hashlib\nimport hmac\nimport json\nfrom collections import de"
},
{
"path": "modules/models/tokenization_moss.py",
"chars": 14778,
"preview": "\"\"\"Tokenization classes for Moss\"\"\"\n\nimport json\nimport os\nimport numpy as np\nimport regex as re\n\nfrom functools import "
},
{
"path": "modules/overwrites.py",
"chars": 20939,
"preview": "from __future__ import annotations\n\nimport gradio as gr\nimport multipart\nfrom multipart.multipart import MultipartState,"
},
{
"path": "modules/pdf_func.py",
"chars": 5997,
"preview": "from types import SimpleNamespace\nimport pdfplumber\nimport logging\nfrom langchain.docstore.document import Document\n\ndef"
},
{
"path": "modules/presets.py",
"chars": 23065,
"preview": "# -*- coding:utf-8 -*-\nimport os\nfrom pathlib import Path\nimport gradio as gr\nfrom .webui_locale import I18nAuto\n\ni18n ="
},
{
"path": "modules/repo.py",
"chars": 11536,
"preview": "# -*- coding:utf-8 -*-\nimport os\nimport sys\nimport subprocess\nfrom functools import lru_cache\nimport logging\nimport grad"
},
{
"path": "modules/shared.py",
"chars": 2702,
"preview": "from modules.presets import CHAT_COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST, OPENAI_API_BASE, IMAGES_COMPL"
},
{
"path": "modules/train_func.py",
"chars": 5530,
"preview": "import os\nimport logging\nimport traceback\n\nfrom openai import OpenAI\n\nclient = OpenAI(api_key=os.getenv(\"OPENAI_API_KEY\""
},
{
"path": "modules/utils.py",
"chars": 51321,
"preview": "# -*- coding:utf-8 -*-\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, List, T"
},
{
"path": "modules/webui.py",
"chars": 3866,
"preview": "\nfrom collections import namedtuple\nimport os\nimport gradio as gr\n\nfrom . import shared\n\n# with open(\"./assets/ChuanhuCh"
},
{
"path": "modules/webui_locale.py",
"chars": 2401,
"preview": "import os\nimport locale\nimport logging\nimport commentjson as json\n\nclass I18nAuto:\n def __init__(self):\n if os"
},
{
"path": "readme/README_en.md",
"chars": 10693,
"preview": "<div align=\"right\">\n <!-- Language: -->\n <a title=\"Chinese\" href=\"../README.md\">简体中文</a> | English | <a title=\"Japanes"
},
{
"path": "readme/README_ja.md",
"chars": 8105,
"preview": "<div align=\"right\">\n <!-- Language: -->\n <a title=\"Chinese\" href=\"../README.md\">简体中文</a> | <a title=\"English\" href=\"RE"
},
{
"path": "readme/README_ko.md",
"chars": 8720,
"preview": "<div align=\"right\">\n <!-- Language: -->\n <a title=\"Chinese\" href=\"../README.md\">简体中文</a> | <a title=\"English\" href=\"R"
},
{
"path": "readme/README_ru.md",
"chars": 11048,
"preview": "<div align=\"right\">\n <!-- Language: -->\n <a title=\"Chinese\" href=\"../README.md\">简体中文</a> | <a title=\"English\" href=\"RE"
},
{
"path": "requirements.txt",
"chars": 622,
"preview": "gradio==4.29.0\ngradio_client==0.16.1\nprimp==0.5.5\npypinyin\ntiktoken\nsocksio\ntqdm\ncolorama\ngooglesearch-python\nPygments\no"
},
{
"path": "requirements_advanced.txt",
"chars": 166,
"preview": "transformers\nhuggingface_hub\ntorch\ncpm-kernels\nsentence_transformers\naccelerate\nsentencepiece\nllama-cpp-python\ntransform"
},
{
"path": "run_Linux.sh",
"chars": 503,
"preview": "#!/bin/bash\n\n# 获取脚本所在目录\nscript_dir=$(dirname \"$(readlink -f \"$0\")\")\n\n# 将工作目录更改为脚本所在目录\ncd \"$script_dir\" || exit\n\n# 检查Git仓"
},
{
"path": "run_Windows.bat",
"chars": 464,
"preview": "@echo off\necho Opening ChuanhuChatGPT...\n\nif not exist \"%~dp0\\ChuanhuChat\\Scripts\" (\n echo Creating venv...\n pytho"
},
{
"path": "run_macOS.command",
"chars": 503,
"preview": "#!/bin/bash\n\n# 获取脚本所在目录\nscript_dir=$(dirname \"$(readlink -f \"$0\")\")\n\n# 将工作目录更改为脚本所在目录\ncd \"$script_dir\" || exit\n\n# 检查Git仓"
},
{
"path": "web_assets/html/appearance_switcher.html",
"chars": 224,
"preview": "<div class=\"switch-checkbox\" id=\"apSwitch\">\n <label class=\"apSwitch\">\n <input type=\"checkbox\" id=\"apSwitch-che"
},
{
"path": "web_assets/html/billing_info.html",
"chars": 308,
"preview": "<b>{label}</b>\n<div class=\"progress-bar\">\n <div class=\"progress\" style=\"width: {usage_percent}%;\">\n <span clas"
},
{
"path": "web_assets/html/chatbot_header_btn.html",
"chars": 16314,
"preview": "<div id=\"header-btn-groups\">\n <div class=\"btn-bar-group\" style=\"margin-left: -12px; transform: scale(0.85); transform"
},
{
"path": "web_assets/html/chatbot_more.html",
"chars": 8018,
"preview": "<div>\n <div id=\"chatbot-input-more-area\">\n <span class=\"chatbot-input-more-label-group\">\n <div clas"
},
{
"path": "web_assets/html/chatbot_placeholder.html",
"chars": 544,
"preview": "<div id=\"chatbot-placeholder-pl\">\n<div id=\"chatbot-placeholder-header\">\n <img src=\"{chatbot_ph_logo}\" alt=\"avatar\" cl"
},
{
"path": "web_assets/html/close_btn.html",
"chars": 366,
"preview": "<button onclick='closeBtnClick(\"{obj}\")'>\n <svg class=\"icon-need-hover\" stroke=\"currentColor\" fill=\"none\" stroke-widt"
},
{
"path": "web_assets/html/footer.html",
"chars": 39,
"preview": "<div class=\"versions\">{versions}</div>\n"
},
{
"path": "web_assets/html/func_nav.html",
"chars": 22759,
"preview": "<div id=\"menu-footer-btn-bar\" class=\"is-gpt\">\n <div class=\"btn-bar-group\">\n <button id=\"chuanhu-setting-btn\" o"
},
{
"path": "web_assets/html/header_title.html",
"chars": 442,
"preview": "<div style=\"display:inline-flex;\">\n <button id=\"chuanhu-menu-btn\" onclick='menuClick()' class=\"chuanhu-ui-btn hover-r"
},
{
"path": "web_assets/html/update.html",
"chars": 1851,
"preview": "<div id=\"toast-update\">\n <div id=\"check-chuanhu-update\">\n <p style=\"display:none\">\n <span id=\"curre"
},
{
"path": "web_assets/html/web_config.html",
"chars": 1210,
"preview": "<div aria-label=\"config-div\" style=\"display:none;\">\n <!-- app config -->\n <div id=\"app_config\">\n <span id=\""
},
{
"path": "web_assets/javascript/ChuanhuChat.js",
"chars": 18594,
"preview": "\n// ChuanhuChat core javascript\n\nconst MAX_HISTORY_LENGTH = 32;\n\nvar key_down_history = [];\nvar currentIndex = -1;\n\nvar "
},
{
"path": "web_assets/javascript/chat-history.js",
"chars": 3800,
"preview": "\nvar historyLoaded = false;\nvar loadhistorytime = 0; // for debugging\n\n\nfunction saveHistoryHtml() {\n var historyHtml"
},
{
"path": "web_assets/javascript/chat-list.js",
"chars": 7508,
"preview": "var currentChatName = null;\nvar isChatListRecentlyEnabled = false;\n\nfunction setChatListHeader() {\n var grHistoryRefr"
},
{
"path": "web_assets/javascript/external-scripts.js",
"chars": 29,
"preview": "\n// external javascript here\n"
},
{
"path": "web_assets/javascript/fake-gradio.js",
"chars": 4310,
"preview": "\n// Fake gradio components!\n\n// buttons\nfunction newChatClick() {\n gradioApp().querySelector('#empty-btn').click();\n}"
},
{
"path": "web_assets/javascript/file-input.js",
"chars": 3930,
"preview": "\n// paste和upload部分参考:\n// https://github.com/binary-husky/gpt_academic/tree/master/themes/common.js\n// @Kilig947\n\n\nfuncti"
},
{
"path": "web_assets/javascript/localization.js",
"chars": 1609,
"preview": "\n// i18n\n\nconst language = navigator.language.slice(0,2);\n\nvar forView_i18n;\nvar deleteConfirm_i18n_pref;\nvar deleteConf"
},
{
"path": "web_assets/javascript/message-button.js",
"chars": 21336,
"preview": "\n// 为 bot 消息添加复制与切换显示按钮 以及最新消息加上重新生成,删除最新消息,嗯。\n\nfunction convertBotMessage(gradioButtonMsg) {\n return;\n // should "
},
{
"path": "web_assets/javascript/sliders.js",
"chars": 124,
"preview": "// 该功能被做到gradio的官方版本中了\n// https://github.com/gradio-app/gradio/pull/5535\n// https://github.com/gradio-app/gradio/issues/"
},
{
"path": "web_assets/javascript/updater.js",
"chars": 12397,
"preview": "var updateInfoGotten = false;\nvar isLatestVersion = localStorage.getItem('isLatestVersion') === \"true\" || false;\nvar sho"
},
{
"path": "web_assets/javascript/user-info.js",
"chars": 2147,
"preview": "\n// var userLogged = false;\nvar usernameGotten = false;\nvar usernameTmp = null;\nvar username = null;\n\n\nfunction getUserI"
},
{
"path": "web_assets/javascript/utils.js",
"chars": 5555,
"preview": "\n\nfunction isImgUrl(url) {\n const imageExtensions = /\\.(jpg|jpeg|png|gif|bmp|webp)$/i;\n if (url.startsWith('data:i"
},
{
"path": "web_assets/javascript/webui.js",
"chars": 10637,
"preview": "\nfunction openSettingBox() {\n chuanhuPopup.classList.add('showBox');\n popupWrapper.classList.add('showBox');\n s"
},
{
"path": "web_assets/manifest.json",
"chars": 666,
"preview": "{\n \"name\": \"川虎Chat\",\n \"short_name\": \"川虎Chat\",\n \"description\": \"川虎Chat - 为ChatGPT等多种LLM提供了一个轻快好用的Web图形界面和众多附加功能 "
},
{
"path": "web_assets/stylesheet/ChuanhuChat.css",
"chars": 35858,
"preview": ":root {\n --vh: 1vh;\n\n --chatbot-color-light: #000000;\n --chatbot-color-dark: #FFFFFF;\n --chatbot-background-"
},
{
"path": "web_assets/stylesheet/chatbot.css",
"chars": 10700,
"preview": "\nhr.append-display {\n margin: 8px 0 !important;\n border: none;\n height: 1px;\n border-top-width: 0 !important"
},
{
"path": "web_assets/stylesheet/custom-components.css",
"chars": 9510,
"preview": "\n/* user-info */\n#user-info.block {\n white-space: nowrap;\n position: absolute;\n right: max(32px, env(safe-area-"
},
{
"path": "web_assets/stylesheet/markdown.css",
"chars": 3078,
"preview": "\n/* list from gradio 4.26, recover what silly gradio has done*/ \n.message {\n\n --chatbot-body-text-size: 14px; /* grad"
},
{
"path": "web_assets/stylesheet/override-gradio.css",
"chars": 4140,
"preview": ".gradio-container {\n max-width: unset !important;\n padding: 0 !important;\n}\n\n/* 解决container=False时的错误填充 */\ndiv.for"
}
]
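The listing above is a JSON array of objects with `path`, `chars`, and `preview` fields. A minimal sketch of consuming it programmatically (the two inlined entries are copied from the listing; loading the array from a saved file instead is an assumption about how you obtained it):

```javascript
// Two entries copied from the condensed preview above, inlined so the
// sketch is self-contained; in practice you would JSON.parse the array.
const entries = [
    { path: "modules/utils.py", chars: 51321, preview: "# -*- coding:utf-8 -*-" },
    { path: "modules/models/base_model.py", chars: 47943, preview: "from __future__ import annotations" },
];

// Total size in characters across the listed files.
const totalChars = entries.reduce((sum, e) => sum + e.chars, 0);

// The largest file by character count.
const largest = entries.reduce((a, b) => (a.chars >= b.chars ? a : b));
```

With the two sample entries, `totalChars` is 99264 and `largest.path` is `modules/utils.py`.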
About this extraction
This page contains the full source code of the GaiZhenbiao/ChuanhuChatGPT GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 106 files (817.5 KB), approximately 237.7k tokens, and a symbol index with 594 extracted functions, classes, methods, constants, and types.
Extracted by GitExtract, built by Nikandr Surkov.