Full Code of binary-husky/gpt_academic for AI

master d6bde0fa5437 cached
418 files
6.9 MB
1.8M tokens
2251 symbols
2 requests
Download .txt
Showing preview only (7,281K chars total). Download the full file or copy to clipboard to get everything.
Repository: binary-husky/gpt_academic
Branch: master
Commit: d6bde0fa5437
Files: 418
Total size: 6.9 MB

Directory structure:
gitextract_m7znd4_k/

├── .dockerignore
├── .gitattributes
├── .gitignore
├── .pre-commit-config.yaml
├── Dockerfile
├── LICENSE
├── README.md
├── check_proxy.py
├── config.py
├── core_functional.py
├── crazy_functional.py
├── crazy_functions/
│   ├── Academic_Conversation.py
│   ├── Arxiv_Downloader.py
│   ├── Audio_Assistant.py
│   ├── Audio_Summary.py
│   ├── Commandline_Assistant.py
│   ├── Conversation_To_File.py
│   ├── Document_Conversation.py
│   ├── Document_Conversation_Wrap.py
│   ├── Document_Optimize.py
│   ├── Dynamic_Function_Generate.py
│   ├── Google_Scholar_Assistant_Legacy.py
│   ├── Helpers.py
│   ├── Image_Generate.py
│   ├── Image_Generate_Wrap.py
│   ├── Interactive_Func_Template.py
│   ├── Interactive_Mini_Game.py
│   ├── Internet_GPT.py
│   ├── Internet_GPT_Bing_Legacy.py
│   ├── Internet_GPT_Legacy.py
│   ├── Internet_GPT_Wrap.py
│   ├── Latex_Function.py
│   ├── Latex_Function_Wrap.py
│   ├── Latex_Project_Polish.py
│   ├── Latex_Project_Translate_Legacy.py
│   ├── Markdown_Translate.py
│   ├── Math_Animation_Gen.py
│   ├── Mermaid_Figure_Gen.py
│   ├── Multi_Agent_Legacy.py
│   ├── Multi_LLM_Query.py
│   ├── PDF_QA.py
│   ├── PDF_Summary.py
│   ├── PDF_Translate.py
│   ├── PDF_Translate_Nougat.py
│   ├── PDF_Translate_Wrap.py
│   ├── Paper_Abstract_Writer.py
│   ├── Paper_Reading.py
│   ├── Program_Comment_Gen.py
│   ├── Rag_Interface.py
│   ├── Social_Helper.py
│   ├── SourceCode_Analyse.py
│   ├── SourceCode_Analyse_JupyterNotebook.py
│   ├── SourceCode_Comment.py
│   ├── SourceCode_Comment_Wrap.py
│   ├── Vectorstore_QA.py
│   ├── VideoResource_GPT.py
│   ├── Void_Terminal.py
│   ├── Word_Summary.py
│   ├── __init__.py
│   ├── agent_fns/
│   │   ├── auto_agent.py
│   │   ├── echo_agent.py
│   │   ├── general.py
│   │   ├── persistent.py
│   │   ├── pipe.py
│   │   ├── python_comment_agent.py
│   │   ├── python_comment_compare.html
│   │   └── watchdog.py
│   ├── ast_fns/
│   │   └── comment_remove.py
│   ├── crazy_utils.py
│   ├── diagram_fns/
│   │   └── file_tree.py
│   ├── doc_fns/
│   │   ├── AI_review_doc.py
│   │   ├── __init__.py
│   │   ├── batch_file_query_doc.py
│   │   ├── content_folder.py
│   │   ├── conversation_doc/
│   │   │   ├── excel_doc.py
│   │   │   ├── html_doc.py
│   │   │   ├── markdown_doc.py
│   │   │   ├── pdf_doc.py
│   │   │   ├── txt_doc.py
│   │   │   ├── word2pdf.py
│   │   │   └── word_doc.py
│   │   ├── read_fns/
│   │   │   ├── __init__.py
│   │   │   ├── docx_reader.py
│   │   │   ├── excel_reader.py
│   │   │   ├── markitdown/
│   │   │   │   └── markdown_reader.py
│   │   │   ├── unstructured_all/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── paper_metadata_extractor.py
│   │   │   │   ├── paper_structure_extractor.py
│   │   │   │   └── unstructured_md.py
│   │   │   └── web_reader.py
│   │   └── text_content_loader.py
│   ├── game_fns/
│   │   ├── game_ascii_art.py
│   │   ├── game_interactive_story.py
│   │   └── game_utils.py
│   ├── gen_fns/
│   │   └── gen_fns_shared.py
│   ├── ipc_fns/
│   │   └── mp.py
│   ├── json_fns/
│   │   ├── pydantic_io.py
│   │   └── select_tool.py
│   ├── latex_fns/
│   │   ├── latex_actions.py
│   │   ├── latex_pickle_io.py
│   │   └── latex_toolbox.py
│   ├── live_audio/
│   │   ├── aliyunASR.py
│   │   └── audio_io.py
│   ├── media_fns/
│   │   └── get_media.py
│   ├── multi_stage/
│   │   └── multi_stage_utils.py
│   ├── paper_fns/
│   │   ├── __init__.py
│   │   ├── auto_git/
│   │   │   ├── handlers/
│   │   │   │   ├── base_handler.py
│   │   │   │   ├── code_handler.py
│   │   │   │   ├── repo_handler.py
│   │   │   │   ├── topic_handler.py
│   │   │   │   └── user_handler.py
│   │   │   ├── query_analyzer.py
│   │   │   └── sources/
│   │   │       └── github_source.py
│   │   ├── document_structure_extractor.py
│   │   ├── file2file_doc/
│   │   │   ├── __init__.py
│   │   │   ├── html_doc.py
│   │   │   ├── markdown_doc.py
│   │   │   ├── txt_doc.py
│   │   │   ├── word2pdf.py
│   │   │   └── word_doc.py
│   │   ├── github_search.py
│   │   ├── journal_paper_recom.py
│   │   ├── paper_download.py
│   │   ├── reduce_aigc.py
│   │   └── wiki/
│   │       └── wikipedia_api.py
│   ├── pdf_fns/
│   │   ├── breakdown_pdf_txt.py
│   │   ├── breakdown_txt.py
│   │   ├── parse_pdf.py
│   │   ├── parse_pdf_grobid.py
│   │   ├── parse_pdf_legacy.py
│   │   ├── parse_pdf_via_doc2x.py
│   │   ├── parse_word.py
│   │   ├── report_gen_html.py
│   │   ├── report_template.html
│   │   └── report_template_v2.html
│   ├── plugin_template/
│   │   └── plugin_class_template.py
│   ├── prompts/
│   │   └── internet.py
│   ├── rag_fns/
│   │   ├── llama_index_worker.py
│   │   ├── milvus_worker.py
│   │   ├── rag_file_support.py
│   │   └── vector_store_index.py
│   ├── review_fns/
│   │   ├── __init__.py
│   │   ├── conversation_doc/
│   │   │   ├── endnote_doc.py
│   │   │   ├── excel_doc.py
│   │   │   ├── html_doc.py
│   │   │   ├── markdown_doc.py
│   │   │   ├── reference_formatter.py
│   │   │   ├── word2pdf.py
│   │   │   └── word_doc.py
│   │   ├── data_sources/
│   │   │   ├── __init__.py
│   │   │   ├── adsabs_source.py
│   │   │   ├── arxiv_source.py
│   │   │   ├── base_source.py
│   │   │   ├── cas_if.json
│   │   │   ├── crossref_source.py
│   │   │   ├── elsevier_source.py
│   │   │   ├── github_source.py
│   │   │   ├── journal_metrics.py
│   │   │   ├── openalex_source.py
│   │   │   ├── pubmed_source.py
│   │   │   ├── scihub_source.py
│   │   │   ├── scopus_source.py
│   │   │   ├── semantic_source.py
│   │   │   └── unpaywall_source.py
│   │   ├── handlers/
│   │   │   ├── base_handler.py
│   │   │   ├── latest_handler.py
│   │   │   ├── paper_handler.py
│   │   │   ├── qa_handler.py
│   │   │   ├── recommend_handler.py
│   │   │   └── review_handler.py
│   │   ├── paper_processor/
│   │   │   └── paper_llm_ranker.py
│   │   ├── prompts/
│   │   │   ├── adsabs_prompts.py
│   │   │   ├── arxiv_prompts.py
│   │   │   ├── crossref_prompts.py
│   │   │   ├── paper_prompts.py
│   │   │   ├── pubmed_prompts.py
│   │   │   └── semantic_prompts.py
│   │   ├── query_analyzer.py
│   │   └── query_processor.py
│   ├── vector_fns/
│   │   ├── __init__.py
│   │   ├── general_file_loader.py
│   │   └── vector_database.py
│   ├── vt_fns/
│   │   ├── vt_call_plugin.py
│   │   ├── vt_modify_config.py
│   │   └── vt_state.py
│   ├── word_dfa/
│   │   └── dfa_algo.py
│   └── 高级功能函数模板.py
├── docker-compose.yml
├── docs/
│   ├── DOCUMENTATION_PLAN.md
│   ├── GithubAction+AllCapacity
│   ├── GithubAction+ChatGLM+Moss
│   ├── GithubAction+JittorLLMs
│   ├── GithubAction+NoLocal
│   ├── GithubAction+NoLocal+AudioAssistant
│   ├── GithubAction+NoLocal+Latex
│   ├── GithubAction+NoLocal+Vectordb
│   ├── README.Arabic.md
│   ├── README.English.md
│   ├── README.French.md
│   ├── README.German.md
│   ├── README.Italian.md
│   ├── README.Japanese.md
│   ├── README.Korean.md
│   ├── README.Portuguese.md
│   ├── README.Russian.md
│   ├── WindowsRun.bat
│   ├── WithFastapi.md
│   ├── customization/
│   │   ├── custom_buttons.md
│   │   ├── plugin_development.md
│   │   └── theme_customization.md
│   ├── deployment/
│   │   ├── cloud_deploy.md
│   │   ├── docker.md
│   │   └── reverse_proxy.md
│   ├── features/
│   │   ├── academic/
│   │   │   ├── arxiv_download.md
│   │   │   ├── arxiv_translation.md
│   │   │   ├── batch_file_query.md
│   │   │   ├── google_scholar.md
│   │   │   ├── latex_polish.md
│   │   │   ├── latex_proofread.md
│   │   │   ├── paper_reading.md
│   │   │   ├── pdf_nougat.md
│   │   │   ├── pdf_qa.md
│   │   │   ├── pdf_summary.md
│   │   │   ├── pdf_translation.md
│   │   │   ├── tex_abstract.md
│   │   │   └── word_summary.md
│   │   ├── agents/
│   │   │   ├── code_interpreter.md
│   │   │   └── void_terminal.md
│   │   ├── basic_functions.md
│   │   ├── basic_operations.md
│   │   ├── conversation/
│   │   │   ├── conversation_save.md
│   │   │   ├── image_generation.md
│   │   │   ├── internet_search.md
│   │   │   ├── mermaid_gen.md
│   │   │   ├── multi_model_query.md
│   │   │   └── voice_assistant.md
│   │   └── programming/
│   │       ├── batch_comment_gen.md
│   │       ├── code_analysis.md
│   │       ├── code_comment.md
│   │       ├── custom_code_analysis.md
│   │       ├── jupyter_analysis.md
│   │       └── markdown_translate.md
│   ├── get_started/
│   │   ├── configuration.md
│   │   ├── installation.md
│   │   └── quickstart.md
│   ├── index.md
│   ├── javascripts/
│   │   ├── animations.js
│   │   ├── code-copy.js
│   │   ├── code-zoom.js
│   │   ├── nav-scroll-fix.js
│   │   ├── responsive.js
│   │   ├── search-fix.js
│   │   └── tabbed-code.js
│   ├── models/
│   │   ├── azure.md
│   │   ├── custom_models.md
│   │   ├── local_models.md
│   │   ├── openai.md
│   │   ├── overview.md
│   │   └── transit_api.md
│   ├── plugin_with_secondary_menu.md
│   ├── reference/
│   │   ├── changelog.md
│   │   └── config_reference.md
│   ├── requirements.txt
│   ├── self_analysis.md
│   ├── stylesheets/
│   │   ├── animations.css
│   │   ├── code-enhancements.css
│   │   ├── feature-cards.css
│   │   ├── flowchart.css
│   │   ├── jupyter-simple.css
│   │   ├── mermaid.css
│   │   ├── mkdocstrings.css
│   │   ├── nav-scroll-fix.css
│   │   ├── readability-enhancements.css
│   │   ├── responsive.css
│   │   ├── syntax-highlight.css
│   │   ├── tabbed-code.css
│   │   ├── table-enhancements.css
│   │   └── workflow.css
│   ├── translate_english.json
│   ├── translate_japanese.json
│   ├── translate_std.json
│   ├── translate_traditionalchinese.json
│   ├── troubleshooting/
│   │   ├── faq.md
│   │   ├── model_errors.md
│   │   └── network_issues.md
│   ├── use_audio.md
│   ├── use_azure.md
│   ├── use_tts.md
│   └── use_vllm.md
├── main.py
├── mkdocs.yml
├── multi_language.py
├── request_llms/
│   ├── README.md
│   ├── bridge_all.py
│   ├── bridge_chatglm.py
│   ├── bridge_chatglm3.py
│   ├── bridge_chatglm4.py
│   ├── bridge_chatglmft.py
│   ├── bridge_chatglmonnx.py
│   ├── bridge_chatgpt.py
│   ├── bridge_chatgpt_vision.py
│   ├── bridge_claude.py
│   ├── bridge_cohere.py
│   ├── bridge_deepseekcoder.py
│   ├── bridge_google_gemini.py
│   ├── bridge_internlm.py
│   ├── bridge_jittorllms_llama.py
│   ├── bridge_jittorllms_pangualpha.py
│   ├── bridge_jittorllms_rwkv.py
│   ├── bridge_llama2.py
│   ├── bridge_moonshot.py
│   ├── bridge_moss.py
│   ├── bridge_newbingfree.py
│   ├── bridge_ollama.py
│   ├── bridge_openrouter.py
│   ├── bridge_qianfan.py
│   ├── bridge_qwen.py
│   ├── bridge_qwen_local.py
│   ├── bridge_skylark2.py
│   ├── bridge_spark.py
│   ├── bridge_stackclaude.py
│   ├── bridge_taichu.py
│   ├── bridge_tgui.py
│   ├── bridge_zhipu.py
│   ├── chatglmoonx.py
│   ├── com_google.py
│   ├── com_qwenapi.py
│   ├── com_skylark2api.py
│   ├── com_sparkapi.py
│   ├── com_taichu.py
│   ├── com_zhipuglm.py
│   ├── edge_gpt_free.py
│   ├── embed_models/
│   │   ├── bge_llm.py
│   │   ├── bridge_all_embed.py
│   │   └── openai_embed.py
│   ├── key_manager.py
│   ├── local_llm_class.py
│   ├── oai_std_model_template.py
│   ├── queued_pipe.py
│   ├── requirements_chatglm.txt
│   ├── requirements_chatglm4.txt
│   ├── requirements_chatglm_onnx.txt
│   ├── requirements_jittorllms.txt
│   ├── requirements_moss.txt
│   ├── requirements_newbing.txt
│   ├── requirements_qwen.txt
│   ├── requirements_qwen_local.txt
│   └── requirements_slackclaude.txt
├── requirements.txt
├── shared_utils/
│   ├── advanced_markdown_format.py
│   ├── char_visual_effect.py
│   ├── colorful.py
│   ├── config_loader.py
│   ├── connect_void_terminal.py
│   ├── context_clip_policy.py
│   ├── cookie_manager.py
│   ├── doc_loader_dynamic.py
│   ├── docker_as_service_api.py
│   ├── fastapi_server.py
│   ├── fastapi_stream_server.py
│   ├── handle_upload.py
│   ├── key_pattern_manager.py
│   ├── logging.py
│   ├── map_names.py
│   ├── nltk_downloader.py
│   └── text_mask.py
├── tests/
│   ├── __init__.py
│   ├── init_test.py
│   ├── test_academic_conversation.py
│   ├── test_anim_gen.py
│   ├── test_bilibili_down.py
│   ├── test_doc2x.py
│   ├── test_embed.py
│   ├── test_key_pattern_manager.py
│   ├── test_latex_auto_correct.py
│   ├── test_llms.py
│   ├── test_markdown.py
│   ├── test_markdown_format.py
│   ├── test_media.py
│   ├── test_plugins.py
│   ├── test_python_auto_docstring.py
│   ├── test_rag.py
│   ├── test_safe_pickle.py
│   ├── test_save_chat_to_html.py
│   ├── test_searxng.py
│   ├── test_social_helper.py
│   ├── test_tts.py
│   ├── test_utils.py
│   └── test_vector_plugins.py
├── themes/
│   ├── base64.mjs
│   ├── common.css
│   ├── common.js
│   ├── common.py
│   ├── contrast.css
│   ├── contrast.py
│   ├── cookies.py
│   ├── default.css
│   ├── default.py
│   ├── gradios.py
│   ├── green.css
│   ├── green.js
│   ├── green.py
│   ├── gui_advanced_plugin_class.py
│   ├── gui_floating_menu.py
│   ├── gui_toolbar.py
│   ├── init.js
│   ├── theme.js
│   ├── theme.py
│   ├── tts.js
│   ├── waifu_plugin/
│   │   ├── autoload.js
│   │   ├── live2d.js
│   │   ├── source
│   │   ├── waifu-tips.js
│   │   ├── waifu-tips.json
│   │   └── waifu.css
│   └── welcome.js
├── toolbox.py
└── version

================================================
FILE CONTENTS
================================================

================================================
FILE: .dockerignore
================================================
.venv
.github
.vscode
gpt_log
tests
README.md


================================================
FILE: .gitattributes
================================================
*.h linguist-detectable=false
*.cpp linguist-detectable=false
*.tex linguist-detectable=false
*.cs linguist-detectable=false
*.tps linguist-detectable=false


================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot
github
.github
TEMP
TRASH

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# macOS files
.DS_Store

.vscode
.idea

history
ssr_conf
config_private.py
gpt_log
private.md
private_upload
other_llms
cradle*
debug*
private*
crazy_functions/test_project/pdf_and_word
crazy_functions/test_samples
request_llms/jittorllms
multi-language
request_llms/moss
media
flagged
request_llms/ChatGLM-6b-onnx-u8s8
test.*
temp.*
objdump*
*.min.*.js
TODO
experimental_mods
search_results
gg.docx
unstructured_reader.py
wandb


================================================
FILE: .pre-commit-config.yaml
================================================
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
      - id: check-ast
      - id: check-json
      - id: check-merge-conflict
      - id: detect-private-key

  - repo: https://github.com/myint/autoflake
    rev: v2.2.0
    hooks:
      - id: autoflake
        args: [
          --in-place,
          --remove-all-unused-imports,
          --ignore-init-module-imports
        ]

  # - repo: https://github.com/pre-commit/mirrors-mypy
  #   rev: v1.7.0
  #   hooks:
  #     - id: mypy
  #       args: [
  #         --ignore-missing-imports,
  #         --disable-error-code=var-annotated,
  #         --disable-error-code=union-attr,
  #         --disable-error-code=no-redef,
  #         --disable-error-code=assignment,
  #         --disable-error-code=has-type,
  #         --disable-error-code=attr-defined,
  #         --disable-error-code=import-untyped,
  #         --disable-error-code=truthy-function,
  #         --follow-imports=skip,
  #         --explicit-package-bases,
  #       ]


================================================
FILE: Dockerfile
================================================
# 此Dockerfile适用于“无本地模型”的迷你运行环境构建
# 如果需要使用chatglm等本地模型或者latex运行依赖,请参考 docker-compose.yml
# - 如何构建: 先修改 `config.py`, 然后 `docker build -t gpt-academic . `
# - 如何运行(Linux下): `docker run --rm -it --net=host gpt-academic `
# - 如何运行(其他操作系统,选择任意一个固定端口50923): `docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic `

FROM ghcr.io/astral-sh/uv:python3.12-bookworm

# 非必要步骤,更换pip源 (以下三行,可以删除)
RUN echo '[global]' > /etc/pip.conf && \
    echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
    echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf

# 语音输出功能(以下1,2行更换阿里源,第3,4行安装ffmpeg,都可以删除)
RUN sed -i 's/deb.debian.org/mirrors.aliyun.com/g' /etc/apt/sources.list.d/debian.sources && \
    sed -i 's/security.debian.org/mirrors.aliyun.com/g' /etc/apt/sources.list.d/debian.sources && \
    apt-get update
RUN apt-get install ffmpeg -y
RUN apt-get clean

# 进入工作路径(必要)
WORKDIR /gpt

# 安装大部分依赖,利用Docker缓存加速以后的构建 (以下两行,可以删除)
COPY requirements.txt ./
RUN uv venv --python=3.12 && uv pip install --verbose -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
ENV PATH="/gpt/.venv/bin:$PATH"
RUN python -c 'import loguru'

# 装载项目文件,安装剩余依赖(必要)
COPY . .
RUN uv pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/

# # 非必要步骤,用于预热模块(可以删除)
RUN python -c 'from check_proxy import warm_up_modules; warm_up_modules()'

ENV CGO_ENABLED=0

# 启动(必要)
CMD ["bash", "-c", "python main.py"]


================================================
FILE: LICENSE
================================================
                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU General Public License is a free, copyleft license for
software and other kinds of works.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.  We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors.  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights.  Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received.  You must make sure that they, too, receive
or can get the source code.  And you must show them these terms so they
know their rights.

  Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

  For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software.  For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

  Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so.  This is fundamentally incompatible with the aim of
protecting users' freedom to change the software.  The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable.  Therefore, we
have designed this version of the GPL to prohibit the practice for those
products.  If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

  Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary.  To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Use with the GNU Affero General Public License.

  Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work.  The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation.  If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.

  If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

  Later license versions may give you additional or different
permissions.  However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

  If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

  If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

    <program>  Copyright (C) <year>  <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".

  You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.

  The GNU General Public License does not permit incorporating your program
into proprietary programs.  If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library.  If this is what you want to do, use the GNU Lesser General
Public License instead of this License.  But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.


================================================
FILE: README.md
================================================
> [!IMPORTANT]
>
> `master主分支`最新动态(2026.1.25): 新GUI前端测试中,Coming Soon<br/>
> `master主分支`最新动态(2025.8.23): Dockerfile构建效率大幅优化<br/>
> 2025.2.2: 三分钟快速接入最强qwen2.5-max[视频](https://www.bilibili.com/video/BV1LeFuerEG4)<br/>
> 2025.2.1: 支持自定义字体<br/>
> 2024.10.10: 突发停电,紧急恢复了提供[whl包](https://drive.google.com/drive/folders/14kR-3V-lIbvGxri4AHc8TpiA1fqsw7SK?usp=sharing)的文件服务器<br/>
> 2024.5.1: 加入Doc2x翻译PDF论文的功能,[查看详情](https://github.com/binary-husky/gpt_academic/wiki/Doc2x)<br/>
> 2024.3.11: 全力支持Qwen、GLM、DeepseekCoder等中文大语言模型! SoVits语音克隆模块,[查看详情](https://www.bilibili.com/video/BV1Rp421S7tF/)<br/>
> 2024.1.17: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。<br/>

<br>

<div align=center>
<h1 aligh="center">
<img src="docs/logo.png" width="40"> GPT 学术优化 (GPT Academic)
</h1>

[![Github][Github-image]][Github-url]
[![License][License-image]][License-url]
[![Releases][Releases-image]][Releases-url]
[![Installation][Installation-image]][Installation-url]
[![Wiki][Wiki-image]][Wiki-url]
[![PR][PRs-image]][PRs-url]

[Github-image]: https://img.shields.io/badge/github-12100E.svg?style=flat-square
[License-image]: https://img.shields.io/github/license/binary-husky/gpt_academic?label=License&style=flat-square&color=orange
[Releases-image]: https://img.shields.io/github/release/binary-husky/gpt_academic?label=Release&style=flat-square&color=blue
[Installation-image]: https://img.shields.io/badge/dynamic/json?color=blue&url=https://raw.githubusercontent.com/binary-husky/gpt_academic/master/version&query=$.version&label=Installation&style=flat-square
[Wiki-image]: https://img.shields.io/badge/wiki-项目文档-black?style=flat-square
[PRs-image]: https://img.shields.io/badge/PRs-welcome-pink?style=flat-square

[Github-url]: https://github.com/binary-husky/gpt_academic
[License-url]: https://github.com/binary-husky/gpt_academic/blob/master/LICENSE
[Releases-url]: https://github.com/binary-husky/gpt_academic/releases
[Installation-url]: https://github.com/binary-husky/gpt_academic#installation
[Wiki-url]: https://github.com/binary-husky/gpt_academic/wiki
[PRs-url]: https://github.com/binary-husky/gpt_academic/pulls


</div>
<br>

**如果喜欢这个项目,请给它一个Star;如果您发明了好用的快捷键或插件,欢迎发pull requests!**

If you like this project, please give it a Star.
Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanese.md) | [한국어](docs/README.Korean.md) | [Русский](docs/README.Russian.md) | [Français](docs/README.French.md). All translations have been provided by the project itself. To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
<br>

> [!NOTE]
> 1.本项目中每个文件的功能都在[自译解报告](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告)`self_analysis.md`详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题请查阅wiki。
>    [![常规安装方法](https://img.shields.io/static/v1?label=&message=常规安装方法&color=gray)](#installation)  [![一键安装脚本](https://img.shields.io/static/v1?label=&message=一键安装脚本&color=gray)](https://github.com/binary-husky/gpt_academic/releases)  [![配置说明](https://img.shields.io/static/v1?label=&message=配置说明&color=gray)](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) [![wiki](https://img.shields.io/static/v1?label=&message=wiki&color=gray)]([https://github.com/binary-husky/gpt_academic/wiki/项目配置说明](https://github.com/binary-husky/gpt_academic/wiki))
>
> 2.本项目兼容并鼓励尝试国内中文大语言基座模型如通义千问,智谱GLM等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交即可生效。

<br><br>

<div align="center">

功能(⭐= 近期新增功能) | 描述
--- | ---
⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, 通义千问[Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),[智谱GLM4](https://open.bigmodel.cn/),DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
⭐支持mermaid图像渲染 | 支持让GPT生成[流程图](https://www.bilibili.com/video/BV18c41147H9/)、状态转移图、甘特图、饼状图、GitGraph等等(3.7版本)
⭐Arxiv论文精细翻译 ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [插件] 一键[以超高质量翻译arxiv论文](https://www.bilibili.com/video/BV1dz4y1v77A/),目前最好的论文翻译工具
⭐[实时语音对话输入](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [插件] 异步[监听音频](https://www.bilibili.com/video/BV1AV4y187Uy/),自动断句,自动寻找回答时机
⭐虚空终端插件 | [插件] 能够使用自然语言直接调度本项目其他插件
润色、翻译、代码解释 | 一键润色、翻译、查找论文语法错误、解释代码
[自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
模块化设计 | 支持自定义强大的[插件](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
[程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [插件] 一键剖析Python/C/C++/Java/Lua/...项目树 或 [自我剖析](https://www.bilibili.com/video/BV1cj411A7VW)
读论文、[翻译](https://www.bilibili.com/video/BV1KT411x7Wn)论文 | [插件] 一键解读latex/pdf论文全文并生成摘要
Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [插件] 一键翻译或润色latex论文
批量注释生成 | [插件] 一键批量生成函数注释
Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README.English.md)了吗?就是出自他的手笔
[PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [插件] PDF论文提取题目&摘要+翻译全文(多线程)
[Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼写纠错+输出对照PDF
[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
互联网信息聚合+GPT | [插件] 一键[让GPT从互联网获取信息](https://www.bilibili.com/video/BV1om4y127ck)回答问题,让信息永不过时
公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
启动暗色[主题](https://github.com/binary-husky/gpt_academic/issues/173) | 在浏览器url后面添加```/?__theme=dark```可以切换dark主题
[多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)伺候的感觉一定会很不错吧?
更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip包 | 脱离GUI,在Python中直接调用本项目的所有函数插件(开发中)
更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……
</div>


- 新界面(修改`config.py`中的LAYOUT选项即可实现“左右布局”和“上下布局”的切换)
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/279702205-d81137c3-affd-4cd1-bb5e-b15610389762.gif" width="700" >
</div>

<div align="center">
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/70ff1ec5-e589-4561-a29e-b831079b37fb.gif" width="700" >
</div>


- 所有按钮都通过读取functional.py动态生成,可随意加自定义功能,解放剪贴板
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
</div>

- 润色/纠错
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
</div>

- 如果输出包含公式,会以tex形式和渲染形式同时显示,方便复制和阅读
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
</div>

- 懒得看项目代码?直接把整个工程炫ChatGPT嘴里
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
</div>

- 多种大语言模型混合调用(ChatGLM + OpenAI-GPT3.5 + GPT4)
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
</div>

<br><br>

# Installation

```mermaid
flowchart TD
    A{"安装方法"} --> W1("I 🔑直接运行 (Windows, Linux or MacOS)")
    W1 --> W11["1 Python pip包管理依赖"]
    W1 --> W12["2 Anaconda包管理依赖(推荐⭐)"]

    A --> W2["II 🐳使用Docker (Windows, Linux or MacOS)"]

    W2 --> k1["1 部署项目全部能力的大镜像(推荐⭐)"]
    W2 --> k2["2 仅在线模型(GPT, GLM4等)镜像"]
    W2 --> k3["3 在线模型 + Latex的大镜像"]

    A --> W4["IV 🚀其他部署方法"]
    W4 --> C1["1 Windows/MacOS 一键安装运行脚本(推荐⭐)"]
    W4 --> C2["2 Huggingface, Sealos远程部署"]
    W4 --> C4["3 其他 ..."]
```

### 安装方法I:直接运行 (Windows, Linux or MacOS)

1. 下载项目

    ```sh
    git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
    cd gpt_academic
    ```

2. 配置API_KEY等变量

    在`config.py`中,配置API KEY等变量。[特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1)、[Wiki-项目配置说明](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。

    「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解以上读取逻辑,我们强烈建议您在`config.py`同路径下创建一个名为`config_private.py`的新配置文件,并使用`config_private.py`配置项目,从而确保自动更新时不会丢失配置 」。

    「 支持通过`环境变量`配置项目,环境变量的书写格式参考`docker-compose.yml`文件或者我们的[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。配置读取优先级: `环境变量` > `config_private.py` > `config.py` 」。


3. 安装依赖
    ```sh
    # (选择I: 如熟悉python, python推荐版本 3.9 ~ 3.11)备注:使用官方pip源或者阿里pip源, 临时换源方法:python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
    python -m pip install -r requirements.txt

    # (选择II: 使用Anaconda)步骤也是类似的 (https://www.bilibili.com/video/BV1rc411W7Dr):
    conda create -n gptac_venv python=3.11    # 创建anaconda环境
    conda activate gptac_venv                 # 激活anaconda环境
    python -m pip install -r requirements.txt # 这个步骤和pip安装一样的步骤

    # (选择III: 使用uv):
    uv venv --python=3.11   # 创建虚拟环境
    source ./.venv/bin/activate # 激活虚拟环境
    uv pip install --verbose -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ # 安装依赖
    ```


<details><summary>如果需要支持清华ChatGLM系列/复旦MOSS/RWKV作为后端,请点击展开此处</summary>
<p>

【可选步骤】如果需要支持清华ChatGLM系列/复旦MOSS作为后端,需要额外安装更多依赖(前提条件:熟悉Python + 用过Pytorch + 电脑配置够强):

```sh
# 【可选步骤I】支持清华ChatGLM3。清华ChatGLM备注:如果遇到"Call ChatGLM fail 不能正常加载ChatGLM的参数" 错误,参考如下: 1:以上默认安装的为torch+cpu版,使用cuda需要卸载torch重新安装torch+cuda; 2:如因本机配置不够无法加载模型,可以修改request_llm/bridge_chatglm.py中的模型精度, 将 AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) 都修改为 AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt

# 【可选步骤II】支持清华ChatGLM4 注意:此模型至少需要24G显存
python -m pip install -r request_llms/requirements_chatglm4.txt
# 可使用modelscope下载ChatGLM4模型
# pip install modelscope
# modelscope download --model ZhipuAI/glm-4-9b-chat --local_dir ./THUDM/glm-4-9b-chat

# 【可选步骤III】支持复旦MOSS
python -m pip install -r request_llms/requirements_moss.txt
git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llms/moss  # 注意执行此行代码时,必须处于项目根路径

# 【可选步骤IV】支持RWKV Runner
参考wiki:https://github.com/binary-husky/gpt_academic/wiki/%E9%80%82%E9%85%8DRWKV-Runner

# 【可选步骤V】确保config.py配置文件的AVAIL_LLM_MODELS包含了期望的模型,目前支持的全部模型如下(jittorllms系列目前仅支持docker方案):
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]

# 【可选步骤VI】支持本地模型INT8,INT4量化(这里所指的模型本身不是量化版本,目前deepseek-coder支持,后面测试后会加入更多模型量化选择)
pip install bitsandbyte
# windows用户安装bitsandbytes需要使用下面bitsandbytes-windows-webui
python -m pip install bitsandbytes --prefer-binary --extra-index-url=https://jllllll.github.io/bitsandbytes-windows-webui
pip install -U git+https://github.com/huggingface/transformers.git
pip install -U git+https://github.com/huggingface/accelerate.git
pip install peft
```

</p>
</details>



4. 运行
    ```sh
    python main.py
    ```

### 安装方法II:使用Docker

0. 部署项目的全部能力(这个是包含cuda和latex的大型镜像。但如果您网速慢、硬盘小,则不推荐该方法部署完整项目)
[![fullcapacity](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml)

    ``` sh
    # 修改docker-compose.yml,保留方案0并删除其他方案。然后运行:
    docker-compose up
    ```

1. 仅ChatGPT + GLM4 + 文心一言+spark等在线模型(推荐大多数人选择)
[![basic](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
[![basiclatex](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
[![basicaudio](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)

    ``` sh
    # 修改docker-compose.yml,保留方案1并删除其他方案。然后运行:
    docker-compose up
    ```

P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以直接使用方案4或者方案0获取Latex功能。

2. ChatGPT + GLM3 + MOSS + LLAMA2 + 通义千问(需要熟悉[Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)运行时)
[![chatglm](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml)

    ``` sh
    # 修改docker-compose.yml,保留方案2并删除其他方案。然后运行:
    docker-compose up
    ```


### 安装方法III:其他部署方法
1. **Windows一键运行脚本**。
完全不熟悉python环境的Windows用户可以下载[Release](https://github.com/binary-husky/gpt_academic/releases)中发布的一键运行脚本安装无本地模型的版本。脚本贡献来源:[oobabooga](https://github.com/oobabooga/one-click-installers)。

2. 使用第三方API、Azure等、文心一言、星火等,见[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)

3. 云服务器远程部署避坑指南。
请访问[云服务器远程部署wiki](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

4. 在其他平台部署&二级网址部署
    - 使用Sealos[一键部署](https://github.com/binary-husky/gpt_academic/issues/993)。
    - 使用WSL2(Windows Subsystem for Linux 子系统)。请访问[部署wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
    - 如何在二级网址(如`http://localhost/subpath`)下运行。请访问[FastAPI运行说明](docs/WithFastapi.md)

<br><br>

# Advanced Usage
### I:自定义新的便捷按钮(学术快捷键)

现在已可以通过UI中的`界面外观`菜单中的`自定义菜单`添加新的便捷按钮。如果需要在代码中定义,请使用任意文本编辑器打开`core_functional.py`,添加如下条目即可:

```python
"超级英译中": {
    # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
    "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n",

    # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来。
    "Suffix": "",
},
```

<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
</div>

### II:自定义函数插件
编写强大的函数插件来执行任何你想得到的和想不到的任务。
本项目的插件编写、调试难度很低,只要您具备一定的python基础知识,就可以仿照我们提供的模板实现自己的插件功能。
详情请参考[函数插件指南](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)。

<br><br>

# Updates
### I:动态

1. 对话保存功能。在函数插件区调用 `保存当前的对话` 即可将当前对话保存为可读+可复原的html文件,
另外在函数插件区(下拉菜单)调用 `载入对话历史存档` ,即可还原之前的会话。
Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史html存档缓存。
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500" >
</div>

2. ⭐Latex/Arxiv论文翻译功能⭐
<div align="center">
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/002a1a75-ace0-4e6a-94e2-ec1406a746f1" height="250" > ===>
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/9fdcc391-f823-464f-9322-f8719677043b" height="250" >
</div>

3. 虚空终端(从自然语言输入中,理解用户意图+自动调用其他插件)

- 步骤一:输入 “ 请调用插件翻译PDF论文,地址为https://openreview.net/pdf?id=rJl0r3R9KX ”
- 步骤二:点击“虚空终端”

<div align="center">
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/66f1b044-e9ff-4eed-9126-5d4f3668f1ed" width="500" >
</div>

4. 模块化功能设计,简单的接口却能支持强大的功能
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
</div>

5. 译解其他开源项目
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" height="250" >
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" height="250" >
</div>

6. 装饰[live2d](https://github.com/fghrsh/live2d_demo)的小功能(默认关闭,需要修改`config.py`)
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500" >
</div>

7. OpenAI图像生成
<div align="center">
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
</div>

8. 基于mermaid的流图、脑图绘制
<div align="center">
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/c518b82f-bd53-46e2-baf5-ad1b081c1da4" width="500" >
</div>

9. Latex全文校对纠错
<div align="center">
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" height="200" > ===>
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/476f66d9-7716-4537-b5c1-735372c25adb" height="200">
</div>

10. 语言、主题切换
<div align="center">
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/b6799499-b6fb-4f0c-9c8e-1b441872f4e8" width="500" >
</div>



### II:版本:
- version 3.80(TODO): 优化AutoGen插件主题并设计一系列衍生插件
- version 3.70: 引入Mermaid绘图,实现GPT画脑图等功能
- version 3.60: 引入AutoGen作为新一代插件的基石
- version 3.57: 支持GLM3,星火v3,文心一言v4,修复本地模型的并发BUG
- version 3.56: 支持动态追加基础功能按钮,新汇报PDF汇总页面
- version 3.55: 重构前端界面,引入悬浮窗口与菜单栏
- version 3.54: 新增动态代码解释器(Code Interpreter)(待完善)
- version 3.53: 支持动态选择不同界面主题,提高稳定性&解决多用户冲突问题
- version 3.50: 使用自然语言调用本项目的所有函数插件(虚空终端),支持插件分类,改进UI,设计新主题
- version 3.49: 支持百度千帆平台和文心一言
- version 3.48: 支持阿里达摩院通义千问,上海AI-Lab书生,讯飞星火
- version 3.46: 支持完全脱手操作的实时语音对话
- version 3.45: 支持自定义ChatGLM2微调模型
- version 3.44: 正式支持Azure,优化界面易用性
- version 3.4: +arxiv论文翻译、latex论文批改功能
- version 3.3: +互联网信息综合功能
- version 3.2: 函数插件支持更多参数接口 (保存对话功能, 解读任意语言代码+同时询问任意的LLM组合)
- version 3.1: 支持同时问询多个gpt模型!支持api2d,支持多个apikey负载均衡
- version 3.0: 对chatglm和其他小型llm的支持
- version 2.6: 重构了插件结构,提高了交互性,加入更多插件
- version 2.5: 自更新,解决总结大工程源代码时文本过长、token溢出的问题
- version 2.4: 新增PDF全文翻译功能; 新增输入区切换位置的功能
- version 2.3: 增强多线程交互性
- version 2.2: 函数插件支持热重载
- version 2.1: 可折叠式布局
- version 2.0: 引入模块化函数插件
- version 1.0: 基础功能

GPT Academic开发者QQ群:`610599535`

- 已知问题
    - 某些浏览器翻译插件干扰此软件前端的运行
    - 官方Gradio目前有很多兼容性问题,请**务必使用`requirement.txt`安装Gradio**

```mermaid
timeline LR
    title GPT-Academic项目发展历程
    section 2.x
        1.0~2.2: 基础功能: 引入模块化函数插件: 可折叠式布局: 函数插件支持热重载
        2.3~2.5: 增强多线程交互性: 新增PDF全文翻译功能: 新增输入区切换位置的功能: 自更新
        2.6: 重构了插件结构: 提高了交互性: 加入更多插件
    section 3.x
        3.0~3.1: 对chatglm支持: 对其他小型llm支持: 支持同时问询多个gpt模型: 支持多个apikey负载均衡
        3.2~3.3: 函数插件支持更多参数接口: 保存对话功能: 解读任意语言代码: 同时询问任意的LLM组合: 互联网信息综合功能
        3.4: 加入arxiv论文翻译: 加入latex论文批改功能
        3.44: 正式支持Azure: 优化界面易用性
        3.46: 自定义ChatGLM2微调模型: 实时语音对话
        3.49: 支持阿里达摩院通义千问: 上海AI-Lab书生: 讯飞星火: 支持百度千帆平台 & 文心一言
        3.50: 虚空终端: 支持插件分类: 改进UI: 设计新主题
        3.53: 动态选择不同界面主题: 提高稳定性: 解决多用户冲突问题
        3.55: 动态代码解释器: 重构前端界面: 引入悬浮窗口与菜单栏
        3.56: 动态追加基础功能按钮: 新汇报PDF汇总页面
        3.57: GLM3, 星火v3: 支持文心一言v4: 修复本地模型的并发BUG
        3.60: 引入AutoGen
        3.70: 引入Mermaid绘图: 实现GPT画脑图等功能
        3.80(TODO): 优化AutoGen插件主题: 设计衍生插件

```


### III:主题
可以通过修改`THEME`选项(config.py)变更主题
1. `Chuanhu-Small-and-Beautiful` [网址](https://github.com/GaiZhenbiao/ChuanhuChatGPT/)


### IV:本项目的开发分支

1. `master` 分支: 主分支,稳定版
2. `frontier` 分支: 开发分支,测试版
3. 如何[接入其他大模型](request_llms/README.md)

### V:参考与学习

```
代码中参考了很多其他优秀项目中的设计,顺序不分先后:

# 清华ChatGLM2-6B:
https://github.com/THUDM/ChatGLM2-6B

# 清华JittorLLMs:
https://github.com/Jittor/JittorLLMs

# ChatPaper:
https://github.com/kaixindelele/ChatPaper

# Edge-GPT:
https://github.com/acheong08/EdgeGPT

# ChuanhuChatGPT:
https://github.com/GaiZhenbiao/ChuanhuChatGPT

# Oobabooga one-click installer:
https://github.com/oobabooga/one-click-installers

# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo
```


================================================
FILE: check_proxy.py
================================================
from loguru import logger

def check_proxy(proxies, return_ip=False):
    """
    检查代理配置并返回结果。

    Args:
        proxies (dict): 包含http和https代理配置的字典。
        return_ip (bool, optional): 是否返回代理的IP地址。默认为False。

    Returns:
        str or None: 检查的结果信息或代理的IP地址(如果`return_ip`为True)。
    """
    import requests
    proxies_https = proxies['https'] if proxies is not None else '无'
    ip = None
    try:
        response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)  # ⭐ 执行GET请求以获取代理信息
        data = response.json()
        if 'country_name' in data:
            country = data['country_name']
            result = f"代理配置 {proxies_https}, 代理所在地:{country}"
            if 'ip' in data:
                ip = data['ip']
        elif 'error' in data:
            alternative, ip = _check_with_backup_source(proxies)  # ⭐ 调用备用方法检查代理配置
            if alternative is None:
                result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限"
            else:
                result = f"代理配置 {proxies_https}, 代理所在地:{alternative}"
        else:
            result = f"代理配置 {proxies_https}, 代理数据解析失败:{data}"

        if not return_ip:
            logger.warning(result)
            return result
        else:
            return ip
    except:
        result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效"
        if not return_ip:
            logger.warning(result)
            return result
        else:
            return ip

def _check_with_backup_source(proxies):
    """
    通过备份源检查代理,并获取相应信息。

    Args:
        proxies (dict): 包含代理信息的字典。

    Returns:
        tuple: 代理信息(geo)和IP地址(ip)的元组。
    """
    import random, string, requests
    random_string = ''.join(random.choices(string.ascii_letters + string.digits, k=32))
    try:
        res_json = requests.get(f"http://{random_string}.edns.ip-api.com/json", proxies=proxies, timeout=4).json()  # ⭐ 执行代理检查和备份源请求
        return res_json['dns']['geo'], res_json['dns']['ip']
    except:
        return None, None

def backup_and_download(current_version, remote_version):
    """
    一键更新协议:备份当前版本,下载远程版本并解压缩。

    Args:
        current_version (str): 当前版本号。
        remote_version (str): 远程版本号。

    Returns:
        str: 新版本目录的路径。
    """
    from toolbox import get_conf
    import shutil
    import os
    import requests
    import zipfile
    os.makedirs(f'./history', exist_ok=True)
    backup_dir = f'./history/backup-{current_version}/'
    new_version_dir = f'./history/new-version-{remote_version}/'
    if os.path.exists(new_version_dir):
        return new_version_dir
    os.makedirs(new_version_dir)
    shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history'])
    proxies = get_conf('proxies')
    try:    r = requests.get('https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
    except: r = requests.get('https://public.agent-matrix.com/publish/master.zip', proxies=proxies, stream=True)
    zip_file_path = backup_dir+'/master.zip'  # ⭐ 保存备份文件的路径
    with open(zip_file_path, 'wb+') as f:
        f.write(r.content)
    dst_path = new_version_dir
    with zipfile.ZipFile(zip_file_path, "r") as zip_ref:
        for zip_info in zip_ref.infolist():
            dst_file_path = os.path.join(dst_path, zip_info.filename)
            if os.path.exists(dst_file_path):
                os.remove(dst_file_path)
            zip_ref.extract(zip_info, dst_path)
    return new_version_dir


def patch_and_restart(path):
    """
    一键更新协议:覆盖和重启

    Args:
        path (str): 新版本代码所在的路径

    注意事项:
        如果您的程序没有使用config_private.py私密配置文件,则会将config.py重命名为config_private.py以避免配置丢失。

    更新流程:
        - 复制最新版本代码到当前目录
        - 更新pip包依赖
        - 如果更新失败,则提示手动安装依赖库并重启
    """
    from distutils import dir_util
    import shutil
    import os
    import sys
    import time
    import glob
    from shared_utils.colorful import log亮黄, log亮绿, log亮红

    if not os.path.exists('config_private.py'):
        log亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,',
              '另外您可以随时在history子文件夹下找回旧版的程序。')
        shutil.copyfile('config.py', 'config_private.py')

    path_new_version = glob.glob(path + '/*-master')[0]
    dir_util.copy_tree(path_new_version, './')  # ⭐ 将最新版本代码复制到当前目录

    log亮绿('代码已经更新,即将更新pip包依赖……')
    for i in reversed(range(5)): time.sleep(1); log亮绿(i)

    try:
        import subprocess
        subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt'])
    except:
        log亮红('pip包依赖安装出现问题,需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')

    log亮绿('更新完成,您可以随时在history子文件夹下找回旧版的程序,5s之后重启')
    log亮红('假如重启失败,您可能需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
    log亮绿(' ------------------------------ -----------------------------------')

    for i in reversed(range(8)): time.sleep(1); log亮绿(i)
    os.execl(sys.executable, sys.executable, *sys.argv)  # 重启程序


def get_current_version():
    """
    获取当前的版本号。

    Returns:
        str: 当前的版本号。如果无法获取版本号,则返回空字符串。
    """
    import json
    try:
        with open('./version', 'r', encoding='utf8') as f:
            current_version = json.loads(f.read())['version']  # ⭐ 从读取的json数据中提取版本号
    except:
        current_version = ""
    return current_version


def auto_update(raise_error=False):
    """
    一键更新协议:查询版本和用户意见

    Args:
        raise_error (bool, optional): 是否在出错时抛出错误。默认为 False。

    Returns:
        None
    """
    try:
        from toolbox import get_conf
        import requests
        import json
        proxies = get_conf('proxies')
        try:    response = requests.get("https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5)
        except: response = requests.get("https://public.agent-matrix.com/publish/version", proxies=proxies, timeout=5)
        remote_json_data = json.loads(response.text)
        remote_version = remote_json_data['version']
        if remote_json_data["show_feature"]:
            new_feature = "新功能:" + remote_json_data["new_feature"]
        else:
            new_feature = ""
        with open('./version', 'r', encoding='utf8') as f:
            current_version = f.read()
            current_version = json.loads(current_version)['version']
        if (remote_version - current_version) >= 0.01-1e-5:
            from shared_utils.colorful import log亮黄
            log亮黄(f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}')  # ⭐ 在控制台打印新版本信息
            logger.info('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
            user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?')
            if user_instruction in ['Y', 'y']:
                path = backup_and_download(current_version, remote_version)  # ⭐ 备份并下载文件
                try:
                    patch_and_restart(path)  # ⭐ 执行覆盖并重启操作
                except:
                    msg = '更新失败。'
                    if raise_error:
                        from toolbox import trimmed_format_exc
                        msg += trimmed_format_exc()
                    logger.warning(msg)
            else:
                logger.info('自动更新程序:已禁用')
                return
        else:
            return
    except:
        msg = '自动更新程序:已禁用。建议排查:代理网络配置。'
        if raise_error:
            from toolbox import trimmed_format_exc
            msg += trimmed_format_exc()
        logger.info(msg)

def warm_up_modules():
    """
    预热模块,加载特定模块并执行预热操作。
    """
    logger.info('正在执行一些模块的预热 ...')
    from toolbox import ProxyNetworkActivate
    from request_llms.bridge_all import model_info
    with ProxyNetworkActivate("Warmup_Modules"):
        enc = model_info["gpt-3.5-turbo"]['tokenizer']
        enc.encode("模块预热", disallowed_special=())
        enc = model_info["gpt-4"]['tokenizer']
        enc.encode("模块预热", disallowed_special=())
        try_warm_up_vectordb()


# def try_warm_up_vectordb():
#     try:
#         import os
#         import nltk
#         target = os.path.expanduser('~/nltk_data')
#         logger.info(f'模块预热: nltk punkt (从Github下载部分文件到 {target})')
#         nltk.data.path.append(target)
#         nltk.download('punkt', download_dir=target)
#         logger.info('模块预热完成: nltk punkt')
#     except:
#         logger.exception('模块预热: nltk punkt 失败,可能需要手动安装 nltk punkt')
#         logger.error('模块预热: nltk punkt 失败,可能需要手动安装 nltk punkt')


def try_warm_up_vectordb():
    import os
    import nltk
    target = os.path.expanduser('~/nltk_data')
    nltk.data.path.append(target)
    try:
        # 尝试加载 punkt
        logger.info(f'nltk模块预热')
        nltk.data.find('tokenizers/punkt')
        nltk.data.find('tokenizers/punkt_tab')
        nltk.data.find('taggers/averaged_perceptron_tagger_eng')
        logger.info('nltk模块预热完成(读取本地缓存)')
    except:
        # 如果找不到,则尝试下载
        try:
            logger.info(f'模块预热: nltk punkt (从 Github 下载部分文件到 {target})')
            from shared_utils.nltk_downloader import Downloader
            _downloader = Downloader()
            _downloader.download('punkt', download_dir=target)
            _downloader.download('punkt_tab', download_dir=target)
            _downloader.download('averaged_perceptron_tagger_eng', download_dir=target)
            logger.info('nltk模块预热完成')
        except Exception:
            logger.exception('模块预热: nltk punkt 失败,可能需要手动安装 nltk punkt')


def warm_up_vectordb():
    """
    执行一些模块的预热操作。

    本函数主要用于执行一些模块的预热操作,确保在后续的流程中能够顺利运行。

    ⭐ 关键作用:预热模块

    Returns:
        None
    """
    logger.info('正在执行一些模块的预热 ...')
    from toolbox import ProxyNetworkActivate
    with ProxyNetworkActivate("Warmup_Modules"):
        import nltk
        with ProxyNetworkActivate("Warmup_Modules"): nltk.download("punkt")


if __name__ == '__main__':
    import os
    os.environ['no_proxy'] = '*'  # 避免代理网络产生意外污染
    from toolbox import get_conf
    proxies = get_conf('proxies')
    check_proxy(proxies)


================================================
FILE: config.py
================================================
"""
    以下所有配置也都支持利用环境变量覆写,环境变量配置格式见docker-compose.yml。
    读取优先级:环境变量 > config_private.py > config.py
    --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
    All the following configurations also support using environment variables to override,
    and the environment variable configuration format can be seen in docker-compose.yml.
    Configuration reading priority: environment variable > config_private.py > config.py
"""

# [step 1-1]>> ( 接入OpenAI模型家族 ) API_KEY = "sk-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"。极少数情况下,还需要填写组织(格式如org-123456789abcdefghijklmno的),请向下翻,找 API_ORG 设置项
API_KEY = "在此处填写APIKEY"    # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey3,azure-apikey4"

# [step 1-2]>> ( 强烈推荐!接入通义家族 & 大模型服务平台百炼 ) 接入通义千问在线大模型,api-key获取地址 https://dashscope.console.aliyun.com/
DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY(用于接入qwen-max,dashscope-qwen3-14b,dashscope-deepseek-r1等)

# [step 1-3]>> ( 接入 deepseek-reasoner, 即 deepseek-r1 ) 深度求索(DeepSeek) API KEY,默认请求地址为"https://api.deepseek.com/v1/chat/completions"
DEEPSEEK_API_KEY = ""

# [step 2]>> 改为True应用代理。如果使用本地或无地域限制的大模型时,此处不修改;如果直接在海外服务器部署,此处不修改
USE_PROXY = False
if USE_PROXY:
    """
    代理网络的地址,打开你的代理软件查看代理协议(socks5h / http)、地址(localhost)和端口(11284)
    填写格式是 [协议]://  [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改
            <配置教程&视频教程> https://github.com/binary-husky/gpt_academic/issues/1>
    [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http
    [地址] 填localhost或者127.0.0.1(localhost意思是代理软件安装在本机上)
    [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
    """
    proxies = {
        #          [协议]://  [地址]  :[端口]
        "http":  "socks5h://localhost:11284",  # 再例如  "http":  "http://127.0.0.1:7890",
        "https": "socks5h://localhost:11284",  # 再例如  "https": "http://127.0.0.1:7890",
    }
else:
    proxies = None

# [step 3]>> 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓
AVAIL_LLM_MODELS = ["qwen-max", "o1-mini", "o1-mini-2024-09-12", "o1", "o1-2024-12-17", "o1-preview", "o1-preview-2024-09-12",
                    "gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
                    "gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
                    "gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
                    "gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-4v", "glm-3-turbo",
                    "gemini-1.5-pro", "chatglm3", "chatglm4",
                    "deepseek-chat", "deepseek-coder", "deepseek-reasoner",
                    "volcengine-deepseek-r1-250120", "volcengine-deepseek-v3-241226",
                    "dashscope-deepseek-r1", "dashscope-deepseek-v3",
                    "dashscope-qwen3-14b", "dashscope-qwen3-235b-a22b", "dashscope-qwen3-32b",
                    ]

EMBEDDING_MODEL = "text-embedding-3-small"

# --- --- --- ---
# P.S. 其他可用的模型还包括
# AVAIL_LLM_MODELS = [
#   "glm-4-0520", "glm-4-air", "glm-4-airx", "glm-4-flash",
#   "qianfan", "deepseekcoder",
#   "spark", "sparkv2", "sparkv3", "sparkv3.5", "sparkv4",
#   "qwen-turbo", "qwen-plus", "qwen-local",
#   "moonshot-v1-128k", "moonshot-v1-32k", "moonshot-v1-8k",
#   "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0125", "gpt-4o-2024-05-13"
#   "claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229", "claude-2.1", "claude-instant-1.2",
#   "moss", "llama2", "chatglm_onnx", "internlm", "jittorllms_pangualpha", "jittorllms_llama",
#   "deepseek-chat" ,"deepseek-coder",
#   "gemini-1.5-flash",
#   "yi-34b-chat-0205","yi-34b-chat-200k","yi-large","yi-medium","yi-spark","yi-large-turbo","yi-large-preview",
#   "grok-beta",
# ]
# --- --- --- ---
# 此外,您还可以在接入one-api/vllm/ollama/Openroute时,
# 使用"one-api-*","vllm-*","ollama-*","openrouter-*"前缀直接使用非标准方式接入的模型,例如
# AVAIL_LLM_MODELS = ["one-api-claude-3-sonnet-20240229(max_token=100000)", "ollama-phi3(max_token=4096)","openrouter-openai/gpt-4o-mini","openrouter-openai/chatgpt-4o-latest"]
# --- --- --- ---


# --------------- 以下配置可以优化体验 ---------------

# 重新URL重新定向,实现更换API_URL的作用(高危设置! 常规情况下不要修改! 通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人!)
# 格式: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
# 举例: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://reverse-proxy-url/v1/chat/completions", "http://localhost:11434/api/chat": "在这里填写您ollama的URL"}
API_URL_REDIRECT = {}


# 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次
# 一言以蔽之:免费(5刀)用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview
DEFAULT_WORKER_NUM = 8


# 色彩主题, 可选 ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast"]
# 更多主题, 请查阅Gradio主题商店: https://huggingface.co/spaces/gradio/theme-gallery 可选 ["Gstaff/Xkcd", "NoCrypt/Miku", ...]
THEME = "Default"
AVAIL_THEMES = ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast", "Gstaff/Xkcd", "NoCrypt/Miku"]

FONT = "Theme-Default-Font"
AVAIL_FONTS = [
    "默认值(Theme-Default-Font)",
    "宋体(SimSun)",
    "黑体(SimHei)",
    "楷体(KaiTi)",
    "仿宋(FangSong)",
    "华文细黑(STHeiti Light)",
    "华文楷体(STKaiti)",
    "华文仿宋(STFangsong)",
    "华文宋体(STSong)",
    "华文中宋(STZhongsong)",
    "华文新魏(STXinwei)",
    "华文隶书(STLiti)",
    # 备注:以下字体需要网络支持,您可以自定义任意您喜欢的字体,如下所示,需要满足的格式为 "字体昵称(字体英文真名@字体css下载链接)"
    "思源宋体(Source Han Serif CN VF@https://chinese-fonts-cdn.deno.dev/packages/syst/dist/SourceHanSerifCN/result.css)",
    "月星楷(Moon Stars Kai HW@https://chinese-fonts-cdn.deno.dev/packages/moon-stars-kai/dist/MoonStarsKaiHW-Regular/result.css)",
    "珠圆体(MaokenZhuyuanTi@https://chinese-fonts-cdn.deno.dev/packages/mkzyt/dist/猫啃珠圆体/result.css)",
    "平方萌萌哒(PING FANG MENG MNEG DA@https://chinese-fonts-cdn.deno.dev/packages/pfmmd/dist/平方萌萌哒/result.css)",
    "Helvetica",
    "ui-sans-serif",
    "sans-serif",
    "system-ui"
]


# 默认的系统提示词(system prompt)
INIT_SYS_PROMPT = "Serve me as a writing and programming assistant."


# 对话窗的高度 (仅在LAYOUT="TOP-DOWN"时生效)
CHATBOT_HEIGHT = 1115


# 代码高亮
CODE_HIGHLIGHT = True


# 窗口布局
LAYOUT = "LEFT-RIGHT"   # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局)


# 暗色模式 / 亮色模式
DARK_MODE = True


# 发送请求到OpenAI后,等待多久判定为超时
TIMEOUT_SECONDS = 30


# 网页的端口, -1代表随机端口
WEB_PORT = -1


# 是否自动打开浏览器页面
AUTO_OPEN_BROWSER = True


# 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制
MAX_RETRY = 2


# 插件分类默认选项
DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']


# 定义界面上“询问多个GPT模型”插件应该使用哪些模型,请从AVAIL_LLM_MODELS中选择,并在不同模型之间用`&`间隔,例如"gpt-3.5-turbo&chatglm3&azure-gpt-4"
MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"


# 选择本地模型变体(只有当AVAIL_LLM_MODELS包含了对应本地模型时,才会起作用)
# 如果你选择Qwen系列的模型,那么请在下面的QWEN_MODEL_SELECTION中指定具体的模型
# 也可以是具体的模型路径
QWEN_LOCAL_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"


# 百度千帆(LLM_MODEL="qianfan")
BAIDU_CLOUD_API_KEY = ''
BAIDU_CLOUD_SECRET_KEY = ''
BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot'    # 可选 "ERNIE-Bot-4"(文心大模型4.0), "ERNIE-Bot"(文心一言), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat", "ERNIE-Speed-128K", "ERNIE-Speed-8K", "ERNIE-Lite-8K"


# 如果使用ChatGLM3或ChatGLM4本地模型,请把 LLM_MODEL="chatglm3" 或LLM_MODEL="chatglm4",并在此处指定模型路径
CHATGLM_LOCAL_MODEL_PATH = "THUDM/glm-4-9b-chat" # 例如"/home/hmp/ChatGLM3-6B/"

# 如果使用ChatGLM2微调模型,请把 LLM_MODEL="chatglmft",并在此处指定模型路径
CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"


# 本地LLM模型如ChatGLM的执行方式 CPU/GPU
LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
LOCAL_MODEL_QUANT = "FP16" # 默认 "FP16" "INT4" 启用量化INT4版本 "INT8" 启用量化INT8版本


# 设置gradio的并行线程数(不需要修改)
CONCURRENT_COUNT = 100


# 是否在提交时自动清空输入框
AUTO_CLEAR_TXT = False


# 加一个live2d装饰
ADD_WAIFU = False


# 设置用户名和密码(不需要修改)(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个)
# [("username", "password"), ("username2", "password2"), ...]
AUTHENTICATION = []


# 如果需要在二级路径下运行(常规情况下,不要修改!!)
# (举例 CUSTOM_PATH = "/gpt_academic",可以让软件运行在 http://ip:port/gpt_academic/ 下。)
CUSTOM_PATH = "/"


# HTTPS 秘钥和证书(不需要修改)
SSL_KEYFILE = ""
SSL_CERTFILE = ""


# 极少数情况下,openai的官方KEY需要伴随组织编码(格式如org-xxxxxxxxxxxxxxxxxxxxxxxx)使用
API_ORG = ""


# 如果需要使用Slack Claude,使用教程详情见 request_llms/README.md
SLACK_CLAUDE_BOT_ID = ''
SLACK_CLAUDE_USER_TOKEN = ''


# 如果需要使用AZURE(方法一:单个azure模型部署)详情请见额外文档 docs\use_azure.md
AZURE_ENDPOINT = "https://你亲手写的api名称.openai.azure.com/"
AZURE_API_KEY = "填入azure openai api的密钥"    # 建议直接在API_KEY处填写,该选项即将被弃用
AZURE_ENGINE = "填入你亲手写的部署名"            # 读 docs\use_azure.md


# 如果需要使用AZURE(方法二:多个azure模型部署+动态切换)详情请见额外文档 docs\use_azure.md
AZURE_CFG_ARRAY = {}


# 阿里云实时语音识别 配置难度较高
# 参考 https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
ENABLE_AUDIO = False
ALIYUN_TOKEN=""     # 例如 f37f30e0f9934c34a992f6f64f7eba4f
ALIYUN_APPKEY=""    # 例如 RoPlZrM88DnAFkZK
ALIYUN_ACCESSKEY="" # (无需填写)
ALIYUN_SECRET=""    # (无需填写)


# GPT-SOVITS 文本转语音服务的运行地址(将语言模型的生成文本朗读出来)
TTS_TYPE = "EDGE_TTS" # EDGE_TTS / LOCAL_SOVITS_API / DISABLE
GPT_SOVITS_URL = ""
EDGE_TTS_VOICE = "zh-CN-XiaoxiaoNeural"


# 接入讯飞星火大模型 https://console.xfyun.cn/services/iat
XFYUN_APPID = "00000000"
XFYUN_API_SECRET = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"


# 接入智谱大模型
ZHIPUAI_API_KEY = ""
ZHIPUAI_MODEL = "" # 此选项已废弃,不再需要填写


# Claude API KEY
ANTHROPIC_API_KEY = ""


# 月之暗面 API KEY
MOONSHOT_API_KEY = ""


# 零一万物(Yi Model) API KEY
YIMODEL_API_KEY = ""


# 接入火山引擎的在线大模型),api-key获取地址 https://console.volcengine.com/ark/region:ark+cn-beijing/endpoint
ARK_API_KEY = "00000000-0000-0000-0000-000000000000" # 火山引擎 API KEY


# 紫东太初大模型 https://ai-maas.wair.ac.cn
TAICHU_API_KEY = ""

# Grok API KEY
GROK_API_KEY = ""

# Mathpix 拥有执行PDF的OCR功能,但是需要注册账号
MATHPIX_APPID = ""
MATHPIX_APPKEY = ""


# DOC2X的PDF解析服务,注册账号并获取API KEY: https://doc2x.noedgeai.com/login
DOC2X_API_KEY = ""


# 自定义API KEY格式
CUSTOM_API_KEY_PATTERN = ""


# Google Gemini API-Key
GEMINI_API_KEY = ''


# HUGGINGFACE的TOKEN,下载LLAMA时起作用 https://huggingface.co/docs/hub/security-tokens
HUGGINGFACE_ACCESS_TOKEN = "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"


# GROBID服务器地址(填写多个可以均衡负载),用于高质量地读取PDF文档
# 获取方法:复制以下空间https://huggingface.co/spaces/qingxu98/grobid,设为public,然后GROBID_URL = "https://(你的hf用户名如qingxu98)-(你的填写的空间名如grobid).hf.space"
GROBID_URLS = [
    "https://qingxu98-grobid.hf.space","https://qingxu98-grobid2.hf.space","https://qingxu98-grobid3.hf.space",
    "https://qingxu98-grobid4.hf.space","https://qingxu98-grobid5.hf.space", "https://qingxu98-grobid6.hf.space",
    "https://qingxu98-grobid7.hf.space", "https://qingxu98-grobid8.hf.space",
]


# Searxng互联网检索服务(这是一个huggingface空间,请前往huggingface复制该空间,然后把自己新的空间地址填在这里)
SEARXNG_URLS = [ f"https://kaletianlre-beardvs{i}dd.hf.space/" for i in range(1,5) ]


# 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性,默认关闭
ALLOW_RESET_CONFIG = False


# 在使用AutoGen插件时,是否使用Docker容器运行代码
AUTOGEN_USE_DOCKER = False


# 临时的上传文件夹位置,请尽量不要修改
PATH_PRIVATE_UPLOAD = "private_upload"


# 日志文件夹的位置,请尽量不要修改
PATH_LOGGING = "gpt_log"


# 存储翻译好的arxiv论文的路径,请尽量不要修改
ARXIV_CACHE_DIR = "gpt_log/arxiv_cache"


# 除了连接OpenAI之外,还有哪些场合允许使用代理,请尽量不要修改
WHEN_TO_USE_PROXY = ["Connect_OpenAI", "Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
                     "Warmup_Modules", "Nougat_Download", "AutoGen", "Connect_OpenAI_Embedding"]


# 启用插件热加载
PLUGIN_HOT_RELOAD = False


# 自定义按钮的最大数量限制
NUM_CUSTOM_BASIC_BTN = 4


# 媒体智能体的服务地址(这是一个huggingface空间,请前往huggingface复制该空间,然后把自己新的空间地址填在这里)
DAAS_SERVER_URLS = [ f"https://niuziniu-biligpt{i}.hf.space/stream" for i in range(1,5) ]


# 在互联网搜索组件中,负责将搜索结果整理成干净的Markdown
JINA_API_KEY = ""


# SEMANTIC SCHOLAR API KEY
SEMANTIC_SCHOLAR_KEY = ""


# 是否自动裁剪上下文长度(是否启动,默认不启动)
AUTO_CONTEXT_CLIP_ENABLE = False
# 目标裁剪上下文的token长度(如果超过这个长度,则会自动裁剪)
AUTO_CONTEXT_CLIP_TRIGGER_TOKEN_LEN = 30*1000
# 无条件丢弃x以上的轮数
AUTO_CONTEXT_MAX_ROUND = 64
# 在裁剪上下文时,倒数第x次对话能“最多”保留的上下文token的比例占 AUTO_CONTEXT_CLIP_TRIGGER_TOKEN_LEN 的多少
AUTO_CONTEXT_MAX_CLIP_RATIO = [0.80, 0.60, 0.45, 0.25, 0.20, 0.18, 0.16, 0.14, 0.12, 0.10, 0.08, 0.07, 0.06, 0.05, 0.04, 0.03, 0.02, 0.01]



# DO NOT USE, UNDER DEVELOPMENT
REROUTE_ALL_TO_ONE_API = False
ONE_API_URL = ""
ONE_API_KEY = "$API_KEY"


"""
--------------- 配置关联关系说明 ---------------

在线大模型配置关联关系示意图
│
├── "gpt-3.5-turbo" 等openai模型
│   ├── API_KEY
│   ├── CUSTOM_API_KEY_PATTERN(不常用)
│   ├── API_ORG(不常用)
│   └── API_URL_REDIRECT(不常用)
│
├── "azure-gpt-3.5" 等azure模型(单个azure模型,不需要动态切换)
│   ├── API_KEY
│   ├── AZURE_ENDPOINT
│   ├── AZURE_API_KEY
│   ├── AZURE_ENGINE
│   └── API_URL_REDIRECT
│
├── "azure-gpt-3.5" 等azure模型(多个azure模型,需要动态切换,高优先级)
│   └── AZURE_CFG_ARRAY
│
├── "spark" 星火认知大模型 spark & sparkv2
│   ├── XFYUN_APPID
│   ├── XFYUN_API_SECRET
│   └── XFYUN_API_KEY
│
├── "claude-3-opus-20240229" 等claude模型
│   └── ANTHROPIC_API_KEY
│
├── "stack-claude"
│   ├── SLACK_CLAUDE_BOT_ID
│   └── SLACK_CLAUDE_USER_TOKEN
│
├── "qianfan" 百度千帆大模型库
│   ├── BAIDU_CLOUD_QIANFAN_MODEL
│   ├── BAIDU_CLOUD_API_KEY
│   └── BAIDU_CLOUD_SECRET_KEY
│
├── "glm-4", "glm-3-turbo", "zhipuai" 智谱AI大模型
│   └── ZHIPUAI_API_KEY
│
├── "yi-34b-chat-0205", "yi-34b-chat-200k" 等零一万物(Yi Model)大模型
│   └── YIMODEL_API_KEY
│
├── "qwen-turbo" 等通义千问大模型
│   └──  DASHSCOPE_API_KEY
│
├── "Gemini"
│   └──  GEMINI_API_KEY
│
└── "one-api-...(max_token=...)" 用一种更方便的方式接入one-api多模型管理界面
    ├── AVAIL_LLM_MODELS
    ├── API_KEY
    └── API_URL_REDIRECT


本地大模型示意图
│
├── "chatglm4"
├── "chatglm3"
├── "chatglm"
├── "chatglm_onnx"
├── "chatglmft"
├── "internlm"
├── "moss"
├── "jittorllms_pangualpha"
├── "jittorllms_llama"
├── "deepseekcoder"
├── "qwen-local"
├──  RWKV的支持见Wiki
└── "llama2"


用户图形界面布局依赖关系示意图
│
├── CHATBOT_HEIGHT 对话窗的高度
├── CODE_HIGHLIGHT 代码高亮
├── LAYOUT 窗口布局
├── DARK_MODE 暗色模式 / 亮色模式
├── DEFAULT_FN_GROUPS 插件分类默认选项
├── THEME 色彩主题
├── AUTO_CLEAR_TXT 是否在提交时自动清空输入框
├── ADD_WAIFU 加一个live2d装饰
└── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性


插件在线服务配置依赖关系示意图
│
├── 互联网检索
│   └── SEARXNG_URLS
│
├── 语音功能
│   ├── ENABLE_AUDIO
│   ├── ALIYUN_TOKEN
│   ├── ALIYUN_APPKEY
│   ├── ALIYUN_ACCESSKEY
│   └── ALIYUN_SECRET
│
└── PDF文档精准解析
    ├── GROBID_URLS
    ├── MATHPIX_APPID
    └── MATHPIX_APPKEY


"""


================================================
FILE: core_functional.py
================================================
# 'primary' 颜色对应 theme.py 中的 primary_hue
# 'secondary' 颜色对应 theme.py 中的 neutral_hue
# 'stop' 颜色对应 theme.py 中的 color_er
import importlib
from toolbox import clear_line_break
from toolbox import apply_gpt_academic_string_mask_langbased
from toolbox import build_gpt_academic_masked_string_langbased
from textwrap import dedent

def get_core_functions():
    return {

        "学术语料润色": {
            # [1*] 前缀字符串,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等。
            #      这里填一个提示词字符串就行了,这里为了区分中英文情景搞复杂了一点
            "Prefix":   build_gpt_academic_masked_string_langbased(
                            text_show_english=
                                r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, "
                                r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. "
                                r"Firstly, you should provide the polished paragraph (in English). "
                                r"Secondly, you should list all your modification and explain the reasons to do so in markdown table.",
                            text_show_chinese=
                                r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性,"
                                r"同时分解长句,减少重复,并提供改进建议。请先提供文本的更正版本,然后在markdown表格中列出修改的内容,并给出修改的理由:"
                        ) + "\n\n",
            # [2*] 后缀字符串,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
            "Suffix":   r"",
            # [3] 按钮颜色 (可选参数,默认 secondary)
            "Color":    r"secondary",
            # [4] 按钮是否可见 (可选参数,默认 True,即可见)
            "Visible": True,
            # [5] 是否在触发时清除历史 (可选参数,默认 False,即不处理之前的对话历史)
            "AutoClearHistory": False,
            # [6] 文本预处理 (可选参数,默认 None,举例:写个函数移除所有的换行符)
            "PreProcess": None,
            # [7] 模型选择 (可选参数。如不设置,则使用当前全局模型;如设置,则用指定模型覆盖全局模型。)
            # "ModelOverride": "gpt-3.5-turbo", # 主要用途:强制点击此基础功能按钮时,使用指定的模型。
        },


        "总结绘制脑图": {
            # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
            "Prefix":   '''"""\n\n''',
            # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
            "Suffix":
                # dedent() 函数用于去除多行字符串的缩进
                dedent("\n\n"+r'''
                    """

                    使用mermaid flowchart对以上文本进行总结,概括上述段落的内容以及内在逻辑关系,例如:

                    以下是对以上文本的总结,以mermaid flowchart的形式展示:
                    ```mermaid
                    flowchart LR
                        A["节点名1"] --> B("节点名2")
                        B --> C{"节点名3"}
                        C --> D["节点名4"]
                        C --> |"箭头名1"| E["节点名5"]
                        C --> |"箭头名2"| F["节点名6"]
                    ```

                    注意:
                    (1)使用中文
                    (2)节点名字使用引号包裹,如["Laptop"]
                    (3)`|` 和 `"`之间不要存在空格
                    (4)根据情况选择flowchart LR(从左到右)或者flowchart TD(从上到下)
                '''),
        },


        "查找语法错误": {
            "Prefix":   r"Help me ensure that the grammar and the spelling is correct. "
                        r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good. "
                        r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, "
                        r"put the original text the first column, "
                        r"put the corrected text in the second column and highlight the key words you fixed. "
                        r"Finally, please provide the proofreaded text.""\n\n"
                        r"Example:""\n"
                        r"Paragraph: How is you? Do you knows what is it?""\n"
                        r"| Original sentence | Corrected sentence |""\n"
                        r"| :--- | :--- |""\n"
                        r"| How **is** you? | How **are** you? |""\n"
                        r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n\n"
                        r"Below is a paragraph from an academic paper. "
                        r"You need to report all grammar and spelling mistakes as the example before."
                        + "\n\n",
            "Suffix":   r"",
            "PreProcess": clear_line_break,    # 预处理:清除换行符
        },


        "中译英": {
            "Prefix":   r"Please translate following sentence to English:" + "\n\n",
            "Suffix":   r"",
        },


        "学术英中互译": {
            "Prefix":   build_gpt_academic_masked_string_langbased(
                            text_show_chinese=
                                r"I want you to act as a scientific English-Chinese translator, "
                                r"I will provide you with some paragraphs in one language "
                                r"and your task is to accurately and academically translate the paragraphs only into the other language. "
                                r"Do not repeat the original provided paragraphs after translation. "
                                r"You should use artificial intelligence tools, "
                                r"such as natural language processing, and rhetorical knowledge "
                                r"and experience about effective writing techniques to reply. "
                                r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:",
                            text_show_english=
                                r"你是经验丰富的翻译,请把以下学术文章段落翻译成中文,"
                                r"并同时充分考虑中文的语法、清晰、简洁和整体可读性,"
                                r"必要时,你可以修改整个句子的顺序以确保翻译后的段落符合中文的语言习惯。"
                                r"你需要翻译的文本如下:"
                        ) + "\n\n",
            "Suffix":   r"",
        },


        "英译中": {
            "Prefix":   r"翻译成地道的中文:" + "\n\n",
            "Suffix":   r"",
            "Visible":  False,
        },


        "找图片": {
            "Prefix":   r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL,"
                        r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n",
            "Suffix":   r"",
            "Visible":  False,
        },


        "解释代码": {
            "Prefix":   r"请解释以下代码:" + "\n```\n",
            "Suffix":   "\n```\n",
        },


        "参考文献转Bib": {
            "Prefix":   r"Here are some bibliography items, please transform them into bibtex style."
                        r"Note that, reference styles maybe more than one kind, you should transform each item correctly."
                        r"Items need to be transformed:" + "\n\n",
            "Visible":  False,
            "Suffix":   r"",
        }
    }


def handle_core_functionality(additional_fn, inputs, history, chatbot):
    import core_functional
    importlib.reload(core_functional)    # 热更新prompt
    core_functional = core_functional.get_core_functions()
    addition = chatbot._cookies['customize_fn_overwrite']
    if additional_fn in addition:
        # 自定义功能
        inputs = addition[additional_fn]["Prefix"] + inputs + addition[additional_fn]["Suffix"]
        return inputs, history
    else:
        # 预制功能
        if "PreProcess" in core_functional[additional_fn]:
            if core_functional[additional_fn]["PreProcess"] is not None:
                inputs = core_functional[additional_fn]["PreProcess"](inputs)  # 获取预处理函数(如果有的话)
        # 为字符串加上上面定义的前缀和后缀。
        inputs = apply_gpt_academic_string_mask_langbased(
            string = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"],
            lang_reference = inputs,
        )
        if core_functional[additional_fn].get("AutoClearHistory", False):
            history = []
        return inputs, history

if __name__ == "__main__":
    t = get_core_functions()["总结绘制脑图"]
    print(t["Prefix"] + t["Suffix"])


================================================
FILE: crazy_functional.py
================================================
from toolbox import HotReload  # HotReload 的意思是热更新,修改函数插件后,不需要重启程序,代码直接生效
from toolbox import trimmed_format_exc
from loguru import logger

def get_crazy_functions():
    from crazy_functions.Paper_Abstract_Writer import Paper_Abstract_Writer
    from crazy_functions.Program_Comment_Gen import 批量Program_Comment_Gen
    from crazy_functions.SourceCode_Analyse import 解析项目本身
    from crazy_functions.SourceCode_Analyse import 解析一个Python项目
    from crazy_functions.SourceCode_Analyse import 解析一个Matlab项目
    from crazy_functions.SourceCode_Analyse import 解析一个C项目的头文件
    from crazy_functions.SourceCode_Analyse import 解析一个C项目
    from crazy_functions.SourceCode_Analyse import 解析一个Golang项目
    from crazy_functions.SourceCode_Analyse import 解析一个Rust项目
    from crazy_functions.SourceCode_Analyse import 解析一个Java项目
    from crazy_functions.SourceCode_Analyse import 解析一个前端项目
    from crazy_functions.高级功能函数模板 import 高阶功能模板函数
    from crazy_functions.高级功能函数模板 import Demo_Wrap
    from crazy_functions.Latex_Project_Polish import Latex英文润色
    from crazy_functions.Multi_LLM_Query import 同时问询
    from crazy_functions.SourceCode_Analyse import 解析一个Lua项目
    from crazy_functions.SourceCode_Analyse import 解析一个CSharp项目
    from crazy_functions.Word_Summary import Word_Summary
    from crazy_functions.SourceCode_Analyse_JupyterNotebook import 解析ipynb文件
    from crazy_functions.Conversation_To_File import 载入对话历史存档
    from crazy_functions.Conversation_To_File import 对话历史存档
    from crazy_functions.Conversation_To_File import Conversation_To_File_Wrap
    from crazy_functions.Conversation_To_File import 删除所有本地对话历史记录
    from crazy_functions.Helpers import 清除缓存
    from crazy_functions.Markdown_Translate import Markdown英译中
    from crazy_functions.PDF_Summary import PDF_Summary
    from crazy_functions.PDF_Translate import 批量翻译PDF文档
    from crazy_functions.Google_Scholar_Assistant_Legacy import Google_Scholar_Assistant_Legacy
    from crazy_functions.PDF_QA import PDF_QA标准文件输入
    from crazy_functions.Latex_Project_Polish import Latex中文润色
    from crazy_functions.Latex_Project_Polish import Latex英文纠错
    from crazy_functions.Markdown_Translate import Markdown中译英
    from crazy_functions.Void_Terminal import Void_Terminal
    from crazy_functions.Mermaid_Figure_Gen import Mermaid_Gen
    from crazy_functions.PDF_Translate_Wrap import PDF_Tran
    from crazy_functions.Latex_Function import Latex英文纠错加PDF对比
    from crazy_functions.Latex_Function import Latex翻译中文并重新编译PDF
    from crazy_functions.Latex_Function import PDF翻译中文并重新编译PDF
    from crazy_functions.Latex_Function_Wrap import Arxiv_Localize
    from crazy_functions.Latex_Function_Wrap import PDF_Localize
    from crazy_functions.Internet_GPT import 连接网络回答问题
    from crazy_functions.Internet_GPT_Wrap import NetworkGPT_Wrap
    from crazy_functions.Image_Generate import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE2
    from crazy_functions.Image_Generate_Wrap import ImageGen_Wrap
    from crazy_functions.SourceCode_Comment import 注释Python项目
    from crazy_functions.SourceCode_Comment_Wrap import SourceCodeComment_Wrap
    from crazy_functions.VideoResource_GPT import 多媒体任务
    from crazy_functions.Document_Conversation import 批量文件询问
    from crazy_functions.Document_Conversation_Wrap import Document_Conversation_Wrap


    function_plugins = {
        "多媒体智能体": {
            "Group": "智能体",
            "Color": "stop",
            "AsButton": False,
            "Info": "【仅测试】多媒体任务",
            "Function": HotReload(多媒体任务),
        },
        "虚空终端": {
            "Group": "对话|编程|学术|智能体",
            "Color": "stop",
            "AsButton": True,
            "Info": "使用自然语言实现您的想法",
            "Function": HotReload(Void_Terminal),
        },
        "解析整个Python项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": True,
            "Info": "解析一个Python项目的所有源文件(.py) | 输入参数为路径",
            "Function": HotReload(解析一个Python项目),
        },
        "注释Python项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,
            "Info": "上传一系列python源文件(或者压缩包), 为这些代码添加docstring | 输入参数为路径",
            "Function": HotReload(注释Python项目),
            "Class": SourceCodeComment_Wrap,
        },
        "载入对话历史存档(先上传存档或输入路径)": {
            "Group": "对话",
            "Color": "stop",
            "AsButton": False,
            "Info": "载入对话历史存档 | 输入参数为路径",
            "Function": HotReload(载入对话历史存档),
        },
        "删除所有本地对话历史记录(谨慎操作)": {
            "Group": "对话",
            "AsButton": False,
            "Info": "删除所有本地对话历史记录,谨慎操作 | 不需要输入参数",
            "Function": HotReload(删除所有本地对话历史记录),
        },
        "清除所有缓存文件(谨慎操作)": {
            "Group": "对话",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数",
            "Function": HotReload(清除缓存),
        },
        "生成多种Mermaid图表(从当前对话或路径(.pdf/.md/.docx)中生产图表)": {
            "Group": "对话",
            "Color": "stop",
            "AsButton": False,
            "Info" : "基于当前对话或文件生成多种Mermaid图表,图表类型由模型判断",
            "Function": None,
            "Class": Mermaid_Gen
        },
        "Arxiv论文翻译": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": True,
            "Info": "ArXiv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
            "Function": HotReload(Latex翻译中文并重新编译PDF),  # 当注册Class后,Function旧接口仅会在“虚空终端”中起作用
            "Class": Arxiv_Localize,    # 新一代插件需要注册Class
        },
        "批量总结Word文档": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,
            "Info": "批量总结word文档 | 输入参数为路径",
            "Function": HotReload(Word_Summary),
        },
        "解析整个Matlab项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,
            "Info": "解析一个Matlab项目的所有源文件(.m) | 输入参数为路径",
            "Function": HotReload(解析一个Matlab项目),
        },
        "解析整个C++项目头文件": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个C++项目的所有头文件(.h/.hpp) | 输入参数为路径",
            "Function": HotReload(解析一个C项目的头文件),
        },
        "解析整个C++项目(.cpp/.hpp/.c/.h)": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个C++项目的所有源文件(.cpp/.hpp/.c/.h)| 输入参数为路径",
            "Function": HotReload(解析一个C项目),
        },
        "解析整个Go项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个Go项目的所有源文件 | 输入参数为路径",
            "Function": HotReload(解析一个Golang项目),
        },
        "解析整个Rust项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个Rust项目的所有源文件 | 输入参数为路径",
            "Function": HotReload(解析一个Rust项目),
        },
        "解析整个Java项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个Java项目的所有源文件 | 输入参数为路径",
            "Function": HotReload(解析一个Java项目),
        },
        "解析整个前端项目(js,ts,css等)": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个前端项目的所有源文件(js,ts,css等) | 输入参数为路径",
            "Function": HotReload(解析一个前端项目),
        },
        "解析整个Lua项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个Lua项目的所有源文件 | 输入参数为路径",
            "Function": HotReload(解析一个Lua项目),
        },
        "解析整个CSharp项目": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "解析一个CSharp项目的所有源文件 | 输入参数为路径",
            "Function": HotReload(解析一个CSharp项目),
        },
        "解析Jupyter Notebook文件": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,
            "Info": "解析Jupyter Notebook文件 | 输入参数为路径",
            "Function": HotReload(解析ipynb文件),
            "AdvancedArgs": True,  # 调用时,唤起高级参数输入区(默认False)
            "ArgsReminder": "若输入0,则不解析notebook中的Markdown块",  # 高级参数输入区的显示提示
        },
        "读Tex论文写摘要": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,
            "Info": "读取Tex论文并写摘要 | 输入参数为路径",
            "Function": HotReload(Paper_Abstract_Writer),
        },
        "翻译README或MD": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": True,
            "Info": "将Markdown翻译为中文 | 输入参数为路径或URL",
            "Function": HotReload(Markdown英译中),
        },
        "翻译Markdown或README(支持Github链接)": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,
            "Info": "将Markdown或README翻译为中文 | 输入参数为路径或URL",
            "Function": HotReload(Markdown英译中),
        },
        "批量生成函数注释": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "批量生成函数的注释 | 输入参数为路径",
            "Function": HotReload(批量Program_Comment_Gen),
        },
        "保存当前的对话": {
            "Group": "对话",
            "Color": "stop",
            "AsButton": True,
            "Info": "保存当前的对话 | 不需要输入参数",
            "Function": HotReload(对话历史存档),    # 当注册Class后,Function旧接口仅会在“Void_Terminal”中起作用
            "Class": Conversation_To_File_Wrap     # 新一代插件需要注册Class
        },
        "[多线程Demo]解析此项目本身(源码自译解)": {
            "Group": "对话|编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "多线程解析并翻译此项目的源码 | 不需要输入参数",
            "Function": HotReload(解析项目本身),
        },
        "查互联网后回答": {
            "Group": "对话",
            "Color": "stop",
            "AsButton": True,  # 加入下拉菜单中
            # "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
            "Function": HotReload(连接网络回答问题),
            "Class": NetworkGPT_Wrap     # 新一代插件需要注册Class
        },
        "历史上的今天": {
            "Group": "对话",
            "Color": "stop",
            "AsButton": False,
            "Info": "查看历史上的今天事件 (这是一个面向开发者的插件Demo) | 不需要输入参数",
            "Function": None,
            "Class": Demo_Wrap, # 新一代插件需要注册Class
        },
        "PDF论文翻译": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": True,
            "Info": "精准翻译PDF论文为中文 | 输入参数为路径",
            "Function": HotReload(批量翻译PDF文档), # 当注册Class后,Function旧接口仅会在“Void_Terminal”中起作用
            "Class": PDF_Tran,  # 新一代插件需要注册Class
        },
        "询问多个GPT模型": {
            "Group": "对话",
            "Color": "stop",
            "AsButton": True,
            "Function": HotReload(同时问询),
        },
        "批量总结PDF文档": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "批量总结PDF文档的内容 | 输入参数为路径",
            "Function": HotReload(PDF_Summary),
        },
        "谷歌学术检索助手(输入谷歌学术搜索页url)": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "使用谷歌学术检索助手搜索指定URL的结果 | 输入参数为谷歌学术搜索页的URL",
            "Function": HotReload(Google_Scholar_Assistant_Legacy),
        },
        "理解PDF文档内容 (模仿ChatPDF)": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "理解PDF文档的内容并进行回答 | 输入参数为路径",
            "Function": HotReload(PDF_QA标准文件输入),
        },
        "英文Latex项目全文润色(输入路径或上传压缩包)": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
            "Function": HotReload(Latex英文润色),
        },

        "中文Latex项目全文润色(输入路径或上传压缩包)": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "对中文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
            "Function": HotReload(Latex中文润色),
        },
        # 已经被新插件取代
        # "英文Latex项目全文纠错(输入路径或上传压缩包)": {
        #     "Group": "学术",
        #     "Color": "stop",
        #     "AsButton": False,  # 加入下拉菜单中
        #     "Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
        #     "Function": HotReload(Latex英文纠错),
        # },
        # 已经被新插件取代
        # "Latex项目全文中译英(输入路径或上传压缩包)": {
        #     "Group": "学术",
        #     "Color": "stop",
        #     "AsButton": False,  # 加入下拉菜单中
        #     "Info": "对Latex项目全文进行中译英处理 | 输入参数为路径或上传压缩包",
        #     "Function": HotReload(Latex中译英)
        # },
        # 已经被新插件取代
        # "Latex项目全文英译中(输入路径或上传压缩包)": {
        #     "Group": "学术",
        #     "Color": "stop",
        #     "AsButton": False,  # 加入下拉菜单中
        #     "Info": "对Latex项目全文进行英译中处理 | 输入参数为路径或上传压缩包",
        #     "Function": HotReload(Latex英译中)
        # },
        "批量Markdown中译英(输入路径或上传压缩包)": {
            "Group": "编程",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "批量将Markdown文件中文翻译为英文 | 输入参数为路径或上传压缩包",
            "Function": HotReload(Markdown中译英),
        },
        "Latex英文纠错+高亮修正位置 [需Latex]": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,
            "AdvancedArgs": True,
            "ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
            "Function": HotReload(Latex英文纠错加PDF对比),
        },
        "📚Arxiv论文精细翻译(输入arxivID)[需Latex]": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,
            "AdvancedArgs": True,
            "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
                            r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
                            r'If the term "agent" is used in this section, it should be translated to "智能体". ',
            "Info": "ArXiv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
            "Function": HotReload(Latex翻译中文并重新编译PDF),  # 当注册Class后,Function旧接口仅会在“Void_Terminal”中起作用
            "Class": Arxiv_Localize,    # 新一代插件需要注册Class
        },
        "📚本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,
            "AdvancedArgs": True,
            "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
                            r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
                            r'If the term "agent" is used in this section, it should be translated to "智能体". ',
            "Info": "本地Latex论文精细翻译 | 输入参数是路径",
            "Function": HotReload(Latex翻译中文并重新编译PDF),
        },
        "PDF翻译中文并重新编译PDF(上传PDF)[需Latex]": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,
            "AdvancedArgs": True,
            "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
                            r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
                            r'If the term "agent" is used in this section, it should be translated to "智能体". ',
            "Info": "PDF翻译中文,并重新编译PDF | 输入参数为路径",
            "Function": HotReload(PDF翻译中文并重新编译PDF),   # 当注册Class后,Function旧接口仅会在“Void_Terminal”中起作用
            "Class": PDF_Localize   # 新一代插件需要注册Class
        },
        "批量文件询问 (支持自定义总结各种文件)": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,
            "AdvancedArgs": False,
            "Info": "先上传文件,点击此按钮,进行提问",
            "Function": HotReload(批量文件询问),
            "Class": Document_Conversation_Wrap,
        },
    }

    function_plugins.update(
        {
            "🎨图片生成(DALLE2/DALLE3, 使用前切换到GPT系列模型)": {
                "Group": "对话",
                "Color": "stop",
                "AsButton": False,
                "Info": "使用 DALLE2/DALLE3 生成图片 | 输入参数字符串,提供图像的内容",
                "Function": HotReload(图片生成_DALLE2),   # 当注册Class后,Function旧接口仅会在“Void_Terminal”中起作用
                "Class": ImageGen_Wrap  # 新一代插件需要注册Class
            },
        }
    )

    function_plugins.update(
        {
            "🎨图片修改_DALLE2 (使用前请切换模型到GPT系列)": {
                "Group": "对话",
                "Color": "stop",
                "AsButton": False,
                "AdvancedArgs": False,  # 调用时,唤起高级参数输入区(默认False)
                # "Info": "使用DALLE2修改图片 | 输入参数字符串,提供图像的内容",
                "Function": HotReload(图片修改_DALLE2),
            },
        }
    )








    try:
        from crazy_functions.Arxiv_Downloader import 下载arxiv论文并翻译摘要

        function_plugins.update(
            {
                "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": {
                    "Group": "学术",
                    "Color": "stop",
                    "AsButton": False,  # 加入下拉菜单中
                    # "Info": "下载arxiv论文并翻译摘要 | 输入参数为arxiv编号如1812.10695",
                    "Function": HotReload(下载arxiv论文并翻译摘要),
                }
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")


    try:
        from crazy_functions.SourceCode_Analyse import 解析任意code项目

        function_plugins.update(
            {
                "解析项目源代码(手动指定和筛选源代码文件类型)": {
                    "Group": "编程",
                    "Color": "stop",
                    "AsButton": False,
                    "AdvancedArgs": True,  # 调用时,唤起高级参数输入区(默认False)
                    "ArgsReminder": '输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: "*.c, ^*.cpp, config.toml, ^*.toml"',  # 高级参数输入区的显示提示
                    "Function": HotReload(解析任意code项目),
                },
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")

    try:
        from crazy_functions.Multi_LLM_Query import 同时问询_指定模型

        function_plugins.update(
            {
                "询问多个GPT模型(手动指定询问哪些模型)": {
                    "Group": "对话",
                    "Color": "stop",
                    "AsButton": False,
                    "AdvancedArgs": True,  # 调用时,唤起高级参数输入区(默认False)
                    "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&gpt-4",  # 高级参数输入区的显示提示
                    "Function": HotReload(同时问询_指定模型),
                },
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")



    try:
        from crazy_functions.Audio_Summary import Audio_Summary

        function_plugins.update(
            {
                "批量总结音视频(输入路径或上传压缩包)": {
                    "Group": "对话",
                    "Color": "stop",
                    "AsButton": False,
                    "AdvancedArgs": True,
                    "ArgsReminder": "调用openai api 使用whisper-1模型, 目前支持的格式:mp4, m4a, wav, mpga, mpeg, mp3。此处可以输入解析提示,例如:解析为简体中文(默认)。",
                    "Info": "批量总结音频或视频 | 输入参数为路径",
                    "Function": HotReload(Audio_Summary),
                }
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")

    try:
        from crazy_functions.Math_Animation_Gen import 动画生成

        function_plugins.update(
            {
                "数学动画生成(Manim)": {
                    "Group": "对话",
                    "Color": "stop",
                    "AsButton": False,
                    "Info": "按照自然语言描述生成一个动画 | 输入参数是一段话",
                    "Function": HotReload(动画生成),
                }
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")

    try:
        from crazy_functions.Markdown_Translate import Markdown翻译指定语言

        function_plugins.update(
            {
                "Markdown翻译(指定翻译成何种语言)": {
                    "Group": "编程",
                    "Color": "stop",
                    "AsButton": False,
                    "AdvancedArgs": True,
                    "ArgsReminder": "请输入要翻译成哪种语言,默认为Chinese。",
                    "Function": HotReload(Markdown翻译指定语言),
                }
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")

    try:
        from crazy_functions.Vectorstore_QA import 知识库文件注入

        function_plugins.update(
            {
                "构建知识库(先上传文件素材,再运行此插件)": {
                    "Group": "对话",
                    "Color": "stop",
                    "AsButton": False,
                    "AdvancedArgs": True,
                    "ArgsReminder": "此处待注入的知识库名称id, 默认为default。文件进入知识库后可长期保存。可以通过再次调用本插件的方式,向知识库追加更多文档。",
                    "Function": HotReload(知识库文件注入),
                }
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")

    try:
        from crazy_functions.Vectorstore_QA import 读取知识库作答

        function_plugins.update(
            {
                "知识库文件注入(构建知识库后,再运行此插件)": {
                    "Group": "对话",
                    "Color": "stop",
                    "AsButton": False,
                    "AdvancedArgs": True,
                    "ArgsReminder": "待提取的知识库名称id, 默认为default, 您需要构建知识库后再运行此插件。",
                    "Function": HotReload(读取知识库作答),
                }
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")

    try:
        from crazy_functions.Interactive_Func_Template import 交互功能模板函数

        function_plugins.update(
            {
                "交互功能模板Demo函数(查找wallhaven.cc的壁纸)": {
                    "Group": "对话",
                    "Color": "stop",
                    "AsButton": False,
                    "Function": HotReload(交互功能模板函数),
                }
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")


    try:
        from toolbox import get_conf

        ENABLE_AUDIO = get_conf("ENABLE_AUDIO")
        if ENABLE_AUDIO:
            from crazy_functions.Audio_Assistant import Audio_Assistant

            function_plugins.update(
                {
                    "实时语音对话": {
                        "Group": "对话",
                        "Color": "stop",
                        "AsButton": True,
                        "Info": "这是一个时刻聆听着的语音对话助手 | 没有输入参数",
                        "Function": HotReload(Audio_Assistant),
                    }
                }
            )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")

    try:
        from crazy_functions.PDF_Translate_Nougat import 批量翻译PDF文档

        function_plugins.update(
            {
                "精准翻译PDF文档(NOUGAT)": {
                    "Group": "学术",
                    "Color": "stop",
                    "AsButton": False,
                    "Function": HotReload(批量翻译PDF文档),
                }
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")

    try:
        from crazy_functions.Dynamic_Function_Generate import Dynamic_Function_Generate

        function_plugins.update(
            {
                "动态代码解释器(CodeInterpreter)": {
                    "Group": "智能体",
                    "Color": "stop",
                    "AsButton": False,
                    "Function": HotReload(Dynamic_Function_Generate),
                }
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")

    # try:
    #     from crazy_functions.Multi_Agent_Legacy import Multi_Agent_Legacy终端
    #     function_plugins.update(
    #         {
    #             "AutoGenMulti_Agent_Legacy终端(仅供测试)": {
    #                 "Group": "智能体",
    #                 "Color": "stop",
    #                 "AsButton": False,
    #                 "Function": HotReload(Multi_Agent_Legacy终端),
    #             }
    #         }
    #     )
    # except:
    #     logger.error(trimmed_format_exc())
    #     logger.error("Load function plugin failed")

    try:
        from crazy_functions.Rag_Interface import Rag问答

        function_plugins.update(
            {
                "Rag智能召回": {
                    "Group": "对话",
                    "Color": "stop",
                    "AsButton": False,
                    "Info": "将问答数据记录到向量库中,作为长期参考。",
                    "Function": HotReload(Rag问答),
                },
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")

    # try:
    #     from crazy_functions.Document_Optimize import 自定义智能文档处理
    #     function_plugins.update(
    #         {
    #             "一键处理文档(支持自定义全文润色、降重等)": {
    #                 "Group": "学术",
    #                 "Color": "stop",
    #                 "AsButton": False,
    #                 "AdvancedArgs": True,
    #                 "ArgsReminder": "请输入处理指令和要求(可以详细描述),如:请帮我润色文本,要求幽默点。默认调用润色指令。",
    #                 "Info": "保留文档结构,智能处理文档内容 | 输入参数为文件路径",
    #                 "Function": HotReload(自定义智能文档处理)
    #             },
    #         }
    #     )
    # except:
    #     logger.error(trimmed_format_exc())
    #     logger.error("Load function plugin failed")



    try:
        from crazy_functions.Paper_Reading import 快速论文解读
        function_plugins.update(
            {
                "速读论文": {
                    "Group": "学术",
                    "Color": "stop",
                    "AsButton": False,
                    "Info": "上传一篇论文进行快速分析和解读 |  输入参数为论文路径或DOI/arXiv ID",
                    "Function": HotReload(快速论文解读),
                },
            }
        )
    except:
        logger.error(trimmed_format_exc())
        logger.error("Load function plugin failed")


    # try:
    #     from crazy_functions.高级功能函数模板 import 测试图表渲染
    #     function_plugins.update({
    #         "绘制逻辑关系(测试图表渲染)": {
    #             "Group": "智能体",
    #             "Color": "stop",
    #             "AsButton": True,
    #             "Function": HotReload(测试图表渲染)
    #         }
    #     })
    # except:
    #     logger.error(trimmed_format_exc())
    #     print('Load function plugin failed')


    """
    设置默认值:
    - 默认 Group = 对话
    - 默认 AsButton = True
    - 默认 AdvancedArgs = False
    - 默认 Color = secondary
    """
    for name, function_meta in function_plugins.items():
        if "Group" not in function_meta:
            function_plugins[name]["Group"] = "对话"
        if "AsButton" not in function_meta:
            function_plugins[name]["AsButton"] = True
        if "AdvancedArgs" not in function_meta:
            function_plugins[name]["AdvancedArgs"] = False
        if "Color" not in function_meta:
            function_plugins[name]["Color"] = "secondary"

    return function_plugins




def get_multiplex_button_functions():
    """多路复用主提交按钮的功能映射
    """
    return {
        "常规对话":
            "",

        "查互联网后回答":
            "查互联网后回答",

        "多模型对话":
            "询问多个GPT模型", # 映射到上面的 `询问多个GPT模型` 插件

        "智能召回 RAG":
            "Rag智能召回", # 映射到上面的 `Rag智能召回` 插件

        "多媒体查询":
            "多媒体智能体", # 映射到上面的 `多媒体智能体` 插件
    }


================================================
FILE: crazy_functions/Academic_Conversation.py
================================================
import re
import os
import asyncio
from typing import List, Dict, Tuple
from dataclasses import dataclass
from textwrap import dedent
from toolbox import CatchException, get_conf, update_ui, promote_file_to_downloadzone, get_log_folder, get_user
from toolbox import update_ui, CatchException, report_exception, write_history_to_file
from crazy_functions.review_fns.data_sources.semantic_source import SemanticScholarSource
from crazy_functions.review_fns.data_sources.arxiv_source import ArxivSource
from crazy_functions.review_fns.query_analyzer import QueryAnalyzer
from crazy_functions.review_fns.handlers.review_handler import 文献综述功能
from crazy_functions.review_fns.handlers.recommend_handler import 论文推荐功能
from crazy_functions.review_fns.handlers.qa_handler import 学术问答功能
from crazy_functions.review_fns.handlers.paper_handler import 单篇论文分析功能
from crazy_functions.Conversation_To_File import write_chat_to_file
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.review_fns.handlers.latest_handler import Arxiv最新论文推荐功能
from datetime import datetime

@CatchException
def 学术对话(txt: str, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: List,
           history: List, system_prompt: str, user_request: str):
    """主函数"""

    # 初始化数据源
    arxiv_source = ArxivSource()
    semantic_source = SemanticScholarSource(
        api_key=get_conf("SEMANTIC_SCHOLAR_KEY")
    )

    # 初始化处理器
    handlers = {
        "review": 文献综述功能(arxiv_source, semantic_source, llm_kwargs),
        "recommend": 论文推荐功能(arxiv_source, semantic_source, llm_kwargs),
        "qa": 学术问答功能(arxiv_source, semantic_source, llm_kwargs),
        "paper": 单篇论文分析功能(arxiv_source, semantic_source, llm_kwargs),
        "latest": Arxiv最新论文推荐功能(arxiv_source, semantic_source, llm_kwargs),
    }

    # 分析查询意图
    chatbot.append([None, "正在分析研究主题和查询要求..."])
    yield from update_ui(chatbot=chatbot, history=history)

    query_analyzer = QueryAnalyzer()
    search_criteria = yield from query_analyzer.analyze_query(txt, chatbot, llm_kwargs)
    handler = handlers.get(search_criteria.query_type)
    if not handler:
        handler = handlers["qa"]  # 默认使用QA处理器

    # 处理查询
    chatbot.append([None, f"使用{handler.__class__.__name__}处理...,可能需要您耐心等待3~5分钟..."])
    yield from update_ui(chatbot=chatbot, history=history)

    final_prompt = asyncio.run(handler.handle(
        criteria=search_criteria,
        chatbot=chatbot,
        history=history,
        system_prompt=system_prompt,
        llm_kwargs=llm_kwargs,
        plugin_kwargs=plugin_kwargs
    ))

    if final_prompt:
        # 检查是否是道歉提示
        if "很抱歉,我们未能找到" in final_prompt:
            chatbot.append([txt, final_prompt])
            yield from update_ui(chatbot=chatbot, history=history)
            return
        # 在 final_prompt 末尾添加用户原始查询要求
        final_prompt += dedent(f"""
        Original user query: "{txt}"

        IMPORTANT NOTE :
        - Your response must directly address the user's original user query above
        - While following the previous guidelines, prioritize answering what the user specifically asked
        - Make sure your response format and content align with the user's expectations
        - Do not translate paper titles, keep them in their original language
        - Do not generate a reference list in your response - references will be handled separately
        """)

        # 使用最终的prompt生成回答
        response = yield from request_gpt_model_in_new_thread_with_ui_alive(
            inputs=final_prompt,
            inputs_show_user=txt,
            llm_kwargs=llm_kwargs,
            chatbot=chatbot,
            history=[],
            sys_prompt=f"You are a helpful academic assistant. Response in Chinese by default unless specified language is required in the user's query."
        )

        # 1. 获取文献列表
        papers_list = handler.ranked_papers  # 直接使用原始论文数据

        # 在新的对话中添加格式化的参考文献列表
        if papers_list:
            references = ""
            for idx, paper in enumerate(papers_list, 1):
                # 构建作者列表
                authors = paper.authors[:3]
                if len(paper.authors) > 3:
                    authors.append("et al.")
                authors_str = ", ".join(authors)

                # 构建期刊指标信息
                metrics = []
                if hasattr(paper, 'if_factor') and paper.if_factor:
                    metrics.append(f"IF: {paper.if_factor}")
                if hasattr(paper, 'jcr_division') and paper.jcr_division:
                    metrics.append(f"JCR: {paper.jcr_division}")
                if hasattr(paper, 'cas_division') and paper.cas_division:
                    metrics.append(f"中科院分区: {paper.cas_division}")
                metrics_str = f" [{', '.join(metrics)}]" if metrics else ""

                # 构建DOI链接
                doi_link = ""
                if paper.doi:
                    if "arxiv.org" in str(paper.doi):
                        doi_url = paper.doi
                    else:
                        doi_url = f"https://doi.org/{paper.doi}"
                    doi_link = f" <a href='{doi_url}' target='_blank'>DOI: {paper.doi}</a>"

                # 构建完整的引用
                reference = f"[{idx}] {authors_str}. *{paper.title}*"
                if paper.venue_name:
                    reference += f". {paper.venue_name}"
                if paper.year:
                    reference += f", {paper.year}"
                reference += metrics_str
                if doi_link:
                    reference += f".{doi_link}"
                reference += "  \n"

                references += reference

            # 添加新的对话显示参考文献
            chatbot.append(["参考文献如下:", references])
            yield from update_ui(chatbot=chatbot, history=history)


        # 2. 保存为不同格式
        from .review_fns.conversation_doc.word_doc import WordFormatter
        from .review_fns.conversation_doc.word2pdf import WordToPdfConverter
        from .review_fns.conversation_doc.markdown_doc import MarkdownFormatter
        from .review_fns.conversation_doc.html_doc import HtmlFormatter

        # 创建保存目录
        save_dir =  get_log_folder(get_user(chatbot),  plugin_name='chatscholar')

        if not os.path.exists(save_dir):
            os.makedirs(save_dir)

        # 生成文件名
        def get_safe_filename(txt, max_length=10):
            # 获取文本前max_length个字符作为文件名
            filename = txt[:max_length].strip()
            # 移除不安全的文件名字符
            filename = re.sub(r'[\\/:*?"<>|]', '', filename)
            # 如果文件名为空,使用时间戳
            if not filename:
                filename = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
            return filename

        base_filename = get_safe_filename(txt)

        result_files = []  # 收集所有生成的文件
        pdf_path = None  # 用于跟踪PDF是否成功生成

        # 保存为Markdown
        try:
            md_formatter = MarkdownFormatter()
            md_content = md_formatter.create_document(txt, response, papers_list)
            result_file_md = write_history_to_file(
                history=[md_content],
                file_basename=f"markdown_{base_filename}.md"
            )
            result_files.append(result_file_md)
        except Exception as e:
            print(f"Markdown保存失败: {str(e)}")

        # 保存为HTML
        try:
            html_formatter = HtmlFormatter()
            html_content = html_formatter.create_document(txt, response, papers_list)
            result_file_html = write_history_to_file(
                history=[html_content],
                file_basename=f"html_{base_filename}.html"
            )
            result_files.append(result_file_html)
        except Exception as e:
            print(f"HTML保存失败: {str(e)}")

        # 保存为Word
        try:
            word_formatter = WordFormatter()
            try:
                doc = word_formatter.create_document(txt, response, papers_list)
            except Exception as e:
                print(f"Word文档内容生成失败: {str(e)}")
                raise e

            try:
                result_file_docx = os.path.join(
                    os.path.dirname(result_file_md) if result_file_md else save_dir,
                    f"docx_{base_filename}.docx"
                )
                doc.save(result_file_docx)
                result_files.append(result_file_docx)
                print(f"Word文档已保存到: {result_file_docx}")

                # 转换为PDF
                try:
                    pdf_path = WordToPdfConverter.convert_to_pdf(result_file_docx)
                    if pdf_path:
                        result_files.append(pdf_path)
                        print(f"PDF文档已生成: {pdf_path}")
                except Exception as e:
                    print(f"PDF转换失败: {str(e)}")

            except Exception as e:
                print(f"Word文档保存失败: {str(e)}")
                raise e

        except Exception as e:
            print(f"Word格式化失败: {str(e)}")
            import traceback
            print(f"详细错误信息: {traceback.format_exc()}")

        # 保存为BibTeX格式
        try:
            from .review_fns.conversation_doc.reference_formatter import ReferenceFormatter
            ref_formatter = ReferenceFormatter()
            bibtex_content = ref_formatter.create_document(papers_list)

            # 在与其他文件相同目录下创建BibTeX文件
            result_file_bib = os.path.join(
                os.path.dirname(result_file_md) if result_file_md else save_dir,
                f"references_{base_filename}.bib"
            )

            # 直接写入文件
            with open(result_file_bib, 'w', encoding='utf-8') as f:
                f.write(bibtex_content)

            result_files.append(result_file_bib)
            print(f"BibTeX文件已保存到: {result_file_bib}")
        except Exception as e:
            print(f"BibTeX格式保存失败: {str(e)}")

        # 保存为EndNote格式
        try:
            from .review_fns.conversation_doc.endnote_doc import EndNoteFormatter
            endnote_formatter = EndNoteFormatter()
            endnote_content = endnote_formatter.create_document(papers_list)

            # 在与其他文件相同目录下创建EndNote文件
            result_file_enw = os.path.join(
                os.path.dirname(result_file_md) if result_file_md else save_dir,
                f"references_{base_filename}.enw"
            )

            # 直接写入文件
            with open(result_file_enw, 'w', encoding='utf-8') as f:
                f.write(endnote_content)

            result_files.append(result_file_enw)
            print(f"EndNote文件已保存到: {result_file_enw}")
        except Exception as e:
            print(f"EndNote格式保存失败: {str(e)}")

        # 添加所有文件到下载区
        success_files = []
        for file in result_files:
            try:
                promote_file_to_downloadzone(file, chatbot=chatbot)
                success_files.append(os.path.basename(file))
            except Exception as e:
                print(f"文件添加到下载区失败: {str(e)}")

        # 更新成功提示消息
        if success_files:
            chatbot.append(["保存对话记录成功,bib和enw文件支持导入到EndNote、Zotero、JabRef、Mendeley等文献管理软件,HTML文件支持在浏览器中打开,里面包含详细论文源信息", "对话已保存并添加到下载区,可以在下载区找到相关文件"])
        else:
            chatbot.append(["保存对话记录", "所有格式的保存都失败了,请检查错误日志。"])

        yield from update_ui(chatbot=chatbot, history=history)
    else:
        report_exception(chatbot, history, a=f"处理失败", b=f"请尝试其他查询")
        yield from update_ui(chatbot=chatbot, history=history)


================================================
FILE: crazy_functions/Arxiv_Downloader.py
================================================
import re, requests, unicodedata, os
from toolbox import update_ui, get_log_folder
from toolbox import write_history_to_file, promote_file_to_downloadzone
from toolbox import CatchException, report_exception, get_conf
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from loguru import logger

def download_arxiv_(url_pdf):
    if 'arxiv.org' not in url_pdf:
        if ('.' in url_pdf) and ('/' not in url_pdf):
            new_url = 'https://arxiv.org/abs/'+url_pdf
            logger.info('下载编号:', url_pdf, '自动定位:', new_url)
            # download_arxiv_(new_url)
            return download_arxiv_(new_url)
        else:
            logger.info('不能识别的URL!')
            return None
    if 'abs' in url_pdf:
        url_pdf = url_pdf.replace('abs', 'pdf')
        url_pdf = url_pdf + '.pdf'

    url_abs = url_pdf.replace('.pdf', '').replace('pdf', 'abs')
    title, other_info = get_name(_url_=url_abs)

    paper_id = title.split()[0]  # '[1712.00559]'
    if '2' in other_info['year']:
        title = other_info['year'] + ' ' + title

    known_conf = ['NeurIPS', 'NIPS', 'Nature', 'Science', 'ICLR', 'AAAI']
    for k in known_conf:
        if k in other_info['comment']:
            title = k + ' ' + title

    download_dir = get_log_folder(plugin_name='arxiv')
    os.makedirs(download_dir, exist_ok=True)

    title_str = title.replace('?', '?')\
        .replace(':', ':')\
        .replace('\"', '“')\
        .replace('\n', '')\
        .replace('  ', ' ')\
        .replace('  ', ' ')

    requests_pdf_url = url_pdf
    file_path = download_dir+title_str

    logger.info('下载中')
    proxies = get_conf('proxies')
    r = requests.get(requests_pdf_url, proxies=proxies)
    with open(file_path, 'wb+') as f:
        f.write(r.content)
    logger.info('下载完成')

    x = "%s  %s %s.bib" % (paper_id, other_info['year'], other_info['authors'])
    x = x.replace('?', '?')\
        .replace(':', ':')\
        .replace('\"', '“')\
        .replace('\n', '')\
        .replace('  ', ' ')\
        .replace('  ', ' ')
    return file_path, other_info


def get_name(_url_):
    from bs4 import BeautifulSoup
    logger.info('正在获取文献名!')
    logger.info(_url_)

    proxies = get_conf('proxies')
    res = requests.get(_url_, proxies=proxies)

    bs = BeautifulSoup(res.text, 'html.parser')
    other_details = {}

    # get year
    try:
        year = bs.find_all(class_='dateline')[0].text
        year = re.search(r'(\d{4})', year, re.M | re.I).group(1)
        other_details['year'] = year
        abstract = bs.find_all(class_='abstract mathjax')[0].text
        other_details['abstract'] = abstract
    except:
        other_details['year'] = ''
        logger.info('年份获取失败')

    # get author
    try:
        authors = bs.find_all(class_='authors')[0].text
        authors = authors.split('Authors:')[1]
        other_details['authors'] = authors
    except:
        other_details['authors'] = ''
        logger.info('authors获取失败')

    # get comment
    try:
        comment = bs.find_all(class_='metatable')[0].text
        real_comment = None
        for item in comment.replace('\n', ' ').split('   '):
            if 'Comments' in item:
                real_comment = item
        if real_comment is not None:
            other_details['comment'] = real_comment
        else:
            other_details['comment'] = ''
    except:
        other_details['comment'] = ''
        logger.info('年份获取失败')

    title_str = BeautifulSoup(
        res.text, 'html.parser').find('title').contents[0]
    logger.info('获取成功:', title_str)
    # arxiv_recall[_url_] = (title_str+'.pdf', other_details)
    # with open('./arxiv_recall.pkl', 'wb') as f:
    #     pickle.dump(arxiv_recall, f)

    return title_str+'.pdf', other_details



@CatchException
def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):

    CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……"
    import glob
    import os

    # 基本信息:功能、贡献者
    chatbot.append(["函数插件功能?", CRAZY_FUNCTION_INFO])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
        import bs4
    except:
        report_exception(chatbot, history,
            a = f"解析项目: {txt}",
            b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade beautifulsoup4```。")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # 清空历史,以免输入溢出
    history = []

    # 提取摘要,下载PDF文档
    try:
        pdf_path, info = download_arxiv_(txt)
    except:
        report_exception(chatbot, history,
            a = f"解析项目: {txt}",
            b = f"下载pdf文件未成功")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # 翻译摘要等
    i_say =            f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}"
    i_say_show_user =  f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}'
    chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
    msg = '正常'
    # ** gpt request **
    # 单线,获取文章meta信息
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say,
        inputs_show_user=i_say_show_user,
        llm_kwargs=llm_kwargs,
        chatbot=chatbot, history=[],
        sys_prompt="Your job is to collect information from materials and translate to Chinese。",
    )

    chatbot[-1] = (i_say_show_user, gpt_say)
    history.append(i_say_show_user); history.append(gpt_say)
    yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
    res = write_history_to_file(history)
    promote_file_to_downloadzone(res, chatbot=chatbot)
    promote_file_to_downloadzone(pdf_path, chatbot=chatbot)

    chatbot.append(("完成了吗?", res + "\n\nPDF文件也已经下载"))
    yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面


================================================
FILE: crazy_functions/Audio_Assistant.py
================================================
from toolbox import update_ui
from toolbox import CatchException, get_conf, markdown_convertion
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.crazy_utils import input_clipping
from crazy_functions.agent_fns.watchdog import WatchDog
from crazy_functions.live_audio.aliyunASR import AliyunASR
from loguru import logger

import threading, time
import numpy as np
import json
import re


def chatbot2history(chatbot):
    history = []
    for c in chatbot:
        for q in c:
            if q in ["[ 请讲话 ]", "[ 等待GPT响应 ]", "[ 正在等您说完问题 ]"]:
                continue
            elif q.startswith("[ 正在等您说完问题 ]"):
                continue
            else:
                history.append(q.strip('<div class="markdown-body">').strip('</div>').strip('<p>').strip('</p>'))
    return history

def visualize_audio(chatbot, audio_shape):
    if len(chatbot) == 0: chatbot.append(["[ 请讲话 ]", "[ 正在等您说完问题 ]"])
    chatbot[-1] = list(chatbot[-1])
    p1 = '「'
    p2 = '」'
    chatbot[-1][-1] = re.sub(p1+r'(.*)'+p2, '', chatbot[-1][-1])
    chatbot[-1][-1] += (p1+f"`{audio_shape}`"+p2)

class AsyncGptTask():
    def __init__(self) -> None:
        self.observe_future = []
        self.observe_future_chatbot_index = []

    def gpt_thread_worker(self, i_say, llm_kwargs, history, sys_prompt, observe_window, index):
        try:
            MAX_TOKEN_ALLO = 2560
            i_say, history = input_clipping(i_say, history, max_token_limit=MAX_TOKEN_ALLO)
            gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=history, sys_prompt=sys_prompt,
                                                            observe_window=observe_window[index], console_silence=True)
        except ConnectionAbortedError as token_exceed_err:
            logger.error('至少一个线程任务Token溢出而失败', e)
        except Exception as e:
            logger.error('至少一个线程任务意外失败', e)

    def add_async_gpt_task(self, i_say, chatbot_index, llm_kwargs, history, system_prompt):
        self.observe_future.append([""])
        self.observe_future_chatbot_index.append(chatbot_index)
        cur_index = len(self.observe_future)-1
        th_new = threading.Thread(target=self.gpt_thread_worker, args=(i_say, llm_kwargs, history, system_prompt, self.observe_future, cur_index))
        th_new.daemon = True
        th_new.start()

    def update_chatbot(self, chatbot):
        for of, ofci in zip(self.observe_future, self.observe_future_chatbot_index):
            try:
                chatbot[ofci] = list(chatbot[ofci])
                chatbot[ofci][1] = markdown_convertion(of[0])
            except:
                self.observe_future = []
                self.observe_future_chatbot_index = []
        return chatbot

class InterviewAssistant(AliyunASR):
    def __init__(self):
        self.capture_interval = 0.5 # second
        self.stop = False
        self.parsed_text = ""   # 下个句子中已经说完的部分, 由 test_on_result_chg() 写入
        self.parsed_sentence = ""   # 某段话的整个句子, 由 test_on_sentence_end() 写入
        self.buffered_sentence = ""    #
        self.audio_shape = ""   # 音频的可视化表现, 由 audio_convertion_thread() 写入
        self.event_on_result_chg = threading.Event()
        self.event_on_entence_end = threading.Event()
        self.event_on_commit_question = threading.Event()

    def __del__(self):
        self.stop = True
        self.stop_msg = ""
        self.commit_wd.kill_dog = True
        self.plugin_wd.kill_dog = True

    def init(self, chatbot):
        # 初始化音频采集线程
        self.captured_audio = np.array([])
        self.keep_latest_n_second = 10
        self.commit_after_pause_n_second = 2.0
        self.ready_audio_flagment = None
        self.stop = False
        self.plugin_wd = WatchDog(timeout=5, bark_fn=self.__del__, msg="程序终止")
        self.aut = threading.Thread(target=self.audio_convertion_thread, args=(chatbot._cookies['uuid'],))
        self.aut.daemon = True
        self.aut.start()
        # th2 = threading.Thread(target=self.audio2txt_thread, args=(chatbot._cookies['uuid'],))
        # th2.daemon = True
        # th2.start()

    def no_audio_for_a_while(self):
        if len(self.buffered_sentence) < 7: # 如果一句话小于7个字,暂不提交
            self.commit_wd.begin_watch()
        else:
            self.event_on_commit_question.set()

    def begin(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
        # main plugin function
        self.init(chatbot)
        chatbot.append(["[ 请讲话 ]", "[ 正在等您说完问题 ]"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        self.plugin_wd.begin_watch()
        self.agt = AsyncGptTask()
        self.commit_wd = WatchDog(timeout=self.commit_after_pause_n_second, bark_fn=self.no_audio_for_a_while, interval=0.2)
        self.commit_wd.begin_watch()

        while not self.stop:
            self.event_on_result_chg.wait(timeout=0.25)  # run once every 0.25 second
            chatbot = self.agt.update_chatbot(chatbot)   # 将子线程的gpt结果写入chatbot
            history = chatbot2history(chatbot)
            yield from update_ui(chatbot=chatbot, history=history)      # 刷新界面
            self.plugin_wd.feed()

            if self.event_on_result_chg.is_set():
                # called when some words have finished
                self.event_on_result_chg.clear()
                chatbot[-1] = list(chatbot[-1])
                chatbot[-1][0] = self.buffered_sentence + self.parsed_text
                history = chatbot2history(chatbot)
                yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
                self.commit_wd.feed()

            if self.event_on_entence_end.is_set():
                # called when a sentence has ended
                self.event_on_entence_end.clear()
                self.parsed_text = self.parsed_sentence
                self.buffered_sentence += self.parsed_text
                chatbot[-1] = list(chatbot[-1])
                chatbot[-1][0] = self.buffered_sentence
                history = chatbot2history(chatbot)
                yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

            if self.event_on_commit_question.is_set():
                # called when a question should be commited
                self.event_on_commit_question.clear()
                if len(self.buffered_sentence) == 0: raise RuntimeError

                self.commit_wd.begin_watch()
                chatbot[-1] = list(chatbot[-1])
                chatbot[-1] = [self.buffered_sentence, "[ 等待GPT响应 ]"]
                yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
                # add gpt task 创建子线程请求gpt,避免线程阻塞
                history = chatbot2history(chatbot)
                self.agt.add_async_gpt_task(self.buffered_sentence, len(chatbot)-1, llm_kwargs, history, system_prompt)

                self.buffered_sentence = ""
                chatbot.append(["[ 请讲话 ]", "[ 正在等您说完问题 ]"])
                yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

            if not self.event_on_result_chg.is_set() and not self.event_on_entence_end.is_set() and not self.event_on_commit_question.is_set():
                visualize_audio(chatbot, self.audio_shape)
                yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

        if len(self.stop_msg) != 0:
            raise RuntimeError(self.stop_msg)



@CatchException
def Audio_Assistant(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # pip install -U openai-whisper
    chatbot.append(["对话助手函数插件:使用时,双手离开鼠标键盘吧", "音频助手, 正在听您讲话(点击“停止”键可终止程序)..."])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
        import nls
        from scipy import io
    except:
        chatbot.append(["导入依赖失败", "使用该模块需要额外依赖, 安装方法:```pip install --upgrade aliyun-python-sdk-core==2.13.3 pyOpenSSL webrtcvad scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git```"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    APPKEY = get_conf('ALIYUN_APPKEY')
    if APPKEY == "":
        chatbot.append(["导入依赖失败", "没有阿里云语音识别APPKEY和TOKEN, 详情见https://help.aliyun.com/document_detail/450255.html"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
    ia = InterviewAssistant()
    yield from ia.begin(llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)


================================================
FILE: crazy_functions/Audio_Summary.py
================================================
from toolbox import CatchException, report_exception, select_api_key, update_ui, get_conf
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from toolbox import write_history_to_file, promote_file_to_downloadzone, get_log_folder

def split_audio_file(filename, split_duration=1000):
    """
    根据给定的切割时长将音频文件切割成多个片段。

    Args:
        filename (str): 需要被切割的音频文件名。
        split_duration (int, optional): 每个切割音频片段的时长(以秒为单位)。默认值为1000。

    Returns:
        filelist (list): 一个包含所有切割音频片段文件路径的列表。

    """
    from moviepy.editor import AudioFileClip
    import os
    os.makedirs(f"{get_log_folder(plugin_name='audio')}/mp3/cut/", exist_ok=True)  # 创建存储切割音频的文件夹

    # 读取音频文件
    audio = AudioFileClip(filename)

    # 计算文件总时长和切割点
    total_duration = audio.duration
    split_points = list(range(0, int(total_duration), split_duration))
    split_points.append(int(total_duration))
    filelist = []

    # 切割音频文件
    for i in range(len(split_points) - 1):
        start_time = split_points[i]
        end_time = split_points[i + 1]
        split_audio = audio.subclip(start_time, end_time)
        split_audio.write_audiofile(f"{get_log_folder(plugin_name='audio')}/mp3/cut/{filename[0]}_{i}.mp3")
        filelist.append(f"{get_log_folder(plugin_name='audio')}/mp3/cut/{filename[0]}_{i}.mp3")

    audio.close()
    return filelist

def AnalyAudio(parse_prompt, file_manifest, llm_kwargs, chatbot, history):
    import os, requests
    from moviepy.editor import AudioFileClip
    from request_llms.bridge_all import model_info

    # 设置OpenAI密钥和模型
    api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
    chat_endpoint = model_info[llm_kwargs['llm_model']]['endpoint']

    whisper_endpoint = chat_endpoint.replace('chat/completions', 'audio/transcriptions')
    url = whisper_endpoint
    headers = {
        'Authorization': f"Bearer {api_key}"
    }

    os.makedirs(f"{get_log_folder(plugin_name='audio')}/mp3/", exist_ok=True)
    for index, fp in enumerate(file_manifest):
        audio_history = []
        # 提取文件扩展名
        ext = os.path.splitext(fp)[1]
        # 提取视频中的音频
        if ext not in [".mp3", ".wav", ".m4a", ".mpga"]:
            audio_clip = AudioFileClip(fp)
            audio_clip.write_audiofile(f"{get_log_folder(plugin_name='audio')}/mp3/output{index}.mp3")
            fp = f"{get_log_folder(plugin_name='audio')}/mp3/output{index}.mp3"
        # 调用whisper模型音频转文字
        voice = split_audio_file(fp)
        for j, i in enumerate(voice):
            with open(i, 'rb') as f:
                file_content = f.read()  # 读取文件内容到内存
                files = {
                    'file': (os.path.basename(i), file_content),
                }
                data = {
                    "model": "whisper-1",
                    "prompt": parse_prompt,
                    'response_format': "text"
                }

            chatbot.append([f"将 {i} 发送到openai音频解析终端 (whisper),当前参数:{parse_prompt}", "正在处理 ..."])
            yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
            proxies = get_conf('proxies')
            response = requests.post(url, headers=headers, files=files, data=data, proxies=proxies).text

            chatbot.append(["音频解析结果", response])
            history.extend(["音频解析结果", response])
            yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

            i_say = f'请对下面的音频片段做概述,音频内容是 ```{response}```'
            i_say_show_user = f'第{index + 1}段音频的第{j + 1} / {len(voice)}片段。'
            gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs=i_say,
                inputs_show_user=i_say_show_user,
                llm_kwargs=llm_kwargs,
                chatbot=chatbot,
                history=[],
                sys_prompt=f"总结音频。音频文件名{fp}"
            )

            chatbot[-1] = (i_say_show_user, gpt_say)
            history.extend([i_say_show_user, gpt_say])
            audio_history.extend([i_say_show_user, gpt_say])

        # 已经对该文章的所有片段总结完毕,如果文章被切分了
        result = "".join(audio_history)
        if len(audio_history) > 1:
            i_say = f"根据以上的对话,使用中文总结音频“{result}”的主要内容。"
            i_say_show_user = f'第{index + 1}段音频的主要内容:'
            gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs=i_say,
                inputs_show_user=i_say_show_user,
                llm_kwargs=llm_kwargs,
                chatbot=chatbot,
                history=audio_history,
                sys_prompt="总结文章。"
            )
            history.extend([i_say, gpt_say])
            audio_history.extend([i_say, gpt_say])

        res = write_history_to_file(history)
        promote_file_to_downloadzone(res, chatbot=chatbot)
        chatbot.append((f"第{index + 1}段音频完成了吗?", res))
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    # 删除中间文件夹
    import shutil
    shutil.rmtree(f"{get_log_folder(plugin_name='audio')}/mp3")
    res = write_history_to_file(history)
    promote_file_to_downloadzone(res, chatbot=chatbot)
    chatbot.append(("所有音频都总结完成了吗?", res))
    yield from update_ui(chatbot=chatbot, history=history)


@CatchException
def Audio_Summary(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, WEB_PORT):
    import glob, os

    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
        "Audio_Summary内容,函数插件贡献者: dalvqw & BinaryHusky"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    try:
        from moviepy.editor import AudioFileClip
    except:
        report_exception(chatbot, history,
                         a=f"解析项目: {txt}",
                         b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade moviepy```。")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # 清空历史,以免输入溢出
    history = []

    # 检测输入参数,如没有给定输入参数,直接退出
    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # 搜索需要处理的文件清单
    extensions = ['.mp4', '.m4a', '.wav', '.mpga', '.mpeg', '.mp3', '.avi', '.mkv', '.flac', '.aac']

    if txt.endswith(tuple(extensions)):
        file_manifest = [txt]
    else:
        file_manifest = []
        for extension in extensions:
            file_manifest.extend(glob.glob(f'{project_folder}/**/*{extension}', recursive=True))

    # 如果没找到任何文件
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何音频或视频文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # 开始正式执行任务
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    parse_prompt = plugin_kwargs.get("advanced_arg", '将音频解析为简体中文')
    yield from AnalyAudio(parse_prompt, file_manifest, llm_kwargs, chatbot, history)

    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面


================================================
FILE: crazy_functions/Commandline_Assistant.py
================================================
from toolbox import CatchException, update_ui, gen_time_str
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.crazy_utils import input_clipping
import copy, json

@CatchException
def Commandline_Assistant(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             输入栏用户输入的文本, 例如需要翻译的一段话, 再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
    plugin_kwargs   插件模型的参数, 暂时没有用武之地
    chatbot         聊天显示框的句柄, 用于显示给用户
    history         聊天历史, 前情提要
    system_prompt   给gpt的静默提醒
    user_request    当前用户的请求信息(IP地址等)
    """
    # 清空历史, 以免输入溢出
    history = []

    # 输入
    i_say = "请写bash命令实现以下功能:" + txt
    # 开始
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=txt,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
        sys_prompt="你是一个Linux大师级用户。注意,当我要求你写bash命令时,尽可能地仅用一行命令解决我的要求。"
    )
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新


================================================
FILE: crazy_functions/Conversation_To_File.py
================================================
import re
from toolbox import CatchException, update_ui, promote_file_to_downloadzone, get_log_folder, get_user, update_ui_latest_msg
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
from loguru import logger

f_prefix = 'GPT-Academic对话存档'

def write_chat_to_file_legacy(chatbot, history=None, file_name=None):
    """
    将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。
    """
    import os
    import time
    from themes.theme import advanced_css

    if (file_name is not None) and (file_name != "") and (not file_name.endswith('.html')): file_name += '.html'
    else: file_name = None

    if file_name is None:
        file_name = f_prefix + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html'
    fp = os.path.join(get_log_folder(get_user(chatbot), plugin_name='chat_history'), file_name)

    with open(fp, 'w', encoding='utf8') as f:
        from textwrap import dedent
        form = dedent("""
        <!DOCTYPE html><head><meta charset="utf-8"><title>对话存档</title><style>{CSS}</style></head>
        <body>
        <div class="test_temp1" style="width:10%; height: 500px; float:left;"></div>
        <div class="test_temp2" style="width:80%;padding: 40px;float:left;padding-left: 20px;padding-right: 20px;box-shadow: rgba(0, 0, 0, 0.2) 0px 0px 8px 8px;border-radius: 10px;">
            <div class="chat-body" style="display: flex;justify-content: center;flex-direction: column;align-items: center;flex-wrap: nowrap;">
                {CHAT_PREVIEW}
                <div></div>
                <div></div>
                <div style="text-align: center;width:80%;padding: 0px;float:left;padding-left:20px;padding-right:20px;box-shadow: rgba(0, 0, 0, 0.05) 0px 0px 1px 2px;border-radius: 1px;">对话(原始数据)</div>
                {HISTORY_PREVIEW}
            </div>
        </div>
        <div class="test_temp3" style="width:10%; height: 500px; float:left;"></div>
        </body>
        """)

        qa_from = dedent("""
        <div class="QaBox" style="width:80%;padding: 20px;margin-bottom: 20px;box-shadow: rgb(0 255 159 / 50%) 0px 0px 1px 2px;border-radius: 4px;">
            <div class="Question" style="border-radius: 2px;">{QUESTION}</div>
            <hr color="blue" style="border-top: dotted 2px #ccc;">
            <div class="Answer" style="border-radius: 2px;">{ANSWER}</div>
        </div>
        """)

        history_from = dedent("""
        <div class="historyBox" style="width:80%;padding: 0px;float:left;padding-left:20px;padding-right:20px;box-shadow: rgba(0, 0, 0, 0.05) 0px 0px 1px 2px;border-radius: 1px;">
            <div class="entry" style="border-radius: 2px;">{ENTRY}</div>
        </div>
        """)
        CHAT_PREVIEW_BUF = ""
        for i, contents in enumerate(chatbot):
            question, answer = contents[0], contents[1]
            if question is None: question = ""
            try: question = str(question)
            except: question = ""
            if answer is None: answer = ""
            try: answer = str(answer)
            except: answer = ""
            CHAT_PREVIEW_BUF += qa_from.format(QUESTION=question, ANSWER=answer)

        HISTORY_PREVIEW_BUF = ""
        for h in history:
            HISTORY_PREVIEW_BUF += history_from.format(ENTRY=h)
        html_content = form.format(CHAT_PREVIEW=CHAT_PREVIEW_BUF, HISTORY_PREVIEW=HISTORY_PREVIEW_BUF, CSS=advanced_css)
        f.write(html_content)

    promote_file_to_downloadzone(fp, rename_file=file_name, chatbot=chatbot)
    return '对话历史写入:' + fp

def write_chat_to_file(chatbot, history=None, file_name=None):
    """
    将对话记录history以多种格式(HTML、Word、Markdown)写入文件中。如果没有指定文件名,则使用当前时间生成文件名。

    Args:
        chatbot: 聊天机器人对象,包含对话内容
        history: 对话历史记录
        file_name: 指定的文件名,如果为None则使用时间戳

    Returns:
        str: 提示信息,包含文件保存路径
    """
    import os
    import time
    import asyncio
    import aiofiles
    from toolbox import promote_file_to_downloadzone
    from crazy_functions.doc_fns.conversation_doc.excel_doc import save_chat_tables
    from crazy_functions.doc_fns.conversation_doc.html_doc import HtmlFormatter
    from crazy_functions.doc_fns.conversation_doc.markdown_doc import MarkdownFormatter
    from crazy_functions.doc_fns.conversation_doc.word_doc import WordFormatter
    from crazy_functions.doc_fns.conversation_doc.txt_doc import TxtFormatter
    from crazy_functions.doc_fns.conversation_doc.word2pdf import WordToPdfConverter

    async def save_html():
        try:
            html_formatter = HtmlFormatter(chatbot, history)
            html_content = html_formatter.create_document()
            html_file = os.path.join(save_dir, base_name + '.html')
            async with aiofiles.open(html_file, 'w', encoding='utf8') as f:
                await f.write(html_content)
            return html_file
        except Exception as e:
            print(f"保存HTML格式失败: {str(e)}")
            return None

    async def save_word():
        try:
            word_formatter = WordFormatter()
            doc = word_formatter.create_document(history)
            docx_file = os.path.join(save_dir, base_name + '.docx')
            # 由于python-docx不支持异步,使用线程池执行
            loop = asyncio.get_event_loop()
            await loop.run_in_executor(None, doc.save, docx_file)
            return docx_file
        except Exception as e:
            print(f"保存Word格式失败: {str(e)}")
            return None
    async def save_pdf(docx_file):
        try:
            if docx_file:
                # 获取文件名和保存路径
                pdf_file = os.path.join(save_dir, base_name + '.pdf')

                # 在线程池中执行转换
                loop = asyncio.get_event_loop()
                pdf_file = await loop.run_in_executor(
                    None,
                    WordToPdfConverter.convert_to_pdf,
                    docx_file
                    # save_dir
                )

                return pdf_file

        except Exception as e:
            print(f"保存PDF格式失败: {str(e)}")
            return None

    async def save_markdown():
        try:
            md_formatter = MarkdownFormatter()
            md_content = md_formatter.create_document(history)
            md_file = os.path.join(save_dir, base_name + '.md')
            async with aiofiles.open(md_file, 'w', encoding='utf8') as f:
                await f.write(md_content)
            return md_file
        except Exception as e:
            print(f"保存Markdown格式失败: {str(e)}")
            return None

    async def save_txt():
        try:
            txt_formatter = TxtFormatter()
            txt_content = txt_formatter.create_document(history)
            txt_file = os.path.join(save_dir, base_name + '.txt')
            async with aiofiles.open(txt_file, 'w', encoding='utf8') as f:
                await f.write(txt_content)
            return txt_file
        except Exception as e:
            print(f"保存TXT格式失败: {str(e)}")
            return None

    async def main():
        # 并发执行所有保存任务
        html_task = asyncio.create_task(save_html())
        word_task = asyncio.create_task(save_word())
        md_task = asyncio.create_task(save_markdown())
        txt_task = asyncio.create_task(save_txt())

        # 等待所有任务完成
        html_file = await html_task
        docx_file = await word_task
        md_file = await md_task
        txt_file = await txt_task

        # PDF转换需要等待word文件生成完成
        pdf_file = await save_pdf(docx_file)
        # 收集所有成功生成的文件
        result_files = [f for f in [html_file, docx_file, md_file, txt_file, pdf_file] if f]

        # 保存Excel表格
        excel_files = save_chat_tables(history, save_dir, base_name)
        result_files.extend(excel_files)

        return result_files

    # 生成时间戳
    timestamp = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())

    # 获取保存目录
    save_dir = get_log_folder(get_user(chatbot), plugin_name='chat_history')

    # 处理文件名
    base_name = file_name if file_name else f"聊天记录_{timestamp}"

    # 运行异步任务
    result_files = asyncio.run(main())

    # 将生成的文件添加到下载区
    for file in result_files:
        promote_file_to_downloadzone(file, rename_file=os.path.basename(file), chatbot=chatbot)

    # 如果没有成功保存任何文件,返回错误信息
    if not result_files:
        return "保存对话记录失败,请检查错误日志"

    ext_list = [os.path.splitext(f)[1] for f in result_files]
    # 返回成功信息和文件路径
    return f"对话历史已保存至以下格式文件:" + "、".join(ext_list)

def gen_file_preview(file_name):
    try:
        with open(file_name, 'r', encoding='utf8') as f:
            file_content = f.read()
        # pattern to match the text between <head> and </head>
        pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
        file_content = re.sub(pattern, '', file_content)
        html, history = file_content.split('<hr color="blue"> \n\n 对话数据 (无渲染):\n')
        history = history.strip('<code>')
        history = history.strip('</code>')
        history = history.split("\n>>>")
        return list(filter(lambda x:x!="", history))[0][:100]
    except:
        return ""

def read_file_to_chat(chatbot, history, file_name):
    with open(file_name, 'r', encoding='utf8') as f:
        file_content = f.read()
    from bs4 import BeautifulSoup
    soup = BeautifulSoup(file_content, 'lxml')
    # 提取QaBox信息
    chatbot.clear()
    qa_box_list = []
    qa_boxes = soup.find_all("div", class_="QaBox")
    for box in qa_boxes:
        question = box.find("div", class_="Question").get_text(strip=False)
        answer = box.find("div", class_="Answer").get_text(strip=False)
        qa_box_list.append({"Question": question, "Answer": answer})
        chatbot.append([question, answer])
    # 提取historyBox信息
    history_box_list = []
    history_boxes = soup.find_all("div", class_="historyBox")
    for box in history_boxes:
        entry = box.find("div", class_="entry").get_text(strip=False)
        history_box_list.append(entry)
    history = history_box_list
    chatbot.append([None, f"[Local Message] 载入对话{len(qa_box_list)}条,上下文{len(history)}条。"])
    return chatbot, history

@CatchException
def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
    plugin_kwargs   插件模型的参数,暂时没有用武之地
    chatbot         聊天显示框的句柄,用于显示给用户
    history         聊天历史,前情提要
    system_prompt   给gpt的静默提醒
    user_request    当前用户的请求信息(IP地址等)
    """
    file_name = plugin_kwargs.get("file_name", None)

    chatbot.append((None, f"[Local Message] {write_chat_to_file_legacy(chatbot, history, file_name)},您可以调用下拉菜单中的“载入对话历史存档”还原当下的对话。"))
    try:
        chatbot.append((None, f"[Local Message] 正在尝试生成pdf以及word格式的对话存档,请稍等..."))
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求需要一段时间,我们先及时地做一次界面更新
        lastmsg = f"[Local Message] {write_chat_to_file(chatbot, history, file_name)}。" \
              f"您可以调用下拉菜单中的“载入对话历史会话”还原当下的对话,请注意,目前只支持html格式载入历史。" \
              f"当模型回答中存在表格,将提取表格内容存储为Excel的xlsx格式,如果你提供一些数据,然后输入指令要求模型帮你整理为表格" \
              f"(如“请帮我将下面的数据整理为表格:”),再利用此插件就可以获取到Excel表格。"
        yield from update_ui_latest_msg(lastmsg, chatbot, history) # 刷新界面 # 由于请求需要一段时间,我们先及时地做一次界面更新
    except Exception as e:
        logger.exception(f"已完成对话存档(pdf和word格式的对话存档生成未成功)。{str(e)}")
        lastmsg = "已完成对话存档(pdf和word格式的对话存档生成未成功)。"
        yield from update_ui_latest_msg(lastmsg, chatbot, history) # 刷新界面 # 由于请求需要一段时间,我们先及时地做一次界面更新
    return

class Conversation_To_File_Wrap(GptAcademicPluginTemplate):
    def __init__(self):
        """
        请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
        """
        pass

    def define_arg_selection_menu(self):
        """
        定义插件的二级选项菜单

        第一个参数,名称`file_name`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
        """
        gui_definition = {
            "file_name": ArgProperty(title="保存文件名", description="输入对话存档文件名,留空则使用时间作为文件名", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
        }
        return gui_definition

    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        """
        执行插件
        """
        yield from 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)





def hide_cwd(str):
    import os
    current_path = os.getcwd()
    replace_path = "."
    return str.replace(current_path, replace_path)

@CatchException
def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
    plugin_kwargs   插件模型的参数,暂时没有用武之地
    chatbot         聊天显示框的句柄,用于显示给用户
    history         聊天历史,前情提要
    system_prompt   给gpt的静默提醒
    user_request    当前用户的请求信息(IP地址等)
    """
    from crazy_functions.crazy_utils import get_files_from_everything
    success, file_manifest, _ = get_files_from_everything(txt, type='.html')

    if not success:
        if txt == "": txt = '空空如也的输入栏'
        import glob
        local_history = "<br/>".join([
            "`"+hide_cwd(f)+f" ({gen_file_preview(f)})"+"`"
            for f in glob.glob(
                f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html',
                recursive=True
            )])
        chatbot.append([f"正在查找对话历史文件(html格式): {txt}", f"找不到任何html文件: {txt}。但本地存储了以下历史文件,您可以将任意一个文件路径粘贴到输入区,然后重试:<br/>{local_history}"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    try:
        chatbot, history = read_file_to_chat(chatbot, history, file_manifest[0])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
    except:
        chatbot.append([f"载入对话历史文件", f"对话历史文件损坏!"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

@CatchException
def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
    plugin_kwargs   插件模型的参数,暂时没有用武之地
    chatbot         聊天显示框的句柄,用于显示给用户
    history         聊天历史,前情提要
    system_prompt   给gpt的静默提醒
    user_request    当前用户的请求信息(IP地址等)
    """

    import glob, os
    local_history = "<br/>".join([
        "`"+hide_cwd(f)+"`"
        for f in glob.glob(
            f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html', recursive=True
        )])
    for f in glob.glob(f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html', recursive=True):
        os.remove(f)
    chatbot.append([f"删除所有历史对话文件", f"已删除<br/>{local_history}"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
    return


================================================
FILE: crazy_functions/Document_Conversation.py
================================================
import os
import threading
import time
from dataclasses import dataclass
from typing import List, Tuple, Dict, Generator
from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
from crazy_functions.rag_fns.rag_file_support import extract_text
from request_llms.bridge_all import model_info
from toolbox import update_ui, CatchException, report_exception
from shared_utils.fastapi_server import validate_path_safety


@dataclass
class FileFragment:
    """文件片段数据类,用于组织处理单元"""
    file_path: str
    content: str
    rel_path: str
    fragment_index: int
    total_fragments: int


class BatchDocumentSummarizer:
    """优化的文档总结器 - 批处理版本"""

    def __init__(self, llm_kwargs: Dict, query: str, chatbot: List, history: List, system_prompt: str):
        """初始化总结器"""
        self.llm_kwargs = llm_kwargs
        self.query = query
        self.chatbot = chatbot
        self.history = history
        self.system_prompt = system_prompt
        self.failed_files = []
        self.file_summaries_map = {}

    def _get_token_limit(self) -> int:
        """获取模型token限制"""
        max_token = model_info[self.llm_kwargs['llm_model']]['max_token']
        return max_token * 3 // 4

    def _create_batch_inputs(self, fragments: List[FileFragment]) -> Tuple[List, List, List]:
        """创建批处理输入"""
        inputs_array = []
        inputs_show_user_array = []
        history_array = []

        for frag in fragments:
            if self.query:
                i_say = (f'请按照用户要求对文件内容进行处理,文件名为{os.path.basename(frag.file_path)},'
                         f'用户要求为:{self.query}:'
                         f'文件内容是 ```{frag.content}```')
                i_say_show_user = (f'正在处理 {frag.rel_path} (片段 {frag.fragment_index + 1}/{frag.total_fragments})')
            else:
                i_say = (f'请对下面的内容用中文做总结,不超过500字,文件名是{os.path.basename(frag.file_path)},'
                         f'内容是 ```{frag.content}```')
                i_say_show_user = f'正在处理 {frag.rel_path} (片段 {frag.fragment_index + 1}/{frag.total_fragments})'

            inputs_array.append(i_say)
            inputs_show_user_array.append(i_say_show_user)
            history_array.append([])

        return inputs_array, inputs_show_user_array, history_array

    def _process_single_file_with_timeout(self, file_info: Tuple[str, str], mutable_status: List) -> List[FileFragment]:
        """包装了超时控制的文件处理函数"""

        def timeout_handler():
            thread = threading.current_thread()
            if hasattr(thread, '_timeout_occurred'):
                thread._timeout_occurred = True

        # 设置超时标记
        thread = threading.current_thread()
        thread._timeout_occurred = False

        # 设置超时时间为30秒,给予更多处理时间
        TIMEOUT_SECONDS = 30
        timer = threading.Timer(TIMEOUT_SECONDS, timeout_handler)
        timer.start()

        try:
            fp, project_folder = file_info
            fragments = []

            # 定期检查是否超时
            def check_timeout():
                if hasattr(thread, '_timeout_occurred') and thread._timeout_occurred:
                    raise TimeoutError(f"处理文件 {os.path.basename(fp)} 超时({TIMEOUT_SECONDS}秒)")

            # 更新状态
            mutable_status[0] = "检查文件大小"
            mutable_status[1] = time.time()
            check_timeout()

            # 文件大小检查
            if os.path.getsize(fp) > self.max_file_size:
                self.failed_files.append((fp, f"文件过大:超过{self.max_file_size / 1024 / 1024}MB"))
                mutable_status[2] = "文件过大"
                return fragments

            # 更新状态
            mutable_status[0] = "提取文件内容"
            mutable_status[1] = time.time()

            # 提取内容 - 使用单独的超时控制
            content = None
            extract_start_time = time.time()
            try:
                while True:
                    check_timeout()  # 检查全局超时

                    # 检查提取过程是否超时(10秒)
                    if time.time() - extract_start_time > 10:
                        raise TimeoutError("文件内容提取超时(10秒)")

                    try:
                        content = extract_text(fp)
                        break
                    except Exception as e:
                        if "timeout" in str(e).lower():
                            continue  # 如果是临时超时,重试
                        raise  # 其他错误直接抛出

            except Exception as e:
                self.failed_files.append((fp, f"文件读取失败:{str(e)}"))
                mutable_status[2] = "读取失败"
                return fragments

            if content is None:
                self.failed_files.append((fp, "文件解析失败:不支持的格式或文件损坏"))
                mutable_status[2] = "格式不支持"
                return fragments
            elif not content.strip():
                self.failed_files.append((fp, "文件内容为空"))
                mutable_status[2] = "内容为空"
                return fragments

            check_timeout()

            # 更新状态
            mutable_status[0] = "分割文本"
            mutable_status[1] = time.time()

            # 分割文本 - 添加超时检查
            split_start_time = time.time()
            try:
                while True:
                    check_timeout()  # 检查全局超时

                    # 检查分割过程是否超时(5秒)
                    if time.time() - split_start_time > 5:
                        raise TimeoutError("文本分割超时(5秒)")

                    paper_fragments = breakdown_text_to_satisfy_token_limit(
                        txt=content,
                        limit=self._get_token_limit(),
                        llm_model=self.llm_kwargs['llm_model']
                    )
                    break

            except Exception as e:
                self.failed_files.append((fp, f"文本分割失败:{str(e)}"))
                mutable_status[2] = "分割失败"
                return fragments

            # 处理片段
            rel_path = os.path.relpath(fp, project_folder)
            for i, frag in enumerate(paper_fragments):
                check_timeout()  # 每处理一个片段检查一次超时
                if frag.strip():
                    fragments.append(FileFragment(
                        file_path=fp,
                        content=frag,
                        rel_path=rel_path,
                        fragment_index=i,
                        total_fragments=len(paper_fragments)
                    ))

            mutable_status[2] = "处理完成"
            return fragments

        except TimeoutError as e:
            self.failed_files.append((fp, str(e)))
            mutable_status[2] = "处理超时"
            return []
        except Exception as e:
            self.failed_files.append((fp, f"处理失败:{str(e)}"))
            mutable_status[2] = "处理异常"
            return []
        finally:
            timer.cancel()

    def prepare_fragments(self, project_folder: str, file_paths: List[str]) -> Generator:
        import concurrent.futures

        from concurrent.futures import ThreadPoolExecutor
        from typing import Generator, List
        """并行准备所有文件的处理片段"""
        all_fragments = []
        total_files = len(file_paths)

        # 配置参数
        self.refresh_interval = 0.2  # UI刷新间隔
        self.watch_dog_patience = 5  # 看门狗超时时间
        self.max_file_size = 10 * 1024 * 1024  # 10MB限制
        self.max_workers = min(32, len(file_paths))  # 最多32个线程

        # 创建有超时控制的线程池
        executor = ThreadPoolExecutor(max_workers=self.max_workers)

        # 用于跨线程状态传递的可变列表 - 增加文件名信息
        mutable_status_array = [["等待中", time.time(), "pending", file_path] for file_path in file_paths]

        # 创建文件处理任务
        file_infos = [(fp, project_folder) for fp in file_paths]

        # 提交所有任务,使用带超时控制的处理函数
        futures = [
            executor.submit(
                self._process_single_file_with_timeout,
                file_info,
                mutable_status_array[i]
            ) for i, file_info in enumerate(file_infos)
        ]

        # 更新UI的计数器
        cnt = 0

        try:
            # 监控任务执行
            while True:
                time.sleep(self.refresh_interval)
                cnt += 1

                # 检查任务完成状态
                worker_done = [f.done() for f in futures]

                # 更新状态显示
                status_str = ""
                for i, (status, timestamp, desc, file_path) in enumerate(mutable_status_array):
                    # 获取文件名(去掉路径)
                    file_name = os.path.basename(file_path)
                    if worker_done[i]:
                        status_str += f"文件 {file_name}: {desc}\n\n"
                    else:
                        status_str += f"文件 {file_name}: {status} {desc}\n\n"

                # 更新UI
                self.chatbot[-1] = [
                    "处理进度",
                    f"正在处理文件...\n\n{status_str}" + "." * (cnt % 10 + 1)
                ]
                yield from update_ui(chatbot=self.chatbot, history=self.history)

                # 检查是否所有任务完成
                if all(worker_done):
                    break

        finally:
            # 确保线程池正确关闭
            executor.shutdown(wait=False)

        # 收集结果
        processed_files = 0
        for future in futures:
            try:
                fragments = future.result(timeout=0.1)  # 给予一个短暂的超时时间来获取结果
                all_fragments.extend(fragments)
                processed_files += 1
            except concurrent.futures.TimeoutError:
                # 处理获取结果超时
                file_index = futures.index(future)
                self.failed_files.append((file_paths[file_index], "结果获取超时"))
                continue
            except Exception as e:
                # 处理其他异常
                file_index = futures.index(future)
                self.failed_files.append((file_paths[file_index], f"未知错误:{str(e)}"))
                continue

        # 最终进度更新
        self.chatbot.append([
            "文件处理完成",
            f"成功处理 {len(all_fragments)} 个片段,失败 {len(self.failed_files)} 个文件"
        ])
        yield from update_ui(chatbot=self.chatbot, history=self.history)

        return all_fragments

    def _process_fragments_batch(self, fragments: List[FileFragment]) -> Generator:
        """批量处理文件片段"""
        from collections import defaultdict
        batch_size = 64  # 每批处理的片段数
        max_retries = 3  # 最大重试次数
        retry_delay = 5  # 重试延迟(秒)
        results = defaultdict(list)

        # 按批次处理
        for i in range(0, len(fragments), batch_size):
            batch = fragments[i:i + batch_size]

            inputs_array, inputs_show_user_array, history_array = self._create_batch_inputs(batch)
            sys_prompt_array = ["请总结以下内容:"] * len(batch)

            # 添加重试机制
            for retry in range(max_retries):
                try:
                    response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                        inputs_array=inputs_array,
                        inputs_show_user_array=inputs_show_user_array,
                        llm_kwargs=self.llm_kwargs,
                        chatbot=self.chatbot,
                        history_array=history_array,
                        sys_prompt_array=sys_prompt_array,
                    )

                    # 处理响应
                    for j, frag in enumerate(batch):
                        summary = response_collection[j * 2 + 1]
                        if summary and summary.strip():
                            results[frag.rel_path].append({
                                'index': frag.fragment_index,
                                'summary': summary,
                                'total': frag.total_fragments
                            })
                    break  # 成功处理,跳出重试循环

                except Exception as e:
                    if retry == max_retries - 1:  # 最后一次重试失败
                        for frag in batch:
                            self.failed_files.append((frag.file_path, f"处理失败:{str(e)}"))
                    else:
                        yield from update_ui(self.chatbot.append([f"批次处理失败,{retry_delay}秒后重试...", str(e)]))
                        time.sleep(retry_delay)

        return results

    def _generate_final_summary_request(self) -> Tuple[List, List, List]:
        """准备最终总结请求"""
        if not self.file_summaries_map:
            return (["无可用的文件总结"], ["生成最终总结"], [[]])

        summaries = list(self.file_summaries_map.values())
        if all(not summary for summary in summaries):
            return (["所有文件处理均失败"], ["生成最终总结"], [[]])

        if self.plugin_kwargs.get("advanced_arg"):
            i_say = "根据以上所有文件的处理结果,按要求进行综合处理:" + self.plugin_kwargs['advanced_arg']
        else:
            i_say = "请根据以上所有文件的处理结果,生成最终的总结,不超过1000字。"

        return ([i_say], [i_say], [summaries])

    def process_files(self, project_folder: str, file_paths: List[str]) -> Generator:
        """处理所有文件"""
        total_files = len(file_paths)
        self.chatbot.append([f"开始处理", f"总计 {total_files} 个文件"])
        yield from update_ui(chatbot=self.chatbot, history=self.history)

        # 1. 准备所有文件片段
        # 在 process_files 函数中:
        fragments = yield from self.prepare_fragments(project_folder, file_paths)
        if not fragments:
            self.chatbot.append(["处理失败", "没有可处理的文件内容"])
            return "没有可处理的文件内容"

        # 2. 批量处理所有文件片段
        self.chatbot.append([f"文件分析", f"共计 {len(fragments)} 个处理单元"])
        yield from update_ui(chatbot=self.chatbot, history=self.history)

        try:
            file_summaries = yield from self._process_fragments_batch(fragments)
        except Exception as e:
            self.chatbot.append(["处理错误", f"批处理过程失败:{str(e)}"])
            return "处理过程发生错误"

        # 3. 为每个文件生成整体总结
        self.chatbot.append(["生成总结", "正在汇总文件内容..."])
        yield from update_ui(chatbot=self.chatbot, history=self.history)

        # 处理每个文件的总结
        for rel_path, summaries in file_summaries.items():
            if len(summaries) > 1:  # 多片段文件需要生成整体总结
                sorted_summaries = sorted(summaries, key=lambda x: x['index'])
                if self.plugin_kwargs.get("advanced_arg"):

                    i_say = f'请按照用户要求对文件内容进行处理,用户要求为:{self.plugin_kwargs["advanced_arg"]}:'
                else:
                    i_say = f"请总结文件 {os.path.basename(rel_path)} 的主要内容,不超过500字。"

                try:
                    summary_texts = [s['summary'] for s in sorted_summaries]
                    response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                        inputs_array=[i_say],
                        inputs_show_user_array=[f"生成 {rel_path} 的处理结果"],
                        llm_kwargs=self.llm_kwargs,
                        chatbot=self.chatbot,
                        history_array=[summary_texts],
                        sys_prompt_array=["你是一个优秀的助手,"],
                    )
                    self.file_summaries_map[rel_path] = response_collection[1]
                except Exception as e:
                    self.chatbot.append(["警告", f"文件 {rel_path} 总结生成失败:{str(e)}"])
                    self.file_summaries_map[rel_path] = "总结生成失败"
            else:  # 单片段文件直接使用其唯一的总结
                self.file_summaries_map[rel_path] = summaries[0]['summary']

        # 4. 生成最终总结
        if total_files == 1:
            return "文件数为1,此时不调用总结模块"
        else:
            try:
                # 收集所有文件的总结用于生成最终总结
                file_summaries_for_final = []
                for rel_path, summary in self.file_summaries_map.items():
                    file_summaries_for_final.append(f"文件 {rel_path} 的总结:\n{summary}")

                if self.plugin_kwargs.get("advanced_arg"):
                    final_summary_prompt = ("根据以下所有文件的总结内容,按要求进行综合处理:" +
                                            self.plugin_kwargs['advanced_arg'])
                else:
                    final_summary_prompt = "请根据以下所有文件的总结内容,生成最终的总结报告。"

                response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                    inputs_array=[final_summary_prompt],
                    inputs_show_user_array=["生成最终总结报告"],
                    llm_kwargs=self.llm_kwargs,
                    chatbot=self.chatbot,
                    history_array=[file_summaries_for_final],
                    sys_prompt_array=["总结所有文件内容。"],
                    max_workers=1
                )

                return response_collection[1] if len(response_collection) > 1 else "生成总结失败"
            except Exception as e:
                self.chatbot.append(["错误", f"最终总结生成失败:{str(e)}"])
                return "生成总结失败"

    def save_results(self, final_summary: str):
        """保存结果到文件"""
        from toolbox import promote_file_to_downloadzone, write_history_to_file
        from crazy_functions.doc_fns.batch_file_query_doc import MarkdownFormatter, HtmlFormatter, WordFormatter
        import os
        timestamp = time.strftime("%Y%m%d_%H%M%S")

        # 创建各种格式化器
        md_formatter = MarkdownFormatter(final_summary, self.file_summaries_map, self.failed_files)
        html_formatter = HtmlFormatter(final_summary, self.file_summaries_map, self.failed_files)
        word_formatter = WordFormatter(final_summary, self.file_summaries_map, self.failed_files)

        result_files = []

        # 保存 Markdown
        try:
            md_content = md_formatter.create_document()
            result_file_md = write_history_to_file(
                history=[md_content],  # 直接传入内容列表
                file_basename=f"文档总结_{timestamp}.md"
            )
            result_files.append(result_file_md)
        except:
            pass

        # 保存 HTML
        try:
            html_content = html_formatter.create_document()
            result_file_html = write_history_to_file(
                history=[html_content],
                file_basename=f"文档总结_{timestamp}.html"
            )
            result_files.append(result_file_html)
        except:
            pass

        # 保存 Word
        try:
            doc = word_formatter.create_document()
            # 由于 Word 文档需要用 doc.save(),我们使用与 md 文件相同的目录
            result_file_docx = os.path.join(
                os.path.dirname(result_file_md),
                f"文档总结_{timestamp}.docx"
            )
            doc.save(result_file_docx)
            result_files.append(result_file_docx)
        except:
            pass

        # 添加到下载区
        for file in result_files:
            promote_file_to_downloadzone(file, chatbot=self.chatbot)

        self.chatbot.append(["处理完成", f"结果已保存至: {', '.join(result_files)}"])


@CatchException
def 批量文件询问(txt: str, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: List,
                 history: List, system_prompt: str, user_request: str):
    """主函数 - 优化版本"""
    # 初始化
    import glob
    import re
    from crazy_functions.rag_fns.rag_file_support import supports_format
    from toolbox import report_exception
    query = plugin_kwargs.get("advanced_arg")
    summarizer = BatchDocumentSummarizer(llm_kwargs, query, chatbot, history, system_prompt)
    chatbot.append(["函数插件功能", f"作者:lbykkkk,批量总结文件。支持格式: {', '.join(supports_format)}等其他文本格式文件,如果长时间卡在文件处理过程,请查看处理进度,然后删除所有处于“pending”状态的文件,然后重新上传处理。"])
    yield from update_ui(chatbot=chatbot, history=history)

    # 验证输入路径
    if not os.path.exists(txt):
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)
        return

    # 获取文件列表
    project_folder = txt
    user_name = chatbot.get_user()
    validate_path_safety(project_folder, user_name)
    extract_folder = next((d for d in glob.glob(f'{project_folder}/*')
                           if os.path.isdir(d) and d.endswith('.extract')), project_folder)
    exclude_patterns = r'/[^/]+\.(zip|rar|7z|tar|gz)$'
    file_manifest = [f for f in glob.glob(f'{extract_folder}/**', recursive=True)
                     if os.path.isfile(f) and not re.search(exclude_patterns, f)]

    if not file_manifest:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b="未找到支持的文件类型")
        yield from update_ui(chatbot=chatbot, history=history)
        return

    # 处理所有文件并生成总结
    final_summary = yield from summarizer.process_files(project_folder, file_manifest)
    yield from update_ui(chatbot=chatbot, history=history)

    # 保存结果
    summarizer.save_results(final_summary)
    yield from update_ui(chatbot=chatbot, history=history)


================================================
FILE: crazy_functions/Document_Conversation_Wrap.py
================================================
import random
from toolbox import get_conf
from crazy_functions.Document_Conversation import 批量文件询问
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty


class Document_Conversation_Wrap(GptAcademicPluginTemplate):
    def __init__(self):
        """
        请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
        """
        pass

    def define_arg_selection_menu(self):
        """
        定义插件的二级选项菜单

        第一个参数,名称`main_input`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
        第二个参数,名称`advanced_arg`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
        第三个参数,名称`allow_cache`,参数`type`声明这是一个下拉菜单,下拉菜单上方显示`title`+`description`,下拉菜单的选项为`options`,`default_value`为下拉菜单默认值;

        """
        gui_definition = {
            "main_input":
                ArgProperty(title="已上传的文件", description="上传文件后自动填充", default_value="", type="string").model_dump_json(),
            "searxng_url":
                ArgProperty(title="对材料提问", description="提问", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
        }
        return gui_definition

    def execute(txt, llm_kwargs, plugin_kwargs:dict, chatbot, history, system_prompt, user_request):
        """
        执行插件
        """
        yield from 批量文件询问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)


================================================
FILE: crazy_functions/Document_Optimize.py
================================================
import os
import time
import glob
import re
import threading
from typing import Dict, List, Generator, Tuple
from dataclasses import dataclass

from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
from crazy_functions.rag_fns.rag_file_support import extract_text, supports_format, convert_to_markdown
from request_llms.bridge_all import model_info
from toolbox import update_ui, CatchException, report_exception, promote_file_to_downloadzone, write_history_to_file
from shared_utils.fastapi_server import validate_path_safety

# 新增:导入结构化论文提取器
from crazy_functions.doc_fns.read_fns.unstructured_all.paper_structure_extractor import PaperStructureExtractor, ExtractorConfig, StructuredPaper

# 导入格式化器
from crazy_functions.paper_fns.file2file_doc import (
    TxtFormatter,
    MarkdownFormatter,
    HtmlFormatter,
    WordFormatter
)

@dataclass
class TextFragment:
    """文本片段数据类,用于组织处理单元"""
    content: str
    fragment_index: int
    total_fragments: int


class DocumentProcessor:
    """文档处理器 - 处理单个文档并输出结果"""

    def __init__(self, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: List, history: List, system_prompt: str):
        """初始化处理器"""
        self.llm_kwargs = llm_kwargs
        self.plugin_kwargs = plugin_kwargs
        self.chatbot = chatbot
        self.history = history
        self.system_prompt = system_prompt
        self.processed_results = []
        self.failed_fragments = []
        # 新增:初始化论文结构提取器
        self.paper_extractor = PaperStructureExtractor()

    def _get_token_limit(self) -> int:
        """获取模型token限制,返回更小的值以确保更细粒度的分割"""
        max_token = model_info[self.llm_kwargs['llm_model']]['max_token']
        # 降低token限制,使每个片段更小
        return max_token // 4  # 从3/4降低到1/4

    def _create_batch_inputs(self, fragments: List[TextFragment]) -> Tuple[List, List, List]:
        """创建批处理输入"""
        inputs_array = []
        inputs_show_user_array = []
        history_array = []

        user_instruction = self.plugin_kwargs.get("advanced_arg", "请润色以下学术文本,提高其语言表达的准确性、专业性和流畅度,保持学术风格,确保逻辑连贯,但不改变原文的科学内容和核心观点")

        for frag in fragments:
            i_say = (f'请按照以下要求处理文本内容:{user_instruction}\n\n'
                     f'请将对文本的处理结果放在<decision>和</decision>标签之间。\n\n'
                     f'文本内容:\n```\n{frag.content}\n```')

            i_say_show_user = f'正在处理文本片段 {frag.fragment_index + 1}/{frag.total_fragments}'

            inputs_array.append(i_say)
            inputs_show_user_array.append(i_say_show_user)
            history_array.append([])

        return inputs_array, inputs_show_user_array, history_array

    def _extract_decision(self, text: str) -> str:
        """从LLM响应中提取<decision>标签内的内容"""
        import re
        pattern = r'<decision>(.*?)</decision>'
        matches = re.findall(pattern, text, re.DOTALL)

        if matches:
            return matches[0].strip()
        else:
            # 如果没有找到标签,返回原始文本
            return text.strip()

    def process_file(self, file_path: str) -> Generator:
        """处理单个文件"""
        self.chatbot.append(["开始处理文件", f"文件路径: {file_path}"])
        yield from update_ui(chatbot=self.chatbot, history=self.history)

        try:
            # 首先尝试转换为Markdown
            from crazy_functions.rag_fns.rag_file_support import convert_to_markdown
            file_path = convert_to_markdown(file_path)

            # 1. 检查文件是否为支持的论文格式
            is_paper_format = any(file_path.lower().endswith(ext) for ext in self.paper_extractor.SUPPORTED_EXTENSIONS)

            if is_paper_format:
                # 使用结构化提取器处理论文
                return (yield from self._process_structured_paper(file_path))
            else:
                # 使用原有方式处理普通文档
                return (yield from self._process_regular_file(file_path))

        except Exception as e:
            self.chatbot.append(["处理错误", f"文件处理失败: {str(e)}"])
            yield from update_ui(chatbot=self.chatbot, history=self.history)
            return None

    def _process_structured_paper(self, file_path: str) -> Generator:
        """处理结构化论文文件"""
        # 1. 提取论文结构
        self.chatbot[-1] = ["正在分析论文结构", f"文件路径: {file_path}"]
        yield from update_ui(chatbot=self.chatbot, history=self.history)

        try:
            paper = self.paper_extractor.extract_paper_structure(file_path)

            if not paper or not paper.sections:
                self.chatbot.append(["无法提取论文结构", "将使用全文内容进行处理"])
                yield from update_ui(chatbot=self.chatbot, history=self.history)

                # 使用全文内容进行段落切分
                if paper and paper.full_text:
                    # 使用增强的分割函数进行更细致的分割
                    fragments = self._breakdown_section_content(paper.full_text)

                    # 创建文本片段对象
                    text_fragments = []
                    for i, frag in enumerate(fragments):
                        if frag.strip():
                            text_fragments.append(TextFragment(
                                content=frag,
                                fragment_index=i,
                                total_fragments=len(fragments)
                            ))

                    # 批量处理片段
                    if text_fragments:
                        self.chatbot[-1] = ["开始处理文本", f"共 {len(text_fragments)} 个片段"]
                        yield from update_ui(chatbot=self.chatbot, history=self.history)

                        # 一次性准备所有输入
                        inputs_array, inputs_show_user_array, history_array = self._create_batch_inputs(text_fragments)

                        # 使用系统提示
                        instruction = self.plugin_kwargs.get("advanced_arg", "请润色以下学术文本,提高其语言表达的准确性、专业性和流畅度,保持学术风格,确保逻辑连贯,但不改变原文的科学内容和核心观点")
                        sys_prompt_array = [f"你是一个专业的学术文献编辑助手。请按照用户的要求:'{instruction}'处理文本。保持学术风格,增强表达的准确性和专业性。"] * len(text_fragments)

                        # 调用LLM一次性处理所有片段
                        response_collecti
Download .txt
gitextract_m7znd4_k/

├── .dockerignore
├── .gitattributes
├── .gitignore
├── .pre-commit-config.yaml
├── Dockerfile
├── LICENSE
├── README.md
├── check_proxy.py
├── config.py
├── core_functional.py
├── crazy_functional.py
├── crazy_functions/
│   ├── Academic_Conversation.py
│   ├── Arxiv_Downloader.py
│   ├── Audio_Assistant.py
│   ├── Audio_Summary.py
│   ├── Commandline_Assistant.py
│   ├── Conversation_To_File.py
│   ├── Document_Conversation.py
│   ├── Document_Conversation_Wrap.py
│   ├── Document_Optimize.py
│   ├── Dynamic_Function_Generate.py
│   ├── Google_Scholar_Assistant_Legacy.py
│   ├── Helpers.py
│   ├── Image_Generate.py
│   ├── Image_Generate_Wrap.py
│   ├── Interactive_Func_Template.py
│   ├── Interactive_Mini_Game.py
│   ├── Internet_GPT.py
│   ├── Internet_GPT_Bing_Legacy.py
│   ├── Internet_GPT_Legacy.py
│   ├── Internet_GPT_Wrap.py
│   ├── Latex_Function.py
│   ├── Latex_Function_Wrap.py
│   ├── Latex_Project_Polish.py
│   ├── Latex_Project_Translate_Legacy.py
│   ├── Markdown_Translate.py
│   ├── Math_Animation_Gen.py
│   ├── Mermaid_Figure_Gen.py
│   ├── Multi_Agent_Legacy.py
│   ├── Multi_LLM_Query.py
│   ├── PDF_QA.py
│   ├── PDF_Summary.py
│   ├── PDF_Translate.py
│   ├── PDF_Translate_Nougat.py
│   ├── PDF_Translate_Wrap.py
│   ├── Paper_Abstract_Writer.py
│   ├── Paper_Reading.py
│   ├── Program_Comment_Gen.py
│   ├── Rag_Interface.py
│   ├── Social_Helper.py
│   ├── SourceCode_Analyse.py
│   ├── SourceCode_Analyse_JupyterNotebook.py
│   ├── SourceCode_Comment.py
│   ├── SourceCode_Comment_Wrap.py
│   ├── Vectorstore_QA.py
│   ├── VideoResource_GPT.py
│   ├── Void_Terminal.py
│   ├── Word_Summary.py
│   ├── __init__.py
│   ├── agent_fns/
│   │   ├── auto_agent.py
│   │   ├── echo_agent.py
│   │   ├── general.py
│   │   ├── persistent.py
│   │   ├── pipe.py
│   │   ├── python_comment_agent.py
│   │   ├── python_comment_compare.html
│   │   └── watchdog.py
│   ├── ast_fns/
│   │   └── comment_remove.py
│   ├── crazy_utils.py
│   ├── diagram_fns/
│   │   └── file_tree.py
│   ├── doc_fns/
│   │   ├── AI_review_doc.py
│   │   ├── __init__.py
│   │   ├── batch_file_query_doc.py
│   │   ├── content_folder.py
│   │   ├── conversation_doc/
│   │   │   ├── excel_doc.py
│   │   │   ├── html_doc.py
│   │   │   ├── markdown_doc.py
│   │   │   ├── pdf_doc.py
│   │   │   ├── txt_doc.py
│   │   │   ├── word2pdf.py
│   │   │   └── word_doc.py
│   │   ├── read_fns/
│   │   │   ├── __init__.py
│   │   │   ├── docx_reader.py
│   │   │   ├── excel_reader.py
│   │   │   ├── markitdown/
│   │   │   │   └── markdown_reader.py
│   │   │   ├── unstructured_all/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── paper_metadata_extractor.py
│   │   │   │   ├── paper_structure_extractor.py
│   │   │   │   └── unstructured_md.py
│   │   │   └── web_reader.py
│   │   └── text_content_loader.py
│   ├── game_fns/
│   │   ├── game_ascii_art.py
│   │   ├── game_interactive_story.py
│   │   └── game_utils.py
│   ├── gen_fns/
│   │   └── gen_fns_shared.py
│   ├── ipc_fns/
│   │   └── mp.py
│   ├── json_fns/
│   │   ├── pydantic_io.py
│   │   └── select_tool.py
│   ├── latex_fns/
│   │   ├── latex_actions.py
│   │   ├── latex_pickle_io.py
│   │   └── latex_toolbox.py
│   ├── live_audio/
│   │   ├── aliyunASR.py
│   │   └── audio_io.py
│   ├── media_fns/
│   │   └── get_media.py
│   ├── multi_stage/
│   │   └── multi_stage_utils.py
│   ├── paper_fns/
│   │   ├── __init__.py
│   │   ├── auto_git/
│   │   │   ├── handlers/
│   │   │   │   ├── base_handler.py
│   │   │   │   ├── code_handler.py
│   │   │   │   ├── repo_handler.py
│   │   │   │   ├── topic_handler.py
│   │   │   │   └── user_handler.py
│   │   │   ├── query_analyzer.py
│   │   │   └── sources/
│   │   │       └── github_source.py
│   │   ├── document_structure_extractor.py
│   │   ├── file2file_doc/
│   │   │   ├── __init__.py
│   │   │   ├── html_doc.py
│   │   │   ├── markdown_doc.py
│   │   │   ├── txt_doc.py
│   │   │   ├── word2pdf.py
│   │   │   └── word_doc.py
│   │   ├── github_search.py
│   │   ├── journal_paper_recom.py
│   │   ├── paper_download.py
│   │   ├── reduce_aigc.py
│   │   └── wiki/
│   │       └── wikipedia_api.py
│   ├── pdf_fns/
│   │   ├── breakdown_pdf_txt.py
│   │   ├── breakdown_txt.py
│   │   ├── parse_pdf.py
│   │   ├── parse_pdf_grobid.py
│   │   ├── parse_pdf_legacy.py
│   │   ├── parse_pdf_via_doc2x.py
│   │   ├── parse_word.py
│   │   ├── report_gen_html.py
│   │   ├── report_template.html
│   │   └── report_template_v2.html
│   ├── plugin_template/
│   │   └── plugin_class_template.py
│   ├── prompts/
│   │   └── internet.py
│   ├── rag_fns/
│   │   ├── llama_index_worker.py
│   │   ├── milvus_worker.py
│   │   ├── rag_file_support.py
│   │   └── vector_store_index.py
│   ├── review_fns/
│   │   ├── __init__.py
│   │   ├── conversation_doc/
│   │   │   ├── endnote_doc.py
│   │   │   ├── excel_doc.py
│   │   │   ├── html_doc.py
│   │   │   ├── markdown_doc.py
│   │   │   ├── reference_formatter.py
│   │   │   ├── word2pdf.py
│   │   │   └── word_doc.py
│   │   ├── data_sources/
│   │   │   ├── __init__.py
│   │   │   ├── adsabs_source.py
│   │   │   ├── arxiv_source.py
│   │   │   ├── base_source.py
│   │   │   ├── cas_if.json
│   │   │   ├── crossref_source.py
│   │   │   ├── elsevier_source.py
│   │   │   ├── github_source.py
│   │   │   ├── journal_metrics.py
│   │   │   ├── openalex_source.py
│   │   │   ├── pubmed_source.py
│   │   │   ├── scihub_source.py
│   │   │   ├── scopus_source.py
│   │   │   ├── semantic_source.py
│   │   │   └── unpaywall_source.py
│   │   ├── handlers/
│   │   │   ├── base_handler.py
│   │   │   ├── latest_handler.py
│   │   │   ├── paper_handler.py
│   │   │   ├── qa_handler.py
│   │   │   ├── recommend_handler.py
│   │   │   └── review_handler.py
│   │   ├── paper_processor/
│   │   │   └── paper_llm_ranker.py
│   │   ├── prompts/
│   │   │   ├── adsabs_prompts.py
│   │   │   ├── arxiv_prompts.py
│   │   │   ├── crossref_prompts.py
│   │   │   ├── paper_prompts.py
│   │   │   ├── pubmed_prompts.py
│   │   │   └── semantic_prompts.py
│   │   ├── query_analyzer.py
│   │   └── query_processor.py
│   ├── vector_fns/
│   │   ├── __init__.py
│   │   ├── general_file_loader.py
│   │   └── vector_database.py
│   ├── vt_fns/
│   │   ├── vt_call_plugin.py
│   │   ├── vt_modify_config.py
│   │   └── vt_state.py
│   ├── word_dfa/
│   │   └── dfa_algo.py
│   └── 高级功能函数模板.py
├── docker-compose.yml
├── docs/
│   ├── DOCUMENTATION_PLAN.md
│   ├── GithubAction+AllCapacity
│   ├── GithubAction+ChatGLM+Moss
│   ├── GithubAction+JittorLLMs
│   ├── GithubAction+NoLocal
│   ├── GithubAction+NoLocal+AudioAssistant
│   ├── GithubAction+NoLocal+Latex
│   ├── GithubAction+NoLocal+Vectordb
│   ├── README.Arabic.md
│   ├── README.English.md
│   ├── README.French.md
│   ├── README.German.md
│   ├── README.Italian.md
│   ├── README.Japanese.md
│   ├── README.Korean.md
│   ├── README.Portuguese.md
│   ├── README.Russian.md
│   ├── WindowsRun.bat
│   ├── WithFastapi.md
│   ├── customization/
│   │   ├── custom_buttons.md
│   │   ├── plugin_development.md
│   │   └── theme_customization.md
│   ├── deployment/
│   │   ├── cloud_deploy.md
│   │   ├── docker.md
│   │   └── reverse_proxy.md
│   ├── features/
│   │   ├── academic/
│   │   │   ├── arxiv_download.md
│   │   │   ├── arxiv_translation.md
│   │   │   ├── batch_file_query.md
│   │   │   ├── google_scholar.md
│   │   │   ├── latex_polish.md
│   │   │   ├── latex_proofread.md
│   │   │   ├── paper_reading.md
│   │   │   ├── pdf_nougat.md
│   │   │   ├── pdf_qa.md
│   │   │   ├── pdf_summary.md
│   │   │   ├── pdf_translation.md
│   │   │   ├── tex_abstract.md
│   │   │   └── word_summary.md
│   │   ├── agents/
│   │   │   ├── code_interpreter.md
│   │   │   └── void_terminal.md
│   │   ├── basic_functions.md
│   │   ├── basic_operations.md
│   │   ├── conversation/
│   │   │   ├── conversation_save.md
│   │   │   ├── image_generation.md
│   │   │   ├── internet_search.md
│   │   │   ├── mermaid_gen.md
│   │   │   ├── multi_model_query.md
│   │   │   └── voice_assistant.md
│   │   └── programming/
│   │       ├── batch_comment_gen.md
│   │       ├── code_analysis.md
│   │       ├── code_comment.md
│   │       ├── custom_code_analysis.md
│   │       ├── jupyter_analysis.md
│   │       └── markdown_translate.md
│   ├── get_started/
│   │   ├── configuration.md
│   │   ├── installation.md
│   │   └── quickstart.md
│   ├── index.md
│   ├── javascripts/
│   │   ├── animations.js
│   │   ├── code-copy.js
│   │   ├── code-zoom.js
│   │   ├── nav-scroll-fix.js
│   │   ├── responsive.js
│   │   ├── search-fix.js
│   │   └── tabbed-code.js
│   ├── models/
│   │   ├── azure.md
│   │   ├── custom_models.md
│   │   ├── local_models.md
│   │   ├── openai.md
│   │   ├── overview.md
│   │   └── transit_api.md
│   ├── plugin_with_secondary_menu.md
│   ├── reference/
│   │   ├── changelog.md
│   │   └── config_reference.md
│   ├── requirements.txt
│   ├── self_analysis.md
│   ├── stylesheets/
│   │   ├── animations.css
│   │   ├── code-enhancements.css
│   │   ├── feature-cards.css
│   │   ├── flowchart.css
│   │   ├── jupyter-simple.css
│   │   ├── mermaid.css
│   │   ├── mkdocstrings.css
│   │   ├── nav-scroll-fix.css
│   │   ├── readability-enhancements.css
│   │   ├── responsive.css
│   │   ├── syntax-highlight.css
│   │   ├── tabbed-code.css
│   │   ├── table-enhancements.css
│   │   └── workflow.css
│   ├── translate_english.json
│   ├── translate_japanese.json
│   ├── translate_std.json
│   ├── translate_traditionalchinese.json
│   ├── troubleshooting/
│   │   ├── faq.md
│   │   ├── model_errors.md
│   │   └── network_issues.md
│   ├── use_audio.md
│   ├── use_azure.md
│   ├── use_tts.md
│   └── use_vllm.md
├── main.py
├── mkdocs.yml
├── multi_language.py
├── request_llms/
│   ├── README.md
│   ├── bridge_all.py
│   ├── bridge_chatglm.py
│   ├── bridge_chatglm3.py
│   ├── bridge_chatglm4.py
│   ├── bridge_chatglmft.py
│   ├── bridge_chatglmonnx.py
│   ├── bridge_chatgpt.py
│   ├── bridge_chatgpt_vision.py
│   ├── bridge_claude.py
│   ├── bridge_cohere.py
│   ├── bridge_deepseekcoder.py
│   ├── bridge_google_gemini.py
│   ├── bridge_internlm.py
│   ├── bridge_jittorllms_llama.py
│   ├── bridge_jittorllms_pangualpha.py
│   ├── bridge_jittorllms_rwkv.py
│   ├── bridge_llama2.py
│   ├── bridge_moonshot.py
│   ├── bridge_moss.py
│   ├── bridge_newbingfree.py
│   ├── bridge_ollama.py
│   ├── bridge_openrouter.py
│   ├── bridge_qianfan.py
│   ├── bridge_qwen.py
│   ├── bridge_qwen_local.py
│   ├── bridge_skylark2.py
│   ├── bridge_spark.py
│   ├── bridge_stackclaude.py
│   ├── bridge_taichu.py
│   ├── bridge_tgui.py
│   ├── bridge_zhipu.py
│   ├── chatglmoonx.py
│   ├── com_google.py
│   ├── com_qwenapi.py
│   ├── com_skylark2api.py
│   ├── com_sparkapi.py
│   ├── com_taichu.py
│   ├── com_zhipuglm.py
│   ├── edge_gpt_free.py
│   ├── embed_models/
│   │   ├── bge_llm.py
│   │   ├── bridge_all_embed.py
│   │   └── openai_embed.py
│   ├── key_manager.py
│   ├── local_llm_class.py
│   ├── oai_std_model_template.py
│   ├── queued_pipe.py
│   ├── requirements_chatglm.txt
│   ├── requirements_chatglm4.txt
│   ├── requirements_chatglm_onnx.txt
│   ├── requirements_jittorllms.txt
│   ├── requirements_moss.txt
│   ├── requirements_newbing.txt
│   ├── requirements_qwen.txt
│   ├── requirements_qwen_local.txt
│   └── requirements_slackclaude.txt
├── requirements.txt
├── shared_utils/
│   ├── advanced_markdown_format.py
│   ├── char_visual_effect.py
│   ├── colorful.py
│   ├── config_loader.py
│   ├── connect_void_terminal.py
│   ├── context_clip_policy.py
│   ├── cookie_manager.py
│   ├── doc_loader_dynamic.py
│   ├── docker_as_service_api.py
│   ├── fastapi_server.py
│   ├── fastapi_stream_server.py
│   ├── handle_upload.py
│   ├── key_pattern_manager.py
│   ├── logging.py
│   ├── map_names.py
│   ├── nltk_downloader.py
│   └── text_mask.py
├── tests/
│   ├── __init__.py
│   ├── init_test.py
│   ├── test_academic_conversation.py
│   ├── test_anim_gen.py
│   ├── test_bilibili_down.py
│   ├── test_doc2x.py
│   ├── test_embed.py
│   ├── test_key_pattern_manager.py
│   ├── test_latex_auto_correct.py
│   ├── test_llms.py
│   ├── test_markdown.py
│   ├── test_markdown_format.py
│   ├── test_media.py
│   ├── test_plugins.py
│   ├── test_python_auto_docstring.py
│   ├── test_rag.py
│   ├── test_safe_pickle.py
│   ├── test_save_chat_to_html.py
│   ├── test_searxng.py
│   ├── test_social_helper.py
│   ├── test_tts.py
│   ├── test_utils.py
│   └── test_vector_plugins.py
├── themes/
│   ├── base64.mjs
│   ├── common.css
│   ├── common.js
│   ├── common.py
│   ├── contrast.css
│   ├── contrast.py
│   ├── cookies.py
│   ├── default.css
│   ├── default.py
│   ├── gradios.py
│   ├── green.css
│   ├── green.js
│   ├── green.py
│   ├── gui_advanced_plugin_class.py
│   ├── gui_floating_menu.py
│   ├── gui_toolbar.py
│   ├── init.js
│   ├── theme.js
│   ├── theme.py
│   ├── tts.js
│   ├── waifu_plugin/
│   │   ├── autoload.js
│   │   ├── live2d.js
│   │   ├── source
│   │   ├── waifu-tips.js
│   │   ├── waifu-tips.json
│   │   └── waifu.css
│   └── welcome.js
├── toolbox.py
└── version
Download .txt
SYMBOL INDEX (2251 symbols across 259 files)

FILE: check_proxy.py
  function check_proxy (line 3) | def check_proxy(proxies, return_ip=False):
  function _check_with_backup_source (line 47) | def _check_with_backup_source(proxies):
  function backup_and_download (line 65) | def backup_and_download(current_version, remote_version):
  function patch_and_restart (line 104) | def patch_and_restart(path):
  function get_current_version (line 152) | def get_current_version():
  function auto_update (line 168) | def auto_update(raise_error=False):
  function warm_up_modules (line 221) | def warm_up_modules():
  function try_warm_up_vectordb (line 250) | def try_warm_up_vectordb():
  function warm_up_vectordb (line 276) | def warm_up_vectordb():

FILE: core_functional.py
  function get_core_functions (line 10) | def get_core_functions():
  function handle_core_functionality (line 150) | def handle_core_functionality(additional_fn, inputs, history, chatbot):

FILE: crazy_functional.py
  function get_crazy_functions (line 5) | def get_crazy_functions():
  function get_multiplex_button_functions (line 767) | def get_multiplex_button_functions():

FILE: crazy_functions/Academic_Conversation.py
  function 学术对话 (line 22) | def 学术对话(txt: str, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: List,

FILE: crazy_functions/Arxiv_Downloader.py
  function download_arxiv_ (line 8) | def download_arxiv_(url_pdf):
  function get_name (line 64) | def get_name(_url_):
  function 下载arxiv论文并翻译摘要 (line 122) | def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys...

FILE: crazy_functions/Audio_Assistant.py
  function chatbot2history (line 15) | def chatbot2history(chatbot):
  function visualize_audio (line 27) | def visualize_audio(chatbot, audio_shape):
  class AsyncGptTask (line 35) | class AsyncGptTask():
    method __init__ (line 36) | def __init__(self) -> None:
    method gpt_thread_worker (line 40) | def gpt_thread_worker(self, i_say, llm_kwargs, history, sys_prompt, ob...
    method add_async_gpt_task (line 51) | def add_async_gpt_task(self, i_say, chatbot_index, llm_kwargs, history...
    method update_chatbot (line 59) | def update_chatbot(self, chatbot):
  class InterviewAssistant (line 69) | class InterviewAssistant(AliyunASR):
    method __init__ (line 70) | def __init__(self):
    method __del__ (line 81) | def __del__(self):
    method init (line 87) | def init(self, chatbot):
    method no_audio_for_a_while (line 102) | def no_audio_for_a_while(self):
    method begin (line 108) | def begin(self, llm_kwargs, plugin_kwargs, chatbot, history, system_pr...
  function Audio_Assistant (line 171) | def Audio_Assistant(txt, llm_kwargs, plugin_kwargs, chatbot, history, sy...

FILE: crazy_functions/Audio_Summary.py
  function split_audio_file (line 5) | def split_audio_file(filename, split_duration=1000):
  function AnalyAudio (line 41) | def AnalyAudio(parse_prompt, file_manifest, llm_kwargs, chatbot, history):
  function Audio_Summary (line 135) | def Audio_Summary(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst...

FILE: crazy_functions/Commandline_Assistant.py
  function Commandline_Assistant (line 7) | def Commandline_Assistant(txt, llm_kwargs, plugin_kwargs, chatbot, histo...

FILE: crazy_functions/Conversation_To_File.py
  function write_chat_to_file_legacy (line 8) | def write_chat_to_file_legacy(chatbot, history=None, file_name=None):
  function write_chat_to_file (line 75) | def write_chat_to_file(chatbot, history=None, file_name=None):
  function gen_file_preview (line 216) | def gen_file_preview(file_name):
  function read_file_to_chat (line 231) | def read_file_to_chat(chatbot, history, file_name):
  function 对话历史存档 (line 256) | def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom...
  class Conversation_To_File_Wrap (line 283) | class Conversation_To_File_Wrap(GptAcademicPluginTemplate):
    method __init__ (line 284) | def __init__(self):
    method define_arg_selection_menu (line 290) | def define_arg_selection_menu(self):
    method execute (line 301) | def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...
  function hide_cwd (line 311) | def hide_cwd(str):
  function 载入对话历史存档 (line 318) | def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pr...
  function 删除所有本地对话历史记录 (line 353) | def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, syste...

FILE: crazy_functions/Document_Conversation.py
  class FileFragment (line 15) | class FileFragment:
  class BatchDocumentSummarizer (line 24) | class BatchDocumentSummarizer:
    method __init__ (line 27) | def __init__(self, llm_kwargs: Dict, query: str, chatbot: List, histor...
    method _get_token_limit (line 37) | def _get_token_limit(self) -> int:
    method _create_batch_inputs (line 42) | def _create_batch_inputs(self, fragments: List[FileFragment]) -> Tuple...
    method _process_single_file_with_timeout (line 65) | def _process_single_file_with_timeout(self, file_info: Tuple[str, str]...
    method prepare_fragments (line 194) | def prepare_fragments(self, project_folder: str, file_paths: List[str]...
    method _process_fragments_batch (line 291) | def _process_fragments_batch(self, fragments: List[FileFragment]) -> G...
    method _generate_final_summary_request (line 339) | def _generate_final_summary_request(self) -> Tuple[List, List, List]:
    method process_files (line 355) | def process_files(self, project_folder: str, file_paths: List[str]) ->...
    method save_results (line 440) | def save_results(self, final_summary: str):
  function 批量文件询问 (line 497) | def 批量文件询问(txt: str, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: List,

FILE: crazy_functions/Document_Conversation_Wrap.py
  class Document_Conversation_Wrap (line 7) | class Document_Conversation_Wrap(GptAcademicPluginTemplate):
    method __init__ (line 8) | def __init__(self):
    method define_arg_selection_menu (line 14) | def define_arg_selection_menu(self):
    method execute (line 31) | def execute(txt, llm_kwargs, plugin_kwargs:dict, chatbot, history, sys...

FILE: crazy_functions/Document_Optimize.py
  class TextFragment (line 28) | class TextFragment:
  class DocumentProcessor (line 35) | class DocumentProcessor:
    method __init__ (line 38) | def __init__(self, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: Lis...
    method _get_token_limit (line 50) | def _get_token_limit(self) -> int:
    method _create_batch_inputs (line 56) | def _create_batch_inputs(self, fragments: List[TextFragment]) -> Tuple...
    method _extract_decision (line 77) | def _extract_decision(self, text: str) -> str:
    method process_file (line 89) | def process_file(self, file_path: str) -> Generator:
    method _process_structured_paper (line 114) | def _process_structured_paper(self, file_path: str) -> Generator:
    method _process_regular_file (line 361) | def _process_regular_file(self, file_path: str) -> Generator:
    method save_results (line 450) | def save_results(self, content: str, original_file_path: str) -> List[...
    method _breakdown_section_content (line 528) | def _breakdown_section_content(self, content: str) -> List[str]:
  function 自定义智能文档处理 (line 611) | def 自定义智能文档处理(txt: str, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: ...

FILE: crazy_functions/Dynamic_Function_Generate.py
  function inspect_dependency (line 43) | def inspect_dependency(chatbot, history):
  function get_code_block (line 47) | def get_code_block(reply):
  function gpt_interact_multi_step (line 58) | def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
  function for_immediate_show_off_when_possible (line 117) | def for_immediate_show_off_when_possible(file_type, fp, chatbot):
  function have_any_recent_upload_files (line 128) | def have_any_recent_upload_files(chatbot):
  function get_recent_file_prompt_support (line 136) | def get_recent_file_prompt_support(chatbot):
  function Dynamic_Function_Generate (line 142) | def Dynamic_Function_Generate(txt, llm_kwargs, plugin_kwargs, chatbot, h...

FILE: crazy_functions/Google_Scholar_Assistant_Legacy.py
  function get_meta_information (line 11) | def get_meta_information(url, chatbot, history):
  function Google_Scholar_Assistant_Legacy (line 135) | def Google_Scholar_Assistant_Legacy(txt, llm_kwargs, plugin_kwargs, chat...

FILE: crazy_functions/Helpers.py
  function 猜你想问 (line 14) | def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt...
  function 清除缓存 (line 35) | def 清除缓存(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt...

FILE: crazy_functions/Image_Generate.py
  function gen_image (line 11) | def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-...
  function edit_image (line 54) | def edit_image(llm_kwargs, prompt, image_path, resolution="1024x1024", m...
  function 图片生成_DALLE2 (line 100) | def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys...
  function 图片生成_DALLE3 (line 130) | def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys...
  function gen_image_banana (line 161) | def gen_image_banana(chatbot, history, text_prompt, image_base64_list=No...
  function 图片生成_NanoBanana (line 290) | def 图片生成_NanoBanana(prompt, llm_kwargs, plugin_kwargs, chatbot, history,...
  class ImageEditState (line 353) | class ImageEditState(GptAcademicState):
    method get_image_file (line 355) | def get_image_file(self, x):
    method lock_plugin (line 365) | def lock_plugin(self, chatbot):
    method unlock_plugin (line 369) | def unlock_plugin(self, chatbot):
    method get_resolution (line 374) | def get_resolution(self, x):
    method get_prompt (line 377) | def get_prompt(self, x):
    method reset (line 381) | def reset(self):
    method feed (line 389) | def feed(self, prompt, chatbot):
    method next_req (line 399) | def next_req(self):
    method already_obtained_all_materials (line 405) | def already_obtained_all_materials(self):
  function 图片修改_DALLE2 (line 409) | def 图片修改_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys...
  function make_transparent (line 435) | def make_transparent(input_image_path, output_image_path):
  function resize_image (line 449) | def resize_image(input_path, output_path, max_size=1024):
  function make_square_image (line 466) | def make_square_image(input_path, output_path):

FILE: crazy_functions/Image_Generate_Wrap.py
  class ImageGen_Wrap (line 7) | class ImageGen_Wrap(GptAcademicPluginTemplate):
    method __init__ (line 8) | def __init__(self):
    method define_arg_selection_menu (line 14) | def define_arg_selection_menu(self):
    method execute (line 45) | def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...

FILE: crazy_functions/Interactive_Func_Template.py
  function 交互功能模板函数 (line 5) | def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pr...
  function get_image_page_by_keyword (line 51) | def get_image_page_by_keyword(keyword):

FILE: crazy_functions/Interactive_Mini_Game.py
  function 随机小游戏 (line 8) | def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_pr...
  function 随机小游戏1 (line 26) | def 随机小游戏1(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...

FILE: crazy_functions/Internet_GPT.py
  function search_optimizer (line 16) | def search_optimizer(
  function get_auth_ip (line 109) | def get_auth_ip():
  function searxng_request (line 116) | def searxng_request(query, proxies, categories='general', searxng_url=No...
  function scrape_text (line 169) | def scrape_text(url, proxies) -> str:
  function jina_scrape_text (line 204) | def jina_scrape_text(url) -> str:
  function internet_search_with_analysis_prompt (line 221) | def internet_search_with_analysis_prompt(prompt, analysis_prompt, llm_kw...
  function 连接网络回答问题 (line 255) | def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pr...

FILE: crazy_functions/Internet_GPT_Bing_Legacy.py
  function bing_search (line 8) | def bing_search(query, proxies=None):
  function scrape_text (line 30) | def scrape_text(url, proxies) -> str:
  function 连接bing搜索回答问题 (line 58) | def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, syste...

FILE: crazy_functions/Internet_GPT_Legacy.py
  function google (line 7) | def google(query, proxies):
  function scrape_text (line 30) | def scrape_text(url, proxies) -> str:
  function 连接网络回答问题 (line 58) | def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pr...

FILE: crazy_functions/Internet_GPT_Wrap.py
  class NetworkGPT_Wrap (line 7) | class NetworkGPT_Wrap(GptAcademicPluginTemplate):
    method __init__ (line 8) | def __init__(self):
    method define_arg_selection_menu (line 14) | def define_arg_selection_menu(self):
    method execute (line 41) | def execute(txt, llm_kwargs, plugin_kwargs:dict, chatbot, history, sys...

FILE: crazy_functions/Latex_Function.py
  function switch_prompt (line 14) | def switch_prompt(pfg, mode, more_requirement):
  function descend_to_extracted_folder_if_exist (line 44) | def descend_to_extracted_folder_if_exist(project_folder):
  function move_project (line 60) | def move_project(project_folder, arxiv_id=None):
  function arxiv_download (line 91) | def arxiv_download(chatbot, history, txt, allow_cache=True):
  function pdf2tex_project (line 181) | def pdf2tex_project(pdf_file_path, plugin_kwargs):
  function Latex英文纠错加PDF对比 (line 253) | def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, sy...
  function Latex翻译中文并重新编译PDF (line 331) | def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, ...
  function PDF翻译中文并重新编译PDF (line 454) | def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, sy...

FILE: crazy_functions/Latex_Function_Wrap.py
  class Arxiv_Localize (line 6) | class Arxiv_Localize(GptAcademicPluginTemplate):
    method __init__ (line 7) | def __init__(self):
    method define_arg_selection_menu (line 13) | def define_arg_selection_menu(self):
    method execute (line 38) | def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...
  class PDF_Localize (line 55) | class PDF_Localize(GptAcademicPluginTemplate):
    method __init__ (line 56) | def __init__(self):
    method define_arg_selection_menu (line 62) | def define_arg_selection_menu(self):
    method execute (line 81) | def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...

FILE: crazy_functions/Latex_Project_Polish.py
  class PaperFileGroup (line 5) | class PaperFileGroup():
    method __init__ (line 6) | def __init__(self):
    method run_file_split (line 19) | def run_file_split(self, max_token_limit=1900):
    method merge_result (line 37) | def merge_result(self):
    method write_result (line 42) | def write_result(self):
    method zip_result (line 50) | def zip_result(self):
  function 多文件润色 (line 57) | def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chat...
  function Latex英文润色 (line 138) | def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...
  function Latex中文润色 (line 176) | def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...
  function Latex英文纠错 (line 212) | def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...

FILE: crazy_functions/Latex_Project_Translate_Legacy.py
  class PaperFileGroup (line 5) | class PaperFileGroup():
    method __init__ (line 6) | def __init__(self):
    method run_file_split (line 19) | def run_file_split(self, max_token_limit=1900):
  function 多文件翻译 (line 38) | def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chat...
  function Latex英译中 (line 109) | def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pr...
  function Latex中译英 (line 146) | def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pr...

FILE: crazy_functions/Markdown_Translate.py
  class PaperFileGroup (line 8) | class PaperFileGroup():
    method __init__ (line 9) | def __init__(self):
    method run_file_split (line 22) | def run_file_split(self, max_token_limit=2048):
    method merge_result (line 40) | def merge_result(self):
    method write_result (line 45) | def write_result(self, language):
  function 多文件翻译 (line 54) | def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chat...
  function get_files_from_everything (line 121) | def get_files_from_everything(txt, preference=''):
  function Markdown英译中 (line 162) | def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system...
  function Markdown中译英 (line 201) | def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system...
  function Markdown翻译指定语言 (line 233) | def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys...

FILE: crazy_functions/Math_Animation_Gen.py
  function inspect_dependency (line 7) | def inspect_dependency(chatbot, history):
  function eval_manim (line 17) | def eval_manim(code):
  function get_code_block (line 45) | def get_code_block(reply):
  function 动画生成 (line 54) | def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt...
  function examples_of_manim (line 105) | def examples_of_manim():

FILE: crazy_functions/Mermaid_Figure_Gen.py
  function 解析历史输入 (line 185) | def 解析历史输入(history, llm_kwargs, file_manifest, chatbot, plugin_kwargs):
  function Mermaid_Figure_Gen (line 300) | def Mermaid_Figure_Gen(
  class Mermaid_Gen (line 386) | class Mermaid_Gen(GptAcademicPluginTemplate):
    method __init__ (line 387) | def __init__(self):
    method define_arg_selection_menu (line 390) | def define_arg_selection_menu(self):
    method execute (line 413) | def execute(

FILE: crazy_functions/Multi_Agent_Legacy.py
  function remove_model_prefix (line 18) | def remove_model_prefix(llm):
  function Multi_Agent_Legacy终端 (line 25) | def Multi_Agent_Legacy终端(txt, llm_kwargs, plugin_kwargs, chatbot, histor...

FILE: crazy_functions/Multi_LLM_Query.py
  function 同时问询 (line 5) | def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt...
  function 同时问询_指定模型 (line 35) | def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...

FILE: crazy_functions/PDF_QA.py
  function 解析PDF (line 8) | def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system...
  function PDF_QA标准文件输入 (line 65) | def PDF_QA标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syste...

FILE: crazy_functions/PDF_Summary.py
  function 解析PDF (line 12) | def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chat...
  function PDF_Summary (line 106) | def PDF_Summary(txt, llm_kwargs, plugin_kwargs, chatbot, history, system...

FILE: crazy_functions/PDF_Translate.py
  function 批量翻译PDF文档 (line 12) | def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...

FILE: crazy_functions/PDF_Translate_Nougat.py
  function markdown_to_dict (line 14) | def markdown_to_dict(article_content):
  function 批量翻译PDF文档 (line 51) | def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...
  function 解析PDF_基于NOUGAT (line 98) | def 解析PDF_基于NOUGAT(file_manifest, project_folder, llm_kwargs, plugin_kwa...

FILE: crazy_functions/PDF_Translate_Wrap.py
  class PDF_Tran (line 5) | class PDF_Tran(GptAcademicPluginTemplate):
    method __init__ (line 6) | def __init__(self):
    method define_arg_selection_menu (line 12) | def define_arg_selection_menu(self):
    method execute (line 26) | def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...

FILE: crazy_functions/Paper_Abstract_Writer.py
  function 解析Paper (line 7) | def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch...
  function Paper_Abstract_Writer (line 46) | def Paper_Abstract_Writer(txt, llm_kwargs, plugin_kwargs, chatbot, histo...

FILE: crazy_functions/Paper_Reading.py
  class PaperQuestion (line 16) | class PaperQuestion:
  class PaperAnalyzer (line 24) | class PaperAnalyzer:
    method __init__ (line 27) | def __init__(self, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: Lis...
    method _load_paper (line 68) | def _load_paper(self, paper_path: str) -> Generator:
    method _analyze_question (line 88) | def _analyze_question(self, question: PaperQuestion) -> Generator:
    method _generate_summary (line 114) | def _generate_summary(self) -> Generator:
    method save_report (line 145) | def save_report(self, report: str) -> Generator:
    method analyze_paper (line 172) | def analyze_paper(self, paper_path: str) -> Generator:
  function _find_paper_file (line 194) | def _find_paper_file(path: str) -> str:
  function download_paper_by_id (line 224) | def download_paper_by_id(paper_info, chatbot, history) -> str:
  function 快速论文解读 (line 295) | def 快速论文解读(txt: str, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: List,

FILE: crazy_functions/Program_Comment_Gen.py
  function Program_Comment_Gen (line 7) | def Program_Comment_Gen(file_manifest, project_folder, llm_kwargs, plugi...
  function 批量Program_Comment_Gen (line 37) | def 批量Program_Comment_Gen(txt, llm_kwargs, plugin_kwargs, chatbot, histo...

FILE: crazy_functions/Rag_Interface.py
  function handle_document_upload (line 18) | def handle_document_upload(files: List[str], llm_kwargs, plugin_kwargs, ...
  function Rag问答 (line 49) | def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_promp...

FILE: crazy_functions/Social_Helper.py
  class SocialNetwork (line 14) | class SocialNetwork():
    method __init__ (line 15) | def __init__(self):
  class SaveAndLoad (line 18) | class SaveAndLoad():
    method __init__ (line 19) | def __init__(self, user_name, llm_kwargs, auto_load_checkpoint=True, c...
    method does_checkpoint_exist (line 27) | def does_checkpoint_exist(self, checkpoint_dir=None):
    method save_to_checkpoint (line 34) | def save_to_checkpoint(self, checkpoint_dir=None):
    method load_from_checkpoint (line 40) | def load_from_checkpoint(self, checkpoint_dir=None):
  class Friend (line 50) | class Friend(BaseModel):
  class FriendList (line 55) | class FriendList(BaseModel):
  class SocialNetworkWorker (line 59) | class SocialNetworkWorker(SaveAndLoad):
    method ai_socail_advice (line 60) | def ai_socail_advice(self, prompt, llm_kwargs, plugin_kwargs, chatbot,...
    method ai_remove_friend (line 63) | def ai_remove_friend(self, prompt, llm_kwargs, plugin_kwargs, chatbot,...
    method ai_list_friends (line 66) | def ai_list_friends(self, prompt, llm_kwargs, plugin_kwargs, chatbot, ...
    method ai_add_multi_friends (line 69) | def ai_add_multi_friends(self, prompt, llm_kwargs, plugin_kwargs, chat...
    method run (line 84) | def run(self, txt, llm_kwargs, plugin_kwargs, chatbot, history, system...
    method add_friend (line 134) | def add_friend(self, friend):
  function I人助手 (line 148) | def I人助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt...

FILE: crazy_functions/SourceCode_Analyse.py
  function 解析源代码新 (line 6) | def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, cha...
  function make_diagram (line 108) | def make_diagram(this_iteration_files, result, this_iteration_history_fe...
  function 解析项目本身 (line 113) | def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom...
  function 解析一个Python项目 (line 126) | def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syste...
  function 解析一个Matlab项目 (line 145) | def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syste...
  function 解析一个C项目的头文件 (line 164) | def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system...
  function 解析一个C项目 (line 185) | def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro...
  function 解析一个Java项目 (line 208) | def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_...
  function 解析一个前端项目 (line 231) | def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pr...
  function 解析一个Golang项目 (line 261) | def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syste...
  function 解析一个Rust项目 (line 283) | def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_...
  function 解析一个Lua项目 (line 304) | def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...
  function 解析一个CSharp项目 (line 327) | def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syste...
  function 解析任意code项目 (line 348) | def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_...

FILE: crazy_functions/SourceCode_Analyse_JupyterNotebook.py
  class PaperFileGroup (line 7) | class PaperFileGroup():
    method __init__ (line 8) | def __init__(self):
    method run_file_split (line 21) | def run_file_split(self, max_token_limit=1900):
  function parseNotebook (line 41) | def parseNotebook(filename, enable_markdown=1):
  function ipynb解释 (line 66) | def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch...
  function 解析ipynb文件 (line 118) | def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...

FILE: crazy_functions/SourceCode_Comment.py
  function 注释源代码 (line 14) | def 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chat...
  function 注释Python项目 (line 144) | def 注释Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_...

FILE: crazy_functions/SourceCode_Comment_Wrap.py
  class SourceCodeComment_Wrap (line 6) | class SourceCodeComment_Wrap(GptAcademicPluginTemplate):
    method __init__ (line 7) | def __init__(self):
    method define_arg_selection_menu (line 13) | def define_arg_selection_menu(self):
    method execute (line 27) | def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...

FILE: crazy_functions/Vectorstore_QA.py
  function 知识库文件注入 (line 16) | def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro...
  function 读取知识库作答 (line 87) | def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro...

FILE: crazy_functions/VideoResource_GPT.py
  class Query (line 20) | class Query(BaseModel):
  class VideoResource (line 24) | class VideoResource(BaseModel):
  function get_video_resource (line 32) | def get_video_resource(search_keyword):
  function download_video (line 43) | def download_video(bvid, user_name, chatbot, history):
  class Strategy (line 87) | class Strategy(BaseModel):
  function 多媒体任务 (line 95) | def 多媒体任务(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_promp...
  function debug (line 203) | def debug(bvid, llm_kwargs, plugin_kwargs, chatbot, history, system_prom...

FILE: crazy_functions/Void_Terminal.py
  class UserIntention (line 60) | class UserIntention(BaseModel):
  function chat (line 67) | def chat(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt...
  function analyze_intention_with_simple_rules (line 86) | def analyze_intention_with_simple_rules(txt):
  function Void_Terminal (line 107) | def Void_Terminal(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst...
  function Void_Terminal主路由 (line 136) | def Void_Terminal主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, s...

FILE: crazy_functions/Word_Summary.py
  function 解析docx (line 8) | def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, cha...
  function Word_Summary (line 82) | def Word_Summary(txt, llm_kwargs, plugin_kwargs, chatbot, history, syste...

FILE: crazy_functions/agent_fns/auto_agent.py
  class AutoGenMath (line 8) | class AutoGenMath(AutoGenGeneral):
    method define_agents (line 10) | def define_agents(self):

FILE: crazy_functions/agent_fns/echo_agent.py
  class EchoDemo (line 4) | class EchoDemo(PluginMultiprocessManager):
    method subprocess_worker (line 5) | def subprocess_worker(self, child_conn):

FILE: crazy_functions/agent_fns/general.py
  function gpt_academic_generate_oai_reply (line 6) | def gpt_academic_generate_oai_reply(
  class AutoGenGeneral (line 35) | class AutoGenGeneral(PluginMultiprocessManager):
    method gpt_academic_print_override (line 36) | def gpt_academic_print_override(self, user_proxy, message, sender):
    method gpt_academic_get_human_input (line 44) | def gpt_academic_get_human_input(self, user_proxy, message):
    method define_agents (line 63) | def define_agents(self):
    method exe_autogen (line 66) | def exe_autogen(self, input):
    method subprocess_worker (line 97) | def subprocess_worker(self, child_conn):
  class AutoGenGroupChat (line 105) | class AutoGenGroupChat(AutoGenGeneral):
    method exe_autogen (line 106) | def exe_autogen(self, input):
    method define_group_chat_manager_config (line 137) | def define_group_chat_manager_config(self):

FILE: crazy_functions/agent_fns/persistent.py
  class GradioMultiuserManagerForPersistentClasses (line 3) | class GradioMultiuserManagerForPersistentClasses():
    method __init__ (line 4) | def __init__(self):
    method already_alive (line 7) | def already_alive(self, key):
    method set (line 10) | def set(self, key, x):
    method get (line 14) | def get(self, key):

FILE: crazy_functions/agent_fns/pipe.py
  class PipeCom (line 6) | class PipeCom:
    method __init__ (line 7) | def __init__(self, cmd, content) -> None:
  class PluginMultiprocessManager (line 12) | class PluginMultiprocessManager:
    method __init__ (line 13) | def __init__(self, llm_kwargs, plugin_kwargs, chatbot, history, system...
    method feed_heartbeat_watchdog (line 31) | def feed_heartbeat_watchdog(self):
    method is_alive (line 35) | def is_alive(self):
    method launch_subprocess_with_pipe (line 38) | def launch_subprocess_with_pipe(self):
    method terminate (line 48) | def terminate(self):
    method subprocess_worker (line 53) | def subprocess_worker(self, child_conn):
    method send_command (line 57) | def send_command(self, cmd):
    method immediate_showoff_when_possible (line 68) | def immediate_showoff_when_possible(self, fp):
    method overwatch_workdir_file_change (line 81) | def overwatch_workdir_file_change(self):
    method main_process_ui_control (line 112) | def main_process_ui_control(self, txt, create_or_resume) -> str:
    method subprocess_worker_wait_user_feedback (line 181) | def subprocess_worker_wait_user_feedback(self, wait_msg="wait user fee...

FILE: crazy_functions/agent_fns/python_comment_agent.py
  class PythonCodeComment (line 177) | class PythonCodeComment():
    method __init__ (line 179) | def __init__(self, llm_kwargs, plugin_kwargs, language, observe_window...
    method generate_tagged_code_from_full_context (line 198) | def generate_tagged_code_from_full_context(self):
    method read_file (line 206) | def read_file(self, path, brief):
    method find_next_function_begin (line 215) | def find_next_function_begin(self, tagged_code:list, begin_and_end):
    method _get_next_window (line 242) | def _get_next_window(self):
    method dedent (line 268) | def dedent(self, text):
    method get_next_batch (line 312) | def get_next_batch(self):
    method tag_code (line 316) | def tag_code(self, fn, hint):
    method get_markdown_block_in_html (line 354) | def get_markdown_block_in_html(self, html):
    method sync_and_patch (line 365) | def sync_and_patch(self, original, revised):
    method begin_comment_source_code (line 402) | def begin_comment_source_code(self, chatbot=None, history=None):
    method verify_successful (line 439) | def verify_successful(self, original, revised):

FILE: crazy_functions/agent_fns/watchdog.py
  class WatchDog (line 4) | class WatchDog():
    method __init__ (line 5) | def __init__(self, timeout, bark_fn, interval=3, msg="") -> None:
    method watch (line 13) | def watch(self):
    method begin_watch (line 22) | def begin_watch(self):
    method feed (line 28) | def feed(self):

FILE: crazy_functions/ast_fns/comment_remove.py
  function remove_python_comments (line 7) | def remove_python_comments(input_source: str) -> str:

FILE: crazy_functions/crazy_utils.py
  function input_clipping (line 7) | def input_clipping(inputs, history, max_token_limit, return_clip_flags=F...
  function request_gpt_model_in_new_thread_with_ui_alive (line 68) | def request_gpt_model_in_new_thread_with_ui_alive(
  function can_multi_process (line 166) | def can_multi_process(llm) -> bool:
  function request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency (line 187) | def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_effici...
  function read_and_clean_pdf_text (line 359) | def read_and_clean_pdf_text(fp):
  function get_files_from_everything (line 542) | def get_files_from_everything(txt, type): # type='.md'
  class nougat_interface (line 592) | class nougat_interface():
    method __init__ (line 593) | def __init__(self):
    method nougat_with_timeout (line 596) | def nougat_with_timeout(self, command, cwd, timeout=3600):
    method NOUGAT_parse_pdf (line 612) | def NOUGAT_parse_pdf(self, fp, chatbot, history):
  function try_install_deps (line 637) | def try_install_deps(deps, reload_m=[]):
  function get_plugin_arg (line 647) | def get_plugin_arg(plugin_kwargs, key, default):

FILE: crazy_functions/diagram_fns/file_tree.py
  class FileNode (line 5) | class FileNode:
    method __init__ (line 6) | def __init__(self, name, build_manifest=False):
    method add_linebreaks_at_spaces (line 18) | def add_linebreaks_at_spaces(string, interval=10):
    method sanitize_comment (line 21) | def sanitize_comment(self, comment):
    method add_file (line 29) | def add_file(self, file_path, file_comment):
    method print_files_recursively (line 63) | def print_files_recursively(self, level=0, code="R0"):
  function build_file_tree_mermaid_diagram (line 94) | def build_file_tree_mermaid_diagram(file_manifest, file_comments, graph_...

FILE: crazy_functions/doc_fns/AI_review_doc.py
  class DocumentFormatter (line 17) | class DocumentFormatter(ABC):
    method __init__ (line 20) | def __init__(self, final_summary: str, file_summaries_map: Dict, faile...
    method format_failed_files (line 26) | def format_failed_files(self) -> str:
    method format_file_summaries (line 31) | def format_file_summaries(self) -> str:
    method create_document (line 36) | def create_document(self) -> str:
  class WordFormatter (line 41) | class WordFormatter(DocumentFormatter):
    method __init__ (line 44) | def __init__(self, *args, **kwargs):
    method _setup_document (line 56) | def _setup_document(self):
    method _create_styles (line 81) | def _create_styles(self):
    method _create_heading_style (line 98) | def _create_heading_style(self, style_name: str, font_name: str, font_...
    method _get_heading_number (line 111) | def _get_heading_number(self, level: int) -> str:
    method _add_heading (line 139) | def _add_heading(self, text: str, level: int):
    method _get_run_style (line 175) | def _get_run_style(self, run, font_name: str, font_size: int, bold: bo...
    method format_failed_files (line 182) | def format_failed_files(self) -> str:
    method _add_content (line 199) | def _add_content(self, text: str, indent: bool = True):
    method format_file_summaries (line 208) | def format_file_summaries(self) -> str:
    method create_document (line 258) | def create_document(self):
    method save_as_pdf (line 283) | def save_as_pdf(self, word_path, pdf_path=None):
  class MarkdownFormatter (line 302) | class MarkdownFormatter(DocumentFormatter):
    method format_failed_files (line 305) | def format_failed_files(self) -> str:
    method format_file_summaries (line 315) | def format_file_summaries(self) -> str:
    method create_document (line 334) | def create_document(self) -> str:
  class HtmlFormatter (line 353) | class HtmlFormatter(DocumentFormatter):
    method __init__ (line 356) | def __init__(self, *args, **kwargs):
    method format_failed_files (line 744) | def format_failed_files(self) -> str:
    method format_file_summaries (line 758) | def format_file_summaries(self) -> str:
    method create_document (line 780) | def create_document(self) -> str:

FILE: crazy_functions/doc_fns/batch_file_query_doc.py
  class DocumentFormatter (line 17) | class DocumentFormatter(ABC):
    method __init__ (line 20) | def __init__(self, final_summary: str, file_summaries_map: Dict, faile...
    method format_failed_files (line 26) | def format_failed_files(self) -> str:
    method format_file_summaries (line 31) | def format_file_summaries(self) -> str:
    method create_document (line 36) | def create_document(self) -> str:
  class WordFormatter (line 41) | class WordFormatter(DocumentFormatter):
    method __init__ (line 44) | def __init__(self, *args, **kwargs):
    method _setup_document (line 56) | def _setup_document(self):
    method _create_styles (line 81) | def _create_styles(self):
    method _create_heading_style (line 98) | def _create_heading_style(self, style_name: str, font_name: str, font_...
    method _get_heading_number (line 111) | def _get_heading_number(self, level: int) -> str:
    method _add_heading (line 139) | def _add_heading(self, text: str, level: int):
    method _get_run_style (line 175) | def _get_run_style(self, run, font_name: str, font_size: int, bold: bo...
    method format_failed_files (line 182) | def format_failed_files(self) -> str:
    method _add_content (line 199) | def _add_content(self, text: str, indent: bool = True):
    method format_file_summaries (line 208) | def format_file_summaries(self) -> str:
    method create_document (line 258) | def create_document(self):
    method save_as_pdf (line 283) | def save_as_pdf(self, word_path, pdf_path=None):
  class MarkdownFormatter (line 302) | class MarkdownFormatter(DocumentFormatter):
    method format_failed_files (line 305) | def format_failed_files(self) -> str:
    method format_file_summaries (line 315) | def format_file_summaries(self) -> str:
    method create_document (line 334) | def create_document(self) -> str:
  class HtmlFormatter (line 353) | class HtmlFormatter(DocumentFormatter):
    method __init__ (line 356) | def __init__(self, *args, **kwargs):
    method format_failed_files (line 744) | def format_failed_files(self) -> str:
    method format_file_summaries (line 758) | def format_file_summaries(self) -> str:
    method create_document (line 780) | def create_document(self) -> str:

FILE: crazy_functions/doc_fns/content_folder.py
  class FoldingError (line 14) | class FoldingError(Exception):
  class FormattingError (line 19) | class FormattingError(FoldingError):
  class MetadataError (line 24) | class MetadataError(FoldingError):
  class ValidationError (line 29) | class ValidationError(FoldingError):
  class FoldingStyle (line 34) | class FoldingStyle(Enum):
  class FoldingOptions (line 42) | class FoldingOptions:
  class BaseMetadata (line 54) | class BaseMetadata(ABC):
    method validate (line 58) | def validate(self) -> bool:
    method _validate_non_empty_str (line 62) | def _validate_non_empty_str(self, value: Optional[str]) -> bool:
  class FileMetadata (line 68) | class FileMetadata(BaseMetadata):
    method validate (line 76) | def validate(self) -> bool:
  class ContentFormatter (line 91) | class ContentFormatter(ABC, Generic[T]):
    method format (line 98) | def format(self,
    method _create_summary (line 117) | def _create_summary(self, metadata: T) -> str:
    method _format_content_block (line 121) | def _format_content_block(self,
    method _add_indent (line 132) | def _add_indent(self, text: str, level: int) -> str:
  class FileContentFormatter (line 140) | class FileContentFormatter(ContentFormatter[FileMetadata]):
    method format (line 143) | def format(self,
  class ContentFoldingManager (line 183) | class ContentFoldingManager:
    method __init__ (line 186) | def __init__(self):
    method _register_default_formatters (line 191) | def _register_default_formatters(self) -> None:
    method register_formatter (line 195) | def register_formatter(self, name: str, formatter: ContentFormatter) -...
    method _guess_language (line 201) | def _guess_language(self, extension: str) -> Optional[str]:
    method format_content (line 223) | def format_content(self,

FILE: crazy_functions/doc_fns/conversation_doc/excel_doc.py
  class ExcelTableFormatter (line 8) | class ExcelTableFormatter:
    method __init__ (line 11) | def __init__(self):
    method _normalize_table_row (line 17) | def _normalize_table_row(self, row):
    method _is_separator_row (line 26) | def _is_separator_row(self, row):
    method _extract_tables_from_text (line 31) | def _extract_tables_from_text(self, text):
    method _parse_table (line 66) | def _parse_table(self, table_lines):
    method _create_sheet (line 95) | def _create_sheet(self, question_num, table_num):
    method create_document (line 106) | def create_document(self, history):
  function save_chat_tables (line 147) | def save_chat_tables(history, save_dir, base_name):

FILE: crazy_functions/doc_fns/conversation_doc/html_doc.py
  class HtmlFormatter (line 3) | class HtmlFormatter:
    method __init__ (line 6) | def __init__(self, chatbot, history):
    method format_chat_content (line 138) | def format_chat_content(self) -> str:
    method format_history_content (line 152) | def format_history_content(self) -> str:
    method create_document (line 166) | def create_document(self) -> str:

FILE: crazy_functions/doc_fns/conversation_doc/markdown_doc.py
  class MarkdownFormatter (line 2) | class MarkdownFormatter:
    method __init__ (line 5) | def __init__(self):
    method _add_content (line 8) | def _add_content(self, text: str):
    method create_document (line 13) | def create_document(self, history: list) -> str:

FILE: crazy_functions/doc_fns/conversation_doc/pdf_doc.py
  function convert_markdown_to_pdf (line 7) | def convert_markdown_to_pdf(markdown_text):
  class PDFFormatter (line 37) | class PDFFormatter:
    method __init__ (line 40) | def __init__(self):
    method _init_reportlab (line 46) | def _init_reportlab(self):
    method _get_reportlab_lib (line 66) | def _get_reportlab_lib(self):
    method _get_reportlab_platypus (line 69) | def _get_reportlab_platypus(self):
    method _register_fonts (line 72) | def _register_fonts(self):
    method _create_styles (line 96) | def _create_styles(self):
    method create_document (line 140) | def create_document(self, history, output_path):

FILE: crazy_functions/doc_fns/conversation_doc/txt_doc.py
  function convert_markdown_to_txt (line 5) | def convert_markdown_to_txt(markdown_text):
  class TxtFormatter (line 38) | class TxtFormatter:
    method __init__ (line 41) | def __init__(self):
    method _setup_document (line 45) | def _setup_document(self):
    method _format_header (line 51) | def _format_header(self):
    method create_document (line 60) | def create_document(self, history):

FILE: crazy_functions/doc_fns/conversation_doc/word2pdf.py
  class WordToPdfConverter (line 9) | class WordToPdfConverter:
    method convert_to_pdf (line 13) | def convert_to_pdf(word_path: Union[str, Path], pdf_path: Union[str, P...
    method batch_convert (line 89) | def batch_convert(word_dir: Union[str, Path], pdf_dir: Union[str, Path...
    method convert_doc_to_pdf (line 123) | def convert_doc_to_pdf(doc, output_dir: Union[str, Path] = None) -> str:

FILE: crazy_functions/doc_fns/conversation_doc/word_doc.py
  function convert_markdown_to_word (line 10) | def convert_markdown_to_word(markdown_text):
  class WordFormatter (line 48) | class WordFormatter:
    method __init__ (line 51) | def __init__(self):
    method _setup_document (line 56) | def _setup_document(self):
    method _create_styles (line 81) | def _create_styles(self):
    method create_document (line 144) | def create_document(self,  history):

FILE: crazy_functions/doc_fns/read_fns/excel_reader.py
  class ExtractorConfig (line 16) | class ExtractorConfig:
  class ExcelTextExtractor (line 33) | class ExcelTextExtractor:
    method __init__ (line 40) | def __init__(self, config: Optional[ExtractorConfig] = None):
    method _setup_logging (line 45) | def _setup_logging(self) -> None:
    method _detect_encoding (line 56) | def _detect_encoding(self, file_path: Path) -> str:
    method _validate_file (line 69) | def _validate_file(self, file_path: Union[str, Path]) -> Path:
    method _format_value (line 89) | def _format_value(self, value: Any) -> str:
    method _process_chunk (line 96) | def _process_chunk(self, chunk: pd.DataFrame, columns: Optional[List[s...
    method _read_file (line 131) | def _read_file(self, file_path: Path) -> Union[pd.DataFrame, Iterator[...
    method extract_text (line 183) | def extract_text(
    method get_supported_formats (line 246) | def get_supported_formats() -> List[str]:
  function main (line 251) | def main():

FILE: crazy_functions/doc_fns/read_fns/markitdown/markdown_reader.py
  class MarkdownConverterConfig (line 14) | class MarkdownConverterConfig:
  class MarkdownConverter (line 50) | class MarkdownConverter:
    method __init__ (line 60) | def __init__(self, config: Optional[MarkdownConverterConfig] = None):
    method _setup_logging (line 72) | def _setup_logging(self) -> None:
    method _check_markitdown_installation (line 85) | def _check_markitdown_installation(self) -> None:
    method _validate_file (line 104) | def _validate_file(self, file_path: Union[str, Path], max_size_mb: int...
    method _cleanup_text (line 143) | def _cleanup_text(self, text: str) -> str:
    method get_supported_formats (line 164) | def get_supported_formats() -> List[str]:
    method convert_to_markdown (line 168) | def convert_to_markdown(
    method convert_to_markdown_and_save (line 245) | def convert_to_markdown_and_save(
    method batch_convert (line 265) | def batch_convert(
  function main (line 300) | def main():

FILE: crazy_functions/doc_fns/read_fns/unstructured_all/paper_metadata_extractor.py
  class PaperMetadata (line 18) | class PaperMetadata:
  class ExtractorConfig (line 36) | class ExtractorConfig:
  class PaperMetadataExtractor (line 47) | class PaperMetadataExtractor:
    method __init__ (line 65) | def __init__(self, config: Optional[ExtractorConfig] = None):
    method _setup_logging (line 74) | def _setup_logging(self) -> None:
    method _validate_file (line 87) | def _validate_file(self, file_path: Union[str, Path], max_size_mb: int...
    method _cleanup_text (line 126) | def _cleanup_text(self, text: str) -> str:
    method get_supported_formats (line 147) | def get_supported_formats() -> List[str]:
    method extract_metadata (line 151) | def extract_metadata(self, file_path: Union[str, Path], strategy: str ...
    method _extract_title_and_authors (line 194) | def _extract_title_and_authors(self, elements, metadata: PaperMetadata...
    method _evaluate_title_candidate (line 317) | def _evaluate_title_candidate(self, text, position, element):
    method _extract_abstract_and_keywords (line 378) | def _extract_abstract_and_keywords(self, elements, metadata: PaperMeta...
    method _extract_additional_metadata (line 432) | def _extract_additional_metadata(self, elements, metadata: PaperMetada...
  function main (line 461) | def main():

FILE: crazy_functions/doc_fns/read_fns/unstructured_all/paper_structure_extractor.py
  class PaperSection (line 21) | class PaperSection:
  class Figure (line 31) | class Figure:
  class Formula (line 40) | class Formula:
  class Reference (line 48) | class Reference:
  class StructuredPaper (line 59) | class StructuredPaper:
  class ExtractorConfig (line 72) | class ExtractorConfig:
  class PaperStructureExtractor (line 87) | class PaperStructureExtractor:
    method __init__ (line 112) | def __init__(self, config: Optional[ExtractorConfig] = None):
    method _setup_logging (line 122) | def _setup_logging(self) -> None:
    method _cleanup_text (line 135) | def _cleanup_text(self, text: str) -> str:
    method get_supported_formats (line 160) | def get_supported_formats() -> List[str]:
    method extract_paper_structure (line 164) | def extract_paper_structure(self, file_path: Union[str, Path], strateg...
    method _extract_sections (line 222) | def _extract_sections(self, elements) -> List[PaperSection]:
    method _is_likely_section_title (line 273) | def _is_likely_section_title(self, title_text: str, element, index: in...
    method _calculate_followup_content_length (line 526) | def _calculate_followup_content_length(self, index: int, elements, max...
    method _identify_section_type (line 546) | def _identify_section_type(self, title_text: str) -> str:
    method _estimate_title_level (line 561) | def _estimate_title_level(self, title_element, all_elements) -> int:
    method _extract_content_between_indices (line 602) | def _extract_content_between_indices(self, elements, start_index: int,...
    method _extract_content_after_index (line 613) | def _extract_content_after_index(self, elements, start_index: int) -> ...
    method _build_section_hierarchy (line 624) | def _build_section_hierarchy(self, sections: List[PaperSection]) -> Li...
    method _extract_figures_and_tables (line 661) | def _extract_figures_and_tables(self, elements) -> Tuple[List[Figure],...
    method _surrounding_has_math_symbols (line 750) | def _surrounding_has_math_symbols(self, index: int, elements, window: ...
    method _extract_formulas (line 786) | def _extract_formulas(self, elements) -> List[Formula]:
    method _extract_references (line 866) | def _extract_references(self, elements) -> List[Reference]:
    method _extract_full_text (line 960) | def _extract_full_text(self, elements) -> str:
    method generate_markdown (line 972) | def generate_markdown(self, paper: StructuredPaper) -> str:
    method _format_sections_markdown (line 1051) | def _format_sections_markdown(self, sections: List[PaperSection], leve...
    method _format_formula_content (line 1086) | def _format_formula_content(self, content: str) -> str:
    method _is_in_references_section (line 1111) | def _is_in_references_section(self, index: int, elements) -> bool:
  function main (line 1161) | def main():

FILE: crazy_functions/doc_fns/read_fns/unstructured_all/unstructured_md.py
  function extract_and_save_as_markdown (line 4) | def extract_and_save_as_markdown(paper_path, output_path=None):

FILE: crazy_functions/doc_fns/read_fns/web_reader.py
  class WebExtractorConfig (line 13) | class WebExtractorConfig:
  class WebTextExtractor (line 41) | class WebTextExtractor:
    method __init__ (line 47) | def __init__(self, config: Optional[WebExtractorConfig] = None):
    method _setup_logging (line 56) | def _setup_logging(self) -> None:
    method _validate_url (line 69) | def _validate_url(self, url: str) -> bool:
    method _download_webpage (line 84) | def _download_webpage(self, url: str) -> Optional[str]:
    method _cleanup_text (line 113) | def _cleanup_text(self, text: str) -> str:
    method extract_text (line 136) | def extract_text(self, url: str) -> str:
  function main (line 187) | def main():

FILE: crazy_functions/doc_fns/text_content_loader.py
  class FileInfo (line 19) | class FileInfo:
  class TextContentLoader (line 28) | class TextContentLoader:
    method __init__ (line 42) | def __init__(self, chatbot: List, history: List):
    method _create_file_info (line 55) | def _create_file_info(self, entry: os.DirEntry, root_path: str) -> Fil...
    method _process_file_batch (line 78) | def _process_file_batch(self, file_batch: List[FileInfo]) -> List[Tupl...
    method _read_file_content (line 121) | def _read_file_content(self, file_info: FileInfo) -> Optional[str]:
    method _is_valid_file (line 139) | def _is_valid_file(self, file_path: str) -> bool:
    method _collect_files (line 160) | def _collect_files(self, path: str) -> List[FileInfo]:
    method _format_content_with_fold (line 229) | def _format_content_with_fold(self, file_info, content: str) -> str:
    method _format_content_for_llm (line 259) | def _format_content_for_llm(self, file_infos: List[FileInfo], contents...
    method execute (line 287) | def execute(self, txt: str) -> Generator:
    method execute_single_file (line 354) | def execute_single_file(self, file_path: str) -> Generator:
    method __del__ (line 446) | def __del__(self):

FILE: crazy_functions/game_fns/game_ascii_art.py
  class MiniGame_ASCII_Art (line 9) | class MiniGame_ASCII_Art(GptAcademicGameBaseState):
    method step (line 10) | def step(self, prompt, chatbot, history):

FILE: crazy_functions/game_fns/game_interactive_story.py
  class MiniGame_ResumeStory (line 74) | class MiniGame_ResumeStory(GptAcademicGameBaseState):
    method begin_game_step_0 (line 85) | def begin_game_step_0(self, prompt, chatbot, history):
    method generate_story_image (line 93) | def generate_story_image(self, story_paragraph):
    method step (line 102) | def step(self, prompt, chatbot, history):

FILE: crazy_functions/game_fns/game_utils.py
  function get_code_block (line 4) | def get_code_block(reply):
  function is_same_thing (line 12) | def is_same_thing(a, b, llm_kwargs):

FILE: crazy_functions/gen_fns/gen_fns_shared.py
  function get_class_name (line 8) | def get_class_name(class_string):
  function try_make_module (line 14) | def try_make_module(code, chatbot):
  function is_function_successfully_generated (line 30) | def is_function_successfully_generated(fn_path, class_name, return_dict):
  function subprocess_worker (line 49) | def subprocess_worker(code, file_path, return_dict):

FILE: crazy_functions/ipc_fns/mp.py
  function run_in_subprocess_wrapper_func (line 5) | def run_in_subprocess_wrapper_func(v_args):
  function run_in_subprocess_with_timeout (line 15) | def run_in_subprocess_with_timeout(func, timeout=60):

FILE: crazy_functions/json_fns/pydantic_io.py
  class JsonStringError (line 46) | class JsonStringError(Exception): ...
  class GptJsonIO (line 48) | class GptJsonIO():
    method __init__ (line 50) | def __init__(self, schema, example_instruction=True):
    method generate_format_instructions (line 55) | def generate_format_instructions(self):
    method generate_output (line 71) | def generate_output(self, text):
    method generate_repair_prompt (line 82) | def generate_repair_prompt(self, broken_json, error):
    method generate_output_auto_repair (line 93) | def generate_output_auto_repair(self, response, gpt_gen_fn):

FILE: crazy_functions/json_fns/select_tool.py
  function structure_output (line 3) | def structure_output(txt, prompt, err_msg, run_gpt_fn, pydantic_cls):
  function select_tool (line 18) | def select_tool(prompt, run_gpt_fn, pydantic_cls):

FILE: crazy_functions/latex_fns/latex_actions.py
  function split_subprocess (line 19) | def split_subprocess(txt, project_folder, return_dict, opts):
  class LatexPaperSplit (line 84) | class LatexPaperSplit():
    method __init__ (line 90) | def __init__(self) -> None:
    method read_title_and_abstract (line 100) | def read_title_and_abstract(self, txt):
    method merge_result (line 110) | def merge_result(self, arr, mode, msg, buggy_lines=[], buggy_line_surg...
    method split (line 150) | def split(self, txt, project_folder, opts):
  class LatexPaperFileGroup (line 171) | class LatexPaperFileGroup():
    method __init__ (line 175) | def __init__(self):
    method run_file_split (line 187) | def run_file_split(self, max_token_limit=1900):
    method merge_result (line 204) | def merge_result(self):
    method write_result (line 209) | def write_result(self):
  function Latex精细分解与转化 (line 218) | def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwarg...
  function remove_buggy_lines (line 320) | def remove_buggy_lines(file_path, log_path, tex_name, tex_name_pure, n_f...
  function 编译Latex (line 347) | def 编译Latex(chatbot, history, main_file_original, main_file_modified, wo...
  function write_html (line 481) | def write_html(sp_file_contents, sp_file_result, chatbot, project_folder):
  function upload_to_gptac_cloud_if_user_allow (line 509) | def upload_to_gptac_cloud_if_user_allow(chatbot, arxiv_id):
  function check_gptac_cloud (line 547) | def check_gptac_cloud(arxiv_id, chatbot):

FILE: crazy_functions/latex_fns/latex_pickle_io.py
  class SafeUnpickler (line 4) | class SafeUnpickler(pickle.Unpickler):
    method get_safe_classes (line 6) | def get_safe_classes(self):
    method find_class (line 22) | def find_class(self, module, name):
  function objdump (line 34) | def objdump(obj, file="objdump.tmp"):
  function objload (line 41) | def objload(file="objdump.tmp"):

FILE: crazy_functions/latex_fns/latex_toolbox.py
  class LinkedListNode (line 13) | class LinkedListNode:
    method __init__ (line 18) | def __init__(self, string, preserve=True) -> None:
  function convert_to_linklist (line 27) | def convert_to_linklist(text, mask):
  function post_process (line 42) | def post_process(root):
  function set_forbidden_text (line 153) | def set_forbidden_text(text, mask, pattern, flags=0):
  function reverse_forbidden_text (line 168) | def reverse_forbidden_text(text, mask, pattern, flags=0, forbid_wrapper=...
  function set_forbidden_text_careful_brace (line 188) | def set_forbidden_text_careful_brace(text, mask, pattern, flags=0):
  function reverse_forbidden_text_careful_brace (line 212) | def reverse_forbidden_text_careful_brace(
  function set_forbidden_text_begin_end (line 241) | def set_forbidden_text_begin_end(text, mask, pattern, flags=0, limit_n_l...
  function find_main_tex_file (line 285) | def find_main_tex_file(file_manifest, mode):
  function rm_comments (line 334) | def rm_comments(main_file):
  function find_tex_file_ignore_case (line 348) | def find_tex_file_ignore_case(fp):
  function merge_tex_files_ (line 375) | def merge_tex_files_(project_foler, main_file, mode):
  function find_title_and_abs (line 397) | def find_title_and_abs(main_file):
  function merge_tex_files (line 430) | def merge_tex_files(project_foler, main_file, mode):
  function insert_abstract (line 484) | def insert_abstract(tex_content):
  function mod_inbraket (line 524) | def mod_inbraket(match):
  function fix_content (line 538) | def fix_content(final_tex, node_string):
  function compile_latex_with_timeout (line 595) | def compile_latex_with_timeout(command, cwd, timeout=60):
  function run_in_subprocess_wrapper_func (line 611) | def run_in_subprocess_wrapper_func(func, args, kwargs, return_dict, exce...
  function run_in_subprocess (line 622) | def run_in_subprocess(func):
  function _merge_pdfs (line 646) | def _merge_pdfs(pdf1_path, pdf2_path, output_path):
  function _merge_pdfs_ng (line 655) | def _merge_pdfs_ng(pdf1_path, pdf2_path, output_path):
  function _merge_pdfs_legacy (line 856) | def _merge_pdfs_legacy(pdf1_path, pdf2_path, output_path):

FILE: crazy_functions/live_audio/aliyunASR.py
  function write_numpy_to_wave (line 6) | def write_numpy_to_wave(filename, rate, data, add_header=False):
  function is_speaker_speaking (line 87) | def is_speaker_speaking(vad, data, sample_rate):
  class AliyunASR (line 107) | class AliyunASR():
    method test_on_sentence_begin (line 109) | def test_on_sentence_begin(self, message, *args):
    method test_on_sentence_end (line 112) | def test_on_sentence_end(self, message, *args):
    method test_on_start (line 117) | def test_on_start(self, message, *args):
    method test_on_error (line 120) | def test_on_error(self, message, *args):
    method test_on_close (line 124) | def test_on_close(self, *args):
    method test_on_result_chg (line 128) | def test_on_result_chg(self, message, *args):
    method test_on_completed (line 133) | def test_on_completed(self, message, *args):
    method audio_convertion_thread (line 136) | def audio_convertion_thread(self, uuid):
    method get_token (line 223) | def get_token(self):

FILE: crazy_functions/live_audio/audio_io.py
  function Singleton (line 4) | def Singleton(cls):
  class RealtimeAudioDistribution (line 16) | class RealtimeAudioDistribution():
    method __init__ (line 17) | def __init__(self) -> None:
    method clean_up (line 22) | def clean_up(self):
    method feed (line 25) | def feed(self, uuid, audio):
    method read (line 35) | def read(self, uuid):
  function change_sample_rate (line 43) | def change_sample_rate(audio, old_sr, new_sr):

FILE: crazy_functions/media_fns/get_media.py
  function download_video (line 6) | def download_video(video_id, only_audio, user_name, chatbot, history):
  function search_videos (line 33) | def search_videos(keywords):

FILE: crazy_functions/multi_stage/multi_stage_utils.py
  function have_any_recent_upload_files (line 10) | def have_any_recent_upload_files(chatbot):
  class GptAcademicState (line 18) | class GptAcademicState():
    method __init__ (line 19) | def __init__(self):
    method reset (line 22) | def reset(self):
    method dump_state (line 25) | def dump_state(self, chatbot):
    method set_state (line 28) | def set_state(self, chatbot, key, value):
    method get_state (line 32) | def get_state(chatbot, cls=None):
  class GptAcademicGameBaseState (line 41) | class GptAcademicGameBaseState():
    method init_game (line 45) | def init_game(self, chatbot, lock_plugin):
    method lock_plugin (line 51) | def lock_plugin(self, chatbot):
    method get_plugin_name (line 57) | def get_plugin_name(self):
    method dump_state (line 62) | def dump_state(self, chatbot):
    method set_state (line 65) | def set_state(self, chatbot, key, value):
    method sync_state (line 70) | def sync_state(chatbot, llm_kwargs, cls, plugin_name, callback_fn, loc...
    method continue_game (line 83) | def continue_game(self, prompt, chatbot, history):

FILE: crazy_functions/paper_fns/auto_git/handlers/base_handler.py
  class BaseHandler (line 9) | class BaseHandler(ABC):
    method __init__ (line 12) | def __init__(self, github: GitHubSource, llm_kwargs: Dict = None):
    method _get_search_params (line 17) | def _get_search_params(self, plugin_kwargs: Dict) -> Dict:
    method handle (line 27) | async def handle(
    method _search_repositories (line 39) | async def _search_repositories(self, query: str, language: str = None,...
    method _search_bilingual_repositories (line 64) | async def _search_bilingual_repositories(self, english_query: str, chi...
    method _search_code (line 112) | async def _search_code(self, query: str, language: str = None, per_pag...
    method _search_bilingual_code (line 132) | async def _search_bilingual_code(self, english_query: str, chinese_que...
    method _search_users (line 177) | async def _search_users(self, query: str, per_page: int = 30) -> List[...
    method _search_bilingual_users (line 192) | async def _search_bilingual_users(self, english_query: str, chinese_qu...
    method _search_topics (line 233) | async def _search_topics(self, query: str, per_page: int = 30) -> List...
    method _search_bilingual_topics (line 248) | async def _search_bilingual_topics(self, english_query: str, chinese_q...
    method _get_repo_details (line 290) | async def _get_repo_details(self, repos: List[Dict]) -> List[Dict]:
    method _format_repos (line 328) | def _format_repos(self, repos: List[Dict]) -> str:
    method _generate_apology_prompt (line 368) | def _generate_apology_prompt(self, criteria: SearchCriteria) -> str:
    method _get_current_time (line 383) | def _get_current_time(self) -> str:

FILE: crazy_functions/paper_fns/auto_git/handlers/code_handler.py
  class CodeSearchHandler (line 6) | class CodeSearchHandler(BaseHandler):
    method __init__ (line 9) | def __init__(self, github, llm_kwargs=None):
    method handle (line 12) | async def handle(
    method _get_code_details (line 89) | async def _get_code_details(self, code_results: List[Dict]) -> List[Di...
    method _format_code_results (line 119) | def _format_code_results(self, code_results: List[Dict]) -> str:

FILE: crazy_functions/paper_fns/auto_git/handlers/repo_handler.py
  class RepositoryHandler (line 6) | class RepositoryHandler(BaseHandler):
    method __init__ (line 9) | def __init__(self, github, llm_kwargs=None):
    method handle (line 12) | async def handle(
    method _build_repo_detail_prompt (line 113) | def _build_repo_detail_prompt(self, main_repo: Dict, similar_repos: Li...

FILE: crazy_functions/paper_fns/auto_git/handlers/topic_handler.py
  class TopicHandler (line 6) | class TopicHandler(BaseHandler):
    method __init__ (line 9) | def __init__(self, github, llm_kwargs=None):
    method handle (line 12) | async def handle(
    method _format_topic_repos (line 172) | def _format_topic_repos(self, repos: List[Dict]) -> str:

FILE: crazy_functions/paper_fns/auto_git/handlers/user_handler.py
  class UserSearchHandler (line 6) | class UserSearchHandler(BaseHandler):
    method __init__ (line 9) | def __init__(self, github, llm_kwargs=None):
    method handle (line 12) | async def handle(
    method _get_user_details (line 91) | async def _get_user_details(self, users: List[Dict]) -> List[Dict]:
    method _format_users (line 121) | def _format_users(self, users: List[Dict]) -> str:

FILE: crazy_functions/paper_fns/auto_git/query_analyzer.py
  class SearchCriteria (line 6) | class SearchCriteria:
  class QueryAnalyzer (line 17) | class QueryAnalyzer:
    method __init__ (line 24) | def __init__(self):
    method analyze_query (line 32) | def analyze_query(self, query: str, chatbot: List, llm_kwargs: Dict):
    method _normalize_query_type (line 306) | def _normalize_query_type(self, query_type: str, query: str) -> str:
    method _extract_tag (line 325) | def _extract_tag(self, text: str, tag: str) -> str:

FILE: crazy_functions/paper_fns/auto_git/sources/github_source.py
  class GitHubSource (line 9) | class GitHubSource:
    method __init__ (line 21) | def __init__(self, api_key: Optional[Union[str, List[str]]] = None):
    method _initialize (line 36) | def _initialize(self) -> None:
    method _request (line 53) | async def _request(self, method: str, endpoint: str, params: Dict = No...
    method get_user (line 98) | async def get_user(self, username: Optional[str] = None) -> Dict:
    method get_user_repos (line 110) | async def get_user_repos(self, username: Optional[str] = None, sort: s...
    method get_user_starred (line 133) | async def get_user_starred(self, username: Optional[str] = None,
    method get_repo (line 154) | async def get_repo(self, owner: str, repo: str) -> Dict:
    method get_repo_branches (line 167) | async def get_repo_branches(self, owner: str, repo: str, per_page: int...
    method get_repo_commits (line 186) | async def get_repo_commits(self, owner: str, repo: str, sha: Optional[...
    method get_commit_details (line 213) | async def get_commit_details(self, owner: str, repo: str, commit_sha: ...
    method get_file_content (line 229) | async def get_file_content(self, owner: str, repo: str, path: str, ref...
    method get_directory_content (line 257) | async def get_directory_content(self, owner: str, repo: str, path: str...
    method get_issues (line 279) | async def get_issues(self, owner: str, repo: str, state: str = "open",
    method get_issue (line 306) | async def get_issue(self, owner: str, repo: str, issue_number: int) ->...
    method get_issue_comments (line 320) | async def get_issue_comments(self, owner: str, repo: str, issue_number...
    method get_pull_requests (line 336) | async def get_pull_requests(self, owner: str, repo: str, state: str = ...
    method get_pull_request (line 363) | async def get_pull_request(self, owner: str, repo: str, pr_number: int...
    method get_pull_request_files (line 377) | async def get_pull_request_files(self, owner: str, repo: str, pr_numbe...
    method search_repositories (line 393) | async def search_repositories(self, query: str, sort: str = "stars",
    method search_code (line 417) | async def search_code(self, query: str, sort: str = "indexed",
    method search_issues (line 441) | async def search_issues(self, query: str, sort: str = "created",
    method search_users (line 465) | async def search_users(self, query: str, sort: str = "followers",
    method get_organization (line 491) | async def get_organization(self, org: str) -> Dict:
    method get_organization_repos (line 503) | async def get_organization_repos(self, org: str, type: str = "all",
    method get_organization_members (line 529) | async def get_organization_members(self, org: str, per_page: int = 30,...
    method get_repository_languages (line 549) | async def get_repository_languages(self, owner: str, repo: str) -> Dict:
    method get_repository_stats_contributors (line 562) | async def get_repository_stats_contributors(self, owner: str, repo: st...
    method get_repository_stats_commit_activity (line 575) | async def get_repository_stats_commit_activity(self, owner: str, repo:...
  function example_usage (line 588) | async def example_usage():

FILE: crazy_functions/paper_fns/document_structure_extractor.py
  class DocumentSection (line 17) | class DocumentSection:
  class StructuredDocument (line 28) | class StructuredDocument:
  class GenericDocumentStructureExtractor (line 37) | class GenericDocumentStructureExtractor:
    method __init__ (line 73) | def __init__(self):
    method _setup_logging (line 78) | def _setup_logging(self):
    method extract_document_structure (line 86) | def extract_document_structure(self, file_path: str, strategy: str = "...
    method _convert_paper_to_document (line 134) | def _convert_paper_to_document(self, paper: StructuredPaper) -> Struct...
    method _convert_paper_sections (line 163) | def _convert_paper_sections(self, paper_sections: List[PaperSection], ...
    method _extract_generic_structure (line 194) | def _extract_generic_structure(self, elements) -> StructuredDocument:
    method _build_section_hierarchy (line 306) | def _build_section_hierarchy(self, sections: List[DocumentSection]) ->...
    method _is_likely_heading (line 350) | def _is_likely_heading(self, text: str, element, index: int, elements)...
    method _estimate_heading_level (line 406) | def _estimate_heading_level(self, text: str, element) -> int:
    method _identify_section_type (line 460) | def _identify_section_type(self, title_text: str) -> str:
    method _has_sufficient_following_content (line 486) | def _has_sufficient_following_content(self, index: int, elements, min_...
    method _extract_content_between (line 509) | def _extract_content_between(self, elements, start_index: int, end_ind...
    method generate_markdown (line 528) | def generate_markdown(self, doc: StructuredDocument) -> str:
    method _format_sections_markdown (line 564) | def _format_sections_markdown(self, sections: List[DocumentSection], b...

FILE: crazy_functions/paper_fns/file2file_doc/html_doc.py
  class HtmlFormatter (line 1) | class HtmlFormatter:
    method __init__ (line 4) | def __init__(self, processing_type="文本处理"):
    method _escape_html (line 162) | def _escape_html(self, text):
    method _markdown_to_html (line 167) | def _markdown_to_html(self, text):
    method create_document (line 262) | def create_document(self, content: str) -> str:

FILE: crazy_functions/paper_fns/file2file_doc/markdown_doc.py
  class MarkdownFormatter (line 1) | class MarkdownFormatter:
    method __init__ (line 4) | def __init__(self):
    method _add_content (line 7) | def _add_content(self, text: str):
    method create_document (line 12) | def create_document(self, content: str, processing_type: str = "文本处理")...

FILE: crazy_functions/paper_fns/file2file_doc/txt_doc.py
  function convert_markdown_to_txt (line 3) | def convert_markdown_to_txt(markdown_text):
  class TxtFormatter (line 35) | class TxtFormatter:
    method __init__ (line 38) | def __init__(self):
    method _setup_document (line 42) | def _setup_document(self):
    method _format_header (line 48) | def _format_header(self):
    method create_document (line 57) | def create_document(self, content):

FILE: crazy_functions/paper_fns/file2file_doc/word2pdf.py
  class WordToPdfConverter (line 8) | class WordToPdfConverter:
    method convert_to_pdf (line 12) | def convert_to_pdf(word_path: Union[str, Path], pdf_path: Union[str, P...
    method batch_convert (line 59) | def batch_convert(word_dir: Union[str, Path], pdf_dir: Union[str, Path...
    method convert_doc_to_pdf (line 93) | def convert_doc_to_pdf(doc, output_dir: Union[str, Path] = None) -> str:

FILE: crazy_functions/paper_fns/file2file_doc/word_doc.py
  function convert_markdown_to_word (line 9) | def convert_markdown_to_word(markdown_text):
  class WordFormatter (line 46) | class WordFormatter:
    method __init__ (line 49) | def __init__(self):
    method _setup_document (line 54) | def _setup_document(self):
    method _create_styles (line 79) | def _create_styles(self):
    method create_document (line 146) | def create_document(self, content: str, processing_type: str = "文本处理"):

FILE: crazy_functions/paper_fns/github_search.py
  function GitHub项目智能检索 (line 30) | def GitHub项目智能检索(txt: str, llm_kwargs: Dict, plugin_kwargs: Dict, chatbo...

FILE: crazy_functions/paper_fns/journal_paper_recom.py
  class RecommendationQuestion (line 19) | class RecommendationQuestion:
  class JournalConferenceRecommender (line 27) | class JournalConferenceRecommender:
    method __init__ (line 30) | def __init__(self, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: Lis...
    method _load_paper (line 71) | def _load_paper(self, paper_path: str) -> Generator:
    method _analyze_question (line 90) | def _analyze_question(self, question: RecommendationQuestion) -> Gener...
    method _generate_journal_recommendations (line 116) | def _generate_journal_recommendations(self) -> Generator:
    method _generate_conference_recommendations (line 186) | def _generate_conference_recommendations(self) -> Generator:
    method _generate_priority_summary (line 273) | def _generate_priority_summary(self, journal_recommendations: str, con...
    method save_recommendations (line 366) | def save_recommendations(self, journal_recommendations: str, conferenc...
    method recommend_venues (line 412) | def recommend_venues(self, paper_path: str) -> Generator:
    method _add_to_history (line 439) | def _add_to_history(self, journal_recommendations: str, conference_rec...
  function _find_paper_file (line 475) | def _find_paper_file(path: str) -> str:
  function download_paper_by_id (line 505) | def download_paper_by_id(paper_info, chatbot, history) -> str:
  function 论文期刊会议推荐 (line 575) | def 论文期刊会议推荐(txt: str, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: L...

FILE: crazy_functions/paper_fns/paper_download.py
  function extract_paper_id (line 9) | def extract_paper_id(txt):
  function extract_paper_ids (line 45) | def extract_paper_ids(txt):
  function format_arxiv_id (line 74) | def format_arxiv_id(paper_id):
  function get_arxiv_paper (line 81) | def get_arxiv_paper(paper_id):
  function create_zip_archive (line 116) | def create_zip_archive(files, save_path):
  function 论文下载 (line 131) | def 论文下载(txt: str, llm_kwargs, plugin_kwargs, chatbot, history, system_p...
  function test (line 270) | async def test():

FILE: crazy_functions/paper_fns/reduce_aigc.py
  class TextFragment (line 28) | class TextFragment:
  class DocumentProcessor (line 35) | class DocumentProcessor:
    method __init__ (line 38) | def __init__(self, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: Lis...
    method _get_token_limit (line 54) | def _get_token_limit(self) -> int:
    method _create_batch_inputs (line 60) | def _create_batch_inputs(self, fragments: List[TextFragment], current_...
    method _extract_decision (line 135) | def _extract_decision(self, text: str) -> str:
    method process_file (line 147) | def process_file(self, file_path: str) -> Generator:
    method _process_structured_paper (line 171) | def _process_structured_paper(self, file_path: str) -> Generator:
    method _process_regular_file (line 420) | def _process_regular_file(self, file_path: str) -> Generator:
    method save_results (line 510) | def save_results(self, content: str, original_file_path: str) -> List[...
    method _breakdown_section_content (line 540) | def _breakdown_section_content(self, content: str) -> List[str]:
    method _process_text_fragments (line 621) | def _process_text_fragments(self, text_fragments: List[TextFragment], ...
  function 学术降重 (line 787) | def 学术降重(txt: str, llm_kwargs: Dict, plugin_kwargs: Dict, chatbot: List,

FILE: crazy_functions/paper_fns/wiki/wikipedia_api.py
  class WikipediaAPI (line 8) | class WikipediaAPI:
    method __init__ (line 11) | def __init__(self, language: str = "zh", user_agent: str = None,
    method _make_request (line 34) | async def _make_request(self, url, params=None):
    method search (line 80) | async def search(self, query: str, limit: int = 10, namespace: int = 0...
    method get_page_content (line 109) | async def get_page_content(self, title: str, section: Optional[int] = ...
    method get_summary (line 144) | async def get_summary(self, title: str, sentences: int = 3) -> str:
    method get_random_articles (line 178) | async def get_random_articles(self, count: int = 1, namespace: int = 0...
    method login (line 206) | async def login(self, username: str, password: str) -> bool:
    method setup_oauth (line 257) | async def setup_oauth(self, consumer_token: str, consumer_secret: str,
  function example_usage (line 316) | async def example_usage():

FILE: crazy_functions/pdf_fns/breakdown_pdf_txt.py
  function force_breakdown (line 6) | def force_breakdown(txt, limit, get_token_fn):
  function maintain_storage (line 15) | def maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage):
  function cut (line 31) | def cut(limit, get_token_fn, txt_tocut, must_break_at_empty_line, break_...
  function breakdown_text_to_satisfy_token_limit_ (line 88) | def breakdown_text_to_satisfy_token_limit_(txt, limit, llm_model="gpt-3....
  function cut_new (line 117) | def cut_new(limit, get_token_fn, txt_tocut, must_break_at_empty_line, mu...
  function breakdown_text_to_satisfy_token_limit_new_ (line 187) | def breakdown_text_to_satisfy_token_limit_new_(txt, limit, llm_model="gp...
  function cut_from_end_to_satisfy_token_limit_ (line 219) | def cut_from_end_to_satisfy_token_limit_(txt, limit, reserve_token=500, ...

FILE: crazy_functions/pdf_fns/breakdown_txt.py
  function force_breakdown (line 4) | def force_breakdown(txt, limit, get_token_fn):
  function maintain_storage (line 13) | def maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage):
  function cut (line 29) | def cut(limit, get_token_fn, txt_tocut, must_break_at_empty_line, break_...
  function breakdown_text_to_satisfy_token_limit_ (line 86) | def breakdown_text_to_satisfy_token_limit_(txt, limit, llm_model="gpt-3....

FILE: crazy_functions/pdf_fns/parse_pdf.py
  class GROBID_OFFLINE_EXCEPTION (line 14) | class GROBID_OFFLINE_EXCEPTION(Exception): pass
  function get_avail_grobid_url (line 16) | def get_avail_grobid_url():
  function parse_pdf (line 30) | def parse_pdf(pdf_path, grobid_url):
  function produce_report_markdown (line 43) | def produce_report_markdown(gpt_response_collection, meta, paper_meta_in...
  function translate_pdf (line 75) | def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_concl...

FILE: crazy_functions/pdf_fns/parse_pdf_grobid.py
  function 解析PDF_基于GROBID (line 7) | def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwa...

FILE: crazy_functions/pdf_fns/parse_pdf_legacy.py
  function 解析PDF_简单拆解 (line 11) | def 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs,...

FILE: crazy_functions/pdf_fns/parse_pdf_via_doc2x.py
  function retry_request (line 13) | def retry_request(max_retries=3, delay=3):
  function make_request (line 42) | def make_request(method, url, **kwargs):
  function doc2x_api_response_status (line 49) | def doc2x_api_response_status(response, uid=""):
  function 解析PDF_DOC2X_转Latex (line 75) | def 解析PDF_DOC2X_转Latex(pdf_file_path):
  function 解析PDF_DOC2X (line 80) | def 解析PDF_DOC2X(pdf_file_path, format="tex"):
  function 解析PDF_DOC2X_单文件 (line 209) | def 解析PDF_DOC2X_单文件(
  function 解析PDF_基于DOC2X (line 332) | def 解析PDF_基于DOC2X(file_manifest, *args):

FILE: crazy_functions/pdf_fns/parse_word.py
  function extract_text_from_files (line 4) | def extract_text_from_files(txt, chatbot, history):

FILE: crazy_functions/pdf_fns/report_gen_html.py
  class construct_html (line 7) | class construct_html():
    method __init__ (line 8) | def __init__(self) -> None:
    method add_row (line 11) | def add_row(self, a, b):
    method save_file (line 51) | def save_file(self, file_name):

FILE: crazy_functions/plugin_template/plugin_class_template.py
  class ArgProperty (line 6) | class ArgProperty(BaseModel): # PLUGIN_ARG_MENU
  class GptAcademicPluginTemplate (line 13) | class GptAcademicPluginTemplate():
    method __init__ (line 14) | def __init__(self):
    method define_arg_selection_menu (line 21) | def define_arg_selection_menu(self):
    method get_js_code_for_generating_menu (line 40) | def get_js_code_for_generating_menu(self, btnName):
    method execute (line 51) | def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...

FILE: crazy_functions/rag_fns/llama_index_worker.py
  class SaveLoad (line 31) | class SaveLoad():
    method does_checkpoint_exist (line 33) | def does_checkpoint_exist(self, checkpoint_dir=None):
    method save_to_checkpoint (line 40) | def save_to_checkpoint(self, checkpoint_dir=None):
    method load_from_checkpoint (line 45) | def load_from_checkpoint(self, checkpoint_dir=None):
    method create_new_vs (line 56) | def create_new_vs(self):
    method purge (line 59) | def purge(self):
  class LlamaIndexRagWorker (line 65) | class LlamaIndexRagWorker(SaveLoad):
    method __init__ (line 66) | def __init__(self, user_name, llm_kwargs, auto_load_checkpoint=True, c...
    method assign_embedding_model (line 77) | def assign_embedding_model(self):
    method inspect_vector_store (line 80) | def inspect_vector_store(self):
    method add_documents_to_vector_store (line 90) | def add_documents_to_vector_store(self, document_list: List[Document]):
    method add_text_to_vector_store (line 104) | def add_text_to_vector_store(self, text: str):
    method remember_qa (line 115) | def remember_qa(self, question, answer):
    method retrieve_from_store_with_query (line 119) | def retrieve_from_store_with_query(self, query):
    method build_prompt (line 125) | def build_prompt(self, query, nodes):
    method generate_node_array_preview (line 129) | def generate_node_array_preview(self, nodes):
    method purge_vector_store (line 134) | def purge_vector_store(self):

FILE: crazy_functions/rag_fns/milvus_worker.py
  class MilvusSaveLoad (line 38) | class MilvusSaveLoad():
    method does_checkpoint_exist (line 40) | def does_checkpoint_exist(self, checkpoint_dir=None):
    method save_to_checkpoint (line 47) | def save_to_checkpoint(self, checkpoint_dir=None):
    method load_from_checkpoint (line 52) | def load_from_checkpoint(self, checkpoint_dir=None):
    method create_new_vs (line 66) | def create_new_vs(self, checkpoint_dir, overwrite=False):
    method purge (line 76) | def purge(self):
  class MilvusRagWorker (line 79) | class MilvusRagWorker(MilvusSaveLoad, LlamaIndexRagWorker):
    method __init__ (line 81) | def __init__(self, user_name, llm_kwargs, auto_load_checkpoint=True, c...
    method inspect_vector_store (line 92) | def inspect_vector_store(self):

FILE: crazy_functions/rag_fns/rag_file_support.py
  function convert_to_markdown (line 6) | def convert_to_markdown(file_path: str) -> str:
  function extract_text (line 32) | def extract_text(file_path):

FILE: crazy_functions/rag_fns/vector_store_index.py
  class GptacVectorStoreIndex (line 15) | class GptacVectorStoreIndex(VectorStoreIndex):
    method default_vector_store (line 18) | def default_vector_store(

FILE: crazy_functions/review_fns/conversation_doc/endnote_doc.py
  class EndNoteFormatter (line 4) | class EndNoteFormatter:
    method __init__ (line 7) | def __init__(self):
    method create_document (line 10) | def create_document(self, papers: List[PaperMetadata]) -> str:

FILE: crazy_functions/review_fns/conversation_doc/excel_doc.py
  class ExcelTableFormatter (line 7) | class ExcelTableFormatter:
    method __init__ (line 10) | def __init__(self):
    method _normalize_table_row (line 17) | def _normalize_table_row(self, row):
    method _is_separator_row (line 26) | def _is_separator_row(self, row):
    method _extract_tables_from_text (line 31) | def _extract_tables_from_text(self, text):
    method _parse_table (line 66) | def _parse_table(self, table_lines):
    method _create_sheet (line 95) | def _create_sheet(self, question_num, table_num):
    method create_document (line 106) | def create_document(self, history):
  function save_chat_tables (line 147) | def save_chat_tables(history, save_dir, base_name):

FILE: crazy_functions/review_fns/conversation_doc/html_doc.py
  class HtmlFormatter (line 1) | class HtmlFormatter:
    method __init__ (line 4) | def __init__(self):
    method create_document (line 134) | def create_document(self, question: str, answer: str, ranked_papers: l...

FILE: crazy_functions/review_fns/conversation_doc/markdown_doc.py
  class MarkdownFormatter (line 1) | class MarkdownFormatter:
    method __init__ (line 4) | def __init__(self):
    method _add_content (line 7) | def _add_content(self, text: str):
    method create_document (line 12) | def create_document(self, question: str, answer: str, ranked_papers: l...

FILE: crazy_functions/review_fns/conversation_doc/reference_formatter.py
  class ReferenceFormatter (line 5) | class ReferenceFormatter:
    method __init__ (line 8) | def __init__(self):
    method _sanitize_bibtex (line 11) | def _sanitize_bibtex(self, text: str) -> str:
    method _generate_cite_key (line 41) | def _generate_cite_key(self, paper: PaperMetadata) -> str:
    method _get_entry_type (line 73) | def _get_entry_type(self, paper: PaperMetadata) -> str:
    method create_document (line 90) | def create_document(self, papers: List[PaperMetadata]) -> str:

FILE: crazy_functions/review_fns/conversation_doc/word2pdf.py
  class WordToPdfConverter (line 8) | class WordToPdfConverter:
    method _replace_docx_in_filename (line 12) | def _replace_docx_in_filename(filename: Union[str, Path]) -> Path:
    method convert_to_pdf (line 22) | def convert_to_pdf(word_path: Union[str, Path], pdf_path: Union[str, P...
    method batch_convert (line 68) | def batch_convert(word_dir: Union[str, Path], pdf_dir: Union[str, Path...
    method convert_doc_to_pdf (line 106) | def convert_doc_to_pdf(doc, output_dir: Union[str, Path] = None) -> str:

FILE: crazy_functions/review_fns/conversation_doc/word_doc.py
  class WordFormatter (line 13) | class WordFormatter:
    method __init__ (line 16) | def __init__(self):
    method _setup_document (line 21) | def _setup_document(self):
    method _create_styles (line 46) | def _create_styles(self):
    method create_document (line 109) | def create_document(self, question: str, answer: str, ranked_papers: l...
    method _create_hyperlink_style (line 208) | def _create_hyperlink_style(self):
    method _add_hyperlink (line 218) | def _add_hyperlink(self, paragraph, text, url):

FILE: crazy_functions/review_fns/data_sources/adsabs_source.py
  class AdsabsSource (line 10) | class AdsabsSource(DataSource):
    method __init__ (line 20) | def __init__(self, api_key: str = None):
    method _initialize (line 29) | def _initialize(self) -> None:
    method _make_request (line 37) | async def _make_request(self, url: str, method: str = "GET", data: dic...
    method _parse_paper (line 63) | def _parse_paper(self, doc: dict) -> PaperMetadata:
    method search (line 96) | async def search(
    method _build_query_string (line 153) | def _build_query_string(self, params: dict) -> str:
    method get_paper_details (line 157) | async def get_paper_details(self, bibcode: str) -> Optional[PaperMetad...
    method get_related_papers (line 170) | async def get_related_papers(self, bibcode: str, limit: int = 100) -> ...
    method search_by_author (line 190) | async def search_by_author(
    method search_by_journal (line 200) | async def search_by_journal(
    method get_latest_papers (line 210) | async def get_latest_papers(
    method get_citations (line 219) | async def get_citations(self, bibcode: str) -> List[PaperMetadata]:
    method get_references (line 238) | async def get_references(self, bibcode: str) -> List[PaperMetadata]:
  function example_usage (line 257) | async def example_usage():

FILE: crazy_functions/review_fns/data_sources/arxiv_source.py
  class ArxivSource (line 10) | class ArxivSource(DataSource):
    method __init__ (line 164) | def __init__(self):
    method _initialize (line 182) | def _initialize(self) -> None:
    method search (line 186) | async def search(
    method search_by_id (line 221) | async def search_by_id(self, paper_id: Union[str, List[str]]) -> List[...
    method search_by_category (line 237) | async def search_by_category(
    method search_by_authors (line 259) | async def search_by_authors(
    method search_by_date_range (line 279) | async def search_by_date_range(
    method download_pdf (line 296) | async def download_pdf(self, paper_id: str, dirpath: str = "./", filen...
    method download_source (line 321) | async def download_source(self, paper_id: str, dirpath: str = "./", fi...
    method get_citations (line 346) | async def get_citations(self, paper_id: str) -> List[PaperMetadata]:
    method get_references (line 350) | async def get_references(self, paper_id: str) -> List[PaperMetadata]:
    method get_paper_details (line 354) | async def get_paper_details(self, paper_id: str) -> Optional[PaperMeta...
    method _parse_paper_data (line 377) | def _parse_paper_data(self, result: arxiv.Result) -> PaperMetadata:
    method get_latest_papers (line 407) | async def get_latest_papers(
  function example_usage (line 507) | async def example_usage():

FILE: crazy_functions/review_fns/data_sources/base_source.py
  class PaperMetadata (line 5) | class PaperMetadata:
    method __init__ (line 7) | def __init__(
    method if_factor (line 43) | def if_factor(self) -> Optional[float]:
    method if_factor (line 48) | def if_factor(self, value: float):
    method cas_division (line 53) | def cas_division(self) -> Optional[str]:
    method cas_division (line 58) | def cas_division(self, value: str):
    method jcr_division (line 63) | def jcr_division(self) -> Optional[str]:
    method jcr_division (line 68) | def jcr_division(self, value: str):
  class DataSource (line 72) | class DataSource(ABC):
    method __init__ (line 75) | def __init__(self, api_key: Optional[str] = None):
    method _initialize (line 80) | def _initialize(self) -> None:
    method search (line 85) | async def search(self, query: str, limit: int = 100) -> List[PaperMeta...
    method get_paper_details (line 90) | async def get_paper_details(self, paper_id: str) -> PaperMetadata:
    method get_citations (line 95) | async def get_citations(self, paper_id: str) -> List[PaperMetadata]:
    method get_references (line 100) | async def get_references(self, paper_id: str) -> List[PaperMetadata]:

FILE: crazy_functions/review_fns/data_sources/crossref_source.py
  class CrossrefSource (line 7) | class CrossrefSource(DataSource):
    method _initialize (line 16) | def _initialize(self) -> None:
    method search (line 28) | async def search(
    method get_paper_details (line 99) | async def get_paper_details(self, doi: str) -> PaperMetadata:
    method get_references (line 124) | async def get_references(self, doi: str) -> List[PaperMetadata]:
    method get_citations (line 165) | async def get_citations(self, doi: str) -> List[PaperMetadata]:
    method _parse_work (line 193) | def _parse_work(self, work: Dict) -> PaperMetadata:
    method search_by_authors (line 244) | async def search_by_authors(
    method search_by_date_range (line 260) | async def search_by_date_range(
  function example_usage (line 277) | async def example_usage():

FILE: crazy_functions/review_fns/data_sources/elsevier_source.py
  class ElsevierSource (line 10) | class ElsevierSource(DataSource):
    method __init__ (line 19) | def __init__(self, api_key: str = None):
    method _initialize (line 28) | def _initialize(self) -> None:
    method _make_request (line 39) | async def _make_request(self, url: str, params: Dict = None) -> Option...
    method search (line 73) | async def search(
    method _parse_entry (line 124) | def _parse_entry(self, entry: Dict) -> Optional[PaperMetadata]:
    method get_citations (line 166) | async def get_citations(self, doi: str, limit: int = 100) -> List[Pape...
    method get_references (line 190) | async def get_references(self, doi: str) -> List[PaperMetadata]:
    method _parse_reference (line 209) | def _parse_reference(self, ref: Dict) -> Optional[PaperMetadata]:
    method search_by_author (line 247) | async def search_by_author(
    method search_by_affiliation (line 257) | async def search_by_affiliation(
    method search_by_venue (line 267) | async def search_by_venue(
    method test_api_access (line 277) | async def test_api_access(self):
    method get_paper_details (line 308) | async def get_paper_details(self, paper_id: str) -> Optional[PaperMeta...
    method fetch_abstract (line 321) | async def fetch_abstract(self, doi: str) -> Optional[str]:
  function example_usage (line 352) | async def example_usage():

FILE: crazy_functions/review_fns/data_sources/github_source.py
  class GitHubSource (line 9) | class GitHubSource:
    method __init__ (line 18) | def __init__(self, api_key: Optional[Union[str, List[str]]] = None):
    method _initialize (line 33) | def _initialize(self) -> None:
    method _request (line 50) | async def _request(self, method: str, endpoint: str, params: Dict = No...
    method get_user (line 95) | async def get_user(self, username: Optional[str] = None) -> Dict:
    method get_user_repos (line 107) | async def get_user_repos(self, username: Optional[str] = None, sort: s...
    method get_user_starred (line 130) | async def get_user_starred(self, username: Optional[str] = None,
    method get_repo (line 151) | async def get_repo(self, owner: str, repo: str) -> Dict:
    method get_repo_branches (line 164) | async def get_repo_branches(self, owner: str, repo: str, per_page: int...
    method get_repo_commits (line 183) | async def get_repo_commits(self, owner: str, repo: str, sha: Optional[...
    method get_commit_details (line 210) | async def get_commit_details(self, owner: str, repo: str, commit_sha: ...
    method get_file_content (line 226) | async def get_file_content(self, owner: str, repo: str, path: str, ref...
    method get_directory_content (line 254) | async def get_directory_content(self, owner: str, repo: str, path: str...
    method get_issues (line 276) | async def get_issues(self, owner: str, repo: str, state: str = "open",
    method get_issue (line 303) | async def get_issue(self, owner: str, repo: str, issue_number: int) ->...
    method get_issue_comments (line 317) | async def get_issue_comments(self, owner: str, repo: str, issue_number...
    method get_pull_requests (line 333) | async def get_pull_requests(self, owner: str, repo: str, state: str = ...
    method get_pull_request (line 360) | async def get_pull_request(self, owner: str, repo: str, pr_number: int...
    method get_pull_request_files (line 374) | async def get_pull_request_files(self, owner: str, repo: str, pr_numbe...
    method search_repositories (line 390) | async def search_repositories(self, query: str, sort: str = "stars",
    method search_code (line 414) | async def search_code(self, query: str, sort: str = "indexed",
    method search_issues (line 438) | async def search_issues(self, query: str, sort: str = "created",
    method search_users (line 462) | async def search_users(self, query: str, sort: str = "followers",
    method get_organization (line 488) | async def get_organization(self, org: str) -> Dict:
    method get_organization_repos (line 500) | async def get_organization_repos(self, org: str, type: str = "all",
    method get_organization_members (line 526) | async def get_organization_members(self, org: str, per_page: int = 30,...
    method get_repository_languages (line 546) | async def get_repository_languages(self, owner: str, repo: str) -> Dict:
    method get_repository_stats_contributors (line 559) | async def get_repository_stats_contributors(self, owner: str, repo: st...
    method get_repository_stats_commit_activity (line 572) | async def get_repository_stats_commit_activity(self, owner: str, repo:...
  function example_usage (line 585) | async def example_usage():

FILE: crazy_functions/review_fns/data_sources/journal_metrics.py
  class JournalMetrics (line 5) | class JournalMetrics:
    method __init__ (line 8) | def __init__(self):
    method _normalize_journal_name (line 14) | def _normalize_journal_name(self, name: str) -> str:
    method _convert_if_value (line 49) | def _convert_if_value(self, if_str: str) -> Optional[float]:
    method _load_journal_data (line 59) | def _load_journal_data(self):
    method get_journal_metrics (line 99) | def get_journal_metrics(self, venue_name: str, venue_info: dict) -> dict:

FILE: crazy_functions/review_fns/data_sources/openalex_source.py
  class OpenAlexSource (line 8) | class OpenAlexSource(DataSource):
    method _initialize (line 11) | def _initialize(self) -> None:
    method search (line 15) | async def search(self, query: str, limit: int = 100) -> List[PaperMeta...
    method _parse_work (line 36) | def _parse_work(self, work: Dict) -> PaperMetadata:
    method _reformat_name (line 78) | def _reformat_name(self, name: str) -> str:
    method get_paper_details (line 85) | async def get_paper_details(self, doi: str) -> PaperMetadata:
    method get_references (line 96) | async def get_references(self, doi: str) -> List[PaperMetadata]:
    method get_citations (line 107) | async def get_citations(self, doi: str) -> List[PaperMetadata]:
  function example_usage (line 123) | async def example_usage():

FILE: crazy_functions/review_fns/data_sources/pubmed_source.py
  class PubMedSource (line 12) | class PubMedSource(DataSource):
    method __init__ (line 22) | def __init__(self, api_key: str = None):
    method _initialize (line 31) | def _initialize(self) -> None:
    method _make_request (line 39) | async def _make_request(self, url: str) -> Optional[str]:
    method search (line 60) | async def search(
    method _fetch_papers_batch (line 120) | async def _fetch_papers_batch(self, pmids: List[str]) -> List[PaperMet...
    method _parse_article (line 151) | def _parse_article(self, article: ET.Element) -> PaperMetadata:
    method get_paper_details (line 237) | async def get_paper_details(self, pmid: str) -> Optional[PaperMetadata]:
    method get_related_papers (line 242) | async def get_related_papers(self, pmid: str, limit: int = 100) -> Lis...
    method search_by_author (line 280) | async def search_by_author(
    method search_by_journal (line 292) | async def search_by_journal(
    method get_latest_papers (line 304) | async def get_latest_papers(
    method get_citations (line 321) | async def get_citations(self, paper_id: str) -> List[PaperMetadata]:
    method get_references (line 335) | async def get_references(self, paper_id: str) -> List[PaperMetadata]:
  function example_usage (line 374) | async def example_usage():

FILE: crazy_functions/review_fns/data_sources/scihub_source.py
  class SciHub (line 10) | class SciHub:
    method __init__ (line 35) | def __init__(self, doi: str, path: Path, url=None, timeout=60, use_pro...
    method _test_proxy_connection (line 53) | def _test_proxy_connection(self):
    method _check_pdf_validity (line 73) | def _check_pdf_validity(self, content):
    method _send_request (line 85) | def _send_request(self):
    method _extract_url (line 144) | def _extract_url(self, response):
    method _download_pdf (line 181) | def _download_pdf(self, pdf_url):
    method fetch (line 259) | def fetch(self):

FILE: crazy_functions/review_fns/data_sources/scopus_source.py
  class ScopusSource (line 8) | class ScopusSource(DataSource):
    method __init__ (line 17) | def __init__(self, api_key: str = None):
    method _initialize (line 26) | def _initialize(self) -> None:
    method _make_request (line 34) | async def _make_request(self, url: str, params: Dict = None) -> Option...
    method _parse_paper_data (line 56) | def _parse_paper_data(self, data: Dict) -> PaperMetadata:
    method search (line 150) | async def search(
    method get_paper_details (line 212) | async def get_paper_details(self, paper_id: str) -> Optional[PaperMeta...
    method get_citations (line 240) | async def get_citations(self, paper_id: str) -> List[PaperMetadata]:
    method get_references (line 270) | async def get_references(self, paper_id: str) -> List[PaperMetadata]:
    method search_by_author (line 300) | async def search_by_author(
    method search_by_journal (line 312) | async def search_by_journal(
    method get_latest_papers (line 324) | async def get_latest_papers(
  function example_usage (line 333) | async def example_usage():

FILE: crazy_functions/review_fns/data_sources/semantic_source.py
  class SemanticScholarSource (line 6) | class SemanticScholarSource(DataSource):
    method __init__ (line 9) | def __init__(self, api_key: str = None):
    method _initialize (line 18) | def _initialize(self) -> None:
    method _ensure_client (line 41) | async def _ensure_client(self):
    method search (line 47) | async def search(
    method get_paper_details (line 75) | async def get_paper_details(self, doi: str) -> Optional[PaperMetadata]:
    method get_citations (line 85) | async def get_citations(
    method get_references (line 111) | async def get_references(
    method get_recommended_papers (line 137) | async def get_recommended_papers(self, doi: str, limit: int = 100) -> ...
    method get_recommended_papers_from_lists (line 161) | async def get_recommended_papers_from_lists(
    method search_author (line 193) | async def search_author(self, query: str, limit: int = 100) -> List[di...
    method get_author_details (line 217) | async def get_author_details(self, author_id: str) -> Optional[dict]:
    method get_author_papers (line 238) | async def get_author_papers(self, author_id: str, limit: int = 100) ->...
    method get_paper_autocomplete (line 254) | async def get_paper_autocomplete(self, query: str) -> List[dict]:
    method _parse_paper_data (line 278) | def _parse_paper_data(self, paper) -> PaperMetadata:
  function example_usage (line 363) | async def example_usage():

FILE: crazy_functions/review_fns/data_sources/unpaywall_source.py
  class UnpaywallSource (line 6) | class UnpaywallSource(DataSource):
    method _initialize (line 9) | def _initialize(self) -> None:
    method search (line 13) | async def search(self, query: str, limit: int = 100) -> List[PaperMeta...
    method _parse_response (line 27) | def _parse_response(self, data: Dict) -> PaperMetadata:

FILE: crazy_functions/review_fns/handlers/base_handler.py
  class BaseHandler (line 17) | class BaseHandler(ABC):
    method __init__ (line 20) | def __init__(self, arxiv: ArxivSource, semantic: SemanticScholarSource...
    method _get_search_params (line 30) | def _get_search_params(self, plugin_kwargs: Dict) -> Dict:
    method handle (line 39) | async def handle(
    method _search_arxiv (line 51) | async def _search_arxiv(self, params: Dict, limit_multiplier: int = 1,...
    method _search_semantic (line 88) | async def _search_semantic(self, params: Dict, limit_multiplier: int =...
    method _search_pubmed (line 110) | async def _search_pubmed(self, params: Dict, limit_multiplier: int = 1...
    method _search_crossref (line 147) | async def _search_crossref(self, params: Dict, limit_multiplier: int =...
    method _search_adsabs (line 177) | async def _search_adsabs(self, params: Dict, limit_multiplier: int = 1...
    method _search_all_sources (line 198) | async def _search_all_sources(self, criteria: SearchCriteria, search_p...
    method _format_paper_time (line 283) | def _format_paper_time(self, paper) -> str:
    method _format_papers (line 295) | def _format_papers(self, papers: List) -> str:
    method _get_current_time (line 387) | def _get_current_time(self) -> str:
    method _generate_apology_prompt (line 392) | def _generate_apology_prompt(self, criteria: SearchCriteria) -> str:
    method get_ranked_papers (line 406) | def get_ranked_papers(self) -> str:
    method _is_pubmed_paper (line 410) | def _is_pubmed_paper(self, paper) -> bool:

FILE: crazy_functions/review_fns/handlers/latest_handler.py
  class Arxiv最新论文推荐功能 (line 6) | class Arxiv最新论文推荐功能(BaseHandler):
    method __init__ (line 9) | def __init__(self, arxiv, semantic, llm_kwargs=None):
    method handle (line 12) | async def handle(

FILE: crazy_functions/review_fns/handlers/paper_handler.py
  class 单篇论文分析功能 (line 7) | class 单篇论文分析功能(BaseHandler):
    method __init__ (line 10) | def __init__(self, arxiv, semantic, llm_kwargs=None):
    method handle (line 13) | async def handle(
    method _get_paper_details (line 121) | async def _get_paper_details(self, criteria: SearchCriteria):
    method _get_citation_context (line 202) | async def _get_citation_context(self, paper: Dict, plugin_kwargs: Dict...
    method _generate_analysis (line 235) | async def _generate_analysis(
    method _get_technical_prompt (line 328) | def _get_technical_prompt(self, paper: Dict) -> str:

FILE: crazy_functions/review_fns/handlers/qa_handler.py
  class 学术问答功能 (line 7) | class 学术问答功能(BaseHandler):
    method __init__ (line 10) | def __init__(self, arxiv, semantic, llm_kwargs=None):
    method handle (line 13) | async def handle(
    method _search_relevant_papers (line 83) | async def _search_relevant_papers(self, criteria: SearchCriteria, sear...
    method _generate_answer (line 100) | async def _generate_answer(

FILE: crazy_functions/review_fns/handlers/recommend_handler.py
  class 论文推荐功能 (line 7) | class 论文推荐功能(BaseHandler):
    method __init__ (line 10) | def __init__(self, arxiv, semantic, llm_kwargs=None):
    method handle (line 13) | async def handle(
    method _search_seed_papers (line 114) | async def _search_seed_papers(self, criteria: SearchCriteria, search_p...
    method _get_recommendations (line 129) | async def _get_recommendations(self, seed_papers: List, multiplier: in...

FILE: crazy_functions/review_fns/handlers/review_handler.py
  class 文献综述功能 (line 6) | class 文献综述功能(BaseHandler):
    method __init__ (line 9) | def __init__(self, arxiv, semantic, llm_kwargs=None):
    method handle (line 12) | async def handle(
    method _generate_medical_review_prompt (line 51) | def _generate_medical_review_prompt(self, criteria: SearchCriteria) ->...
    method _generate_general_review_prompt (line 123) | def _generate_general_review_prompt(self, criteria: SearchCriteria) ->...

FILE: crazy_functions/review_fns/paper_processor/paper_llm_ranker.py
  class PaperLLMRanker (line 8) | class PaperLLMRanker:
    method __init__ (line 11) | def __init__(self, llm_kwargs: Dict = None):
    method _update_paper_metrics (line 15) | def _update_paper_metrics(self, papers: List[PaperMetadata]) -> None:
    method _get_year_as_int (line 35) | def _get_year_as_int(self, paper) -> int:
    method rank_papers (line 87) | def rank_papers(
    method _build_enhanced_query (line 251) | def _build_enhanced_query(self, query: str, criteria: SearchCriteria) ...
    method _build_enhanced_document (line 284) | def _build_enhanced_document(self, paper: PaperMetadata, criteria: Sea...
    method _select_papers_strategically (line 330) | def _select_papers_strategically(

FILE: crazy_functions/review_fns/query_analyzer.py
  class SearchCriteria (line 9) | class SearchCriteria:
  class QueryAnalyzer (line 27) | class QueryAnalyzer:
    method __init__ (line 44) | def __init__(self):
    method analyze_query (line 53) | def analyze_query(self, query: str, chatbot: List, llm_kwargs: Dict):
    method _normalize_query_type (line 400) | def _normalize_query_type(self, query_type: str, query: str) -> str:
    method _extract_tag (line 419) | def _extract_tag(self, text: str, tag: str) -> str:

FILE: crazy_functions/review_fns/query_processor.py
  class QueryProcessor (line 10) | class QueryProcessor:
    method __init__ (line 13) | def __init__(self):
    method process_query (line 26) | async def process_query(

FILE: crazy_functions/vector_fns/general_file_loader.py
  class ChineseTextSplitter (line 9) | class ChineseTextSplitter(CharacterTextSplitter):
    method __init__ (line 10) | def __init__(self, pdf: bool = False, sentence_size: int = None, **kwa...
    method split_text1 (line 15) | def split_text1(self, text: str) -> List[str]:
    method split_text (line 29) | def split_text(self, text: str) -> List[str]:   ##此处需要进一步优化逻辑
  function load_file (line 64) | def load_file(filepath, sentence_size):

FILE: crazy_functions/vector_fns/vector_database.py
  function similarity_search_with_score_by_vector (line 59) | def similarity_search_with_score_by_vector(
  class LocalDocQA (line 130) | class LocalDocQA:
    method init_cfg (line 138) | def init_cfg(self,
    method init_knowledge_vector_store (line 145) | def init_knowledge_vector_store(self,
    method get_loaded_file (line 205) | def get_loaded_file(self, vs_path):
    method get_knowledge_based_content_test (line 216) | def get_knowledge_based_content_test(self, query, vs_path, chunk_content,
  function construct_vector_store (line 244) | def construct_vector_store(vs_id, vs_path, files, sentence_size, history...
  class knowledge_archive_interface (line 270) | class knowledge_archive_interface():
    method __init__ (line 271) | def __init__(self) -> None:
    method get_chinese_text2vec (line 278) | def get_chinese_text2vec(self):
    method feed_archive (line 290) | def feed_archive(self, file_manifest, vs_path, id="default"):
    method get_current_archive_id (line 306) | def get_current_archive_id(self):
    method get_loaded_file (line 309) | def get_loaded_file(self, vs_path):
    method answer_with_archive_by_id (line 312) | def answer_with_archive_by_id(self, txt, id, vs_path):

FILE: crazy_functions/vt_fns/vt_call_plugin.py
  function read_avail_plugin_enum (line 9) | def read_avail_plugin_enum():
  function wrap_code (line 22) | def wrap_code(txt):
  function have_any_recent_upload_files (line 26) | def have_any_recent_upload_files(chatbot):
  function get_recent_file_prompt_support (line 34) | def get_recent_file_prompt_support(chatbot):
  function get_inputs_show_user (line 43) | def get_inputs_show_user(inputs, plugin_arr_enum_prompt):
  function execute_plugin (line 52) | def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys...

FILE: crazy_functions/vt_fns/vt_modify_config.py
  function modify_configuration_hot (line 9) | def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, hi...
  function modify_configuration_reboot (line 68) | def modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot,...

FILE: crazy_functions/vt_fns/vt_state.py
  class VoidTerminalState (line 3) | class VoidTerminalState():
    method __init__ (line 4) | def __init__(self):
    method reset_state (line 7) | def reset_state(self):
    method lock_plugin (line 10) | def lock_plugin(self, chatbot):
    method unlock_plugin (line 14) | def unlock_plugin(self, chatbot):
    method set_state (line 19) | def set_state(self, chatbot, key, value):
    method get_state (line 23) | def get_state(chatbot):

FILE: crazy_functions/word_dfa/dfa_algo.py
  class Term (line 2611) | class Term:
    method __str__ (line 2616) | def __str__(self):
  class DFA (line 2619) | class DFA:
    method __init__ (line 2620) | def __init__(self):
    method build_dfa (line 2624) | def build_dfa(self):
    method is_at_word_end (line 2654) | def is_at_word_end(self, text, j):
    method search (line 2669) | def search(self, text):
  function main (line 2689) | def main():

FILE: crazy_functions/高级功能函数模板.py
  function 高阶功能模板函数 (line 33) | def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pr...
  class Demo_Wrap (line 74) | class Demo_Wrap(GptAcademicPluginTemplate):
    method __init__ (line 75) | def __init__(self):
    method define_arg_selection_menu (line 81) | def define_arg_selection_menu(self):
    method execute (line 91) | def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p...
  function 测试图表渲染 (line 128) | def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom...

FILE: docs/javascripts/animations.js
  function initScrollAnimations (line 35) | function initScrollAnimations() {
  function initImageAnimations (line 67) | function initImageAnimations() {
  function addCodeLanguageBadges (line 96) | function addCodeLanguageBadges() {
  function initCopyButtonAnimations (line 123) | function initCopyButtonAnimations() {
  function initSmoothScroll (line 155) | function initSmoothScroll() {
  function handleReducedMotion (line 187) | function handleReducedMotion() {
  function enhanceTabSwitching (line 216) | function enhanceTabSwitching() {
  function enhanceDetails (line 244) | function enhanceDetails() {
  function enhanceNavigation (line 267) | function enhanceNavigation() {
  function debounce (line 296) | function debounce(func, wait) {
  function initScrollProgress (line 315) | function initScrollProgress() {
  function init (line 356) | function init() {

FILE: docs/javascripts/code-copy.js
  function initCodeCopyButtons (line 9) | function initCodeCopyButtons() {
  function fallbackCopyTextToClipboard (line 124) | function fallbackCopyTextToClipboard(text, button) {
  function showCopiedState (line 146) | function showCopiedState(button) {

FILE: docs/javascripts/nav-scroll-fix.js
  function getSidebar (line 25) | function getSidebar() {
  function setScrollTopInstant (line 38) | function setScrollTopInstant(sidebar, position) {
  function saveSidebarScroll (line 56) | function saveSidebarScroll() {
  function restoreSidebarScroll (line 74) | function restoreSidebarScroll() {
  function init (line 103) | function init() {

FILE: docs/javascripts/responsive.js
  function initMobileNav (line 19) | function initMobileNav() {
  function toggleMobileNav (line 71) | function toggleMobileNav() {
  function closeMobileNav (line 90) | function closeMobileNav() {
  function initTableScroll (line 111) | function initTableScroll() {
  function updateTableScrollState (line 140) | function updateTableScrollState(wrapper) {
  function initTouchOptimization (line 160) | function initTouchOptimization() {
  function initViewportFix (line 187) | function initViewportFix() {
  function initScrollProgress (line 206) | function initScrollProgress() {
  function initScrollToTop (line 232) | function initScrollToTop() {
  function initResponsiveImages (line 268) | function initResponsiveImages() {
  function debounce (line 300) | function debounce(func, wait) {
  function throttle (line 312) | function throttle(func, limit) {
  function init (line 331) | function init() {

FILE: docs/javascripts/tabbed-code.js
  function initTabbedSets (line 9) | function initTabbedSets() {
  function addCopyButtonToTabbedSet (line 76) | function addCopyButtonToTabbedSet(tabbedSet) {
  function fallbackCopyTextToClipboard (line 127) | function fallbackCopyTextToClipboard(text, button) {
  function showCopiedState (line 149) | function showCopiedState(button) {

FILE: main.py
  function enable_log (line 17) | def enable_log(PATH_LOGGING):
  function encode_plugin_info (line 21) | def encode_plugin_info(k, plugin)->str:
  function main (line 35) | def main():

FILE: multi_language.py
  function lru_file_cache (line 59) | def lru_file_cache(maxsize=128, ttl=None, filename=None):
  function contains_chinese (line 127) | def contains_chinese(string):
  function split_list (line 134) | def split_list(lst, n_each_req):
  function map_to_json (line 146) | def map_to_json(map, language):
  function read_map_from_json (line 152) | def read_map_from_json(language):
  function advanced_split (line 160) | def advanced_split(splitted_string, spliter, include_spliter=False):
  function trans (line 182) | def trans(word_to_translate, language, special=False):
  function trans_json (line 245) | def trans_json(word_to_translate, language, special=False):
  function step_1_core_key_translate (line 295) | def step_1_core_key_translate():
  function step_2_core_key_translate (line 390) | def step_2_core_key_translate():

FILE: request_llms/bridge_all.py
  class LazyloadTiktoken (line 51) | class LazyloadTiktoken(object):
    method __init__ (line 52) | def __init__(self, model):
    method get_encoder (line 57) | def get_encoder(model):
    method encode (line 63) | def encode(self, *args, **kwargs):
    method decode (line 67) | def decode(self, *args, **kwargs):
  function LLM_CATCH_EXCEPTION (line 1428) | def LLM_CATCH_EXCEPTION(f):
  function predict_no_ui_long_connection (line 1442) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function execute_model_override (line 1524) | def execute_model_override(llm_kwargs, additional_fn, method):
  function predict (line 1539) | def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot,

FILE: request_llms/bridge_chatglm.py
  class GetGLM2Handle (line 13) | class GetGLM2Handle(LocalLLMHandle):
    method load_model_info (line 15) | def load_model_info(self):
    method load_model_and_tokenizer (line 20) | def load_model_and_tokenizer(self):
    method llm_stream_generator (line 47) | def llm_stream_generator(self, **kwargs):
    method try_to_import_special_deps (line 68) | def try_to_import_special_deps(self, **kwargs):

FILE: request_llms/bridge_chatglm3.py
  class GetGLM3Handle (line 12) | class GetGLM3Handle(LocalLLMHandle):
    method load_model_info (line 14) | def load_model_info(self):
    method load_model_and_tokenizer (line 19) | def load_model_and_tokenizer(self):
    method llm_stream_generator (line 62) | def llm_stream_generator(self, **kwargs):
    method try_to_import_special_deps (line 84) | def try_to_import_special_deps(self, **kwargs):

FILE: request_llms/bridge_chatglm4.py
  class GetGLM4Handle (line 16) | class GetGLM4Handle(LocalLLMHandle):
    method load_model_info (line 18) | def load_model_info(self):
    method load_model_and_tokenizer (line 23) | def load_model_and_tokenizer(self):
    method llm_stream_generator (line 44) | def llm_stream_generator(self, **kwargs):
    method try_to_import_special_deps (line 68) | def try_to_import_special_deps(self, **kwargs):

FILE: request_llms/bridge_chatglmft.py
  function string_to_options (line 14) | def string_to_options(arguments):
  class GetGLMFTHandle (line 30) | class GetGLMFTHandle(Process):
    method __init__ (line 31) | def __init__(self):
    method check_dependency (line 42) | def check_dependency(self):
    method ready (line 51) | def ready(self):
    method run (line 54) | def run(self):
    method stream_chat (line 126) | def stream_chat(self, **kwargs):
  function predict_no_ui_long_connection (line 141) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 173) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...

FILE: request_llms/bridge_chatglmonnx.py
  class GetONNXGLMHandle (line 20) | class GetONNXGLMHandle(LocalLLMHandle):
    method load_model_info (line 22) | def load_model_info(self):
    method load_model_and_tokenizer (line 27) | def load_model_and_tokenizer(self):
    method llm_stream_generator (line 41) | def llm_stream_generator(self, **kwargs):
    method try_to_import_special_deps (line 63) | def try_to_import_special_deps(self, **kwargs):

FILE: request_llms/bridge_chatgpt.py
  function get_full_error (line 37) | def get_full_error(chunk, stream_response):
  function make_multimodal_input (line 48) | def make_multimodal_input(inputs, image_paths):
  function reverse_base64_from_input (line 57) | def reverse_base64_from_input(inputs):
  function contain_base64 (line 66) | def contain_base64(inputs):
  function append_image_if_contain_base64 (line 70) | def append_image_if_contain_base64(inputs):
  function remove_image_if_contain_base64 (line 91) | def remove_image_if_contain_base64(inputs):
  function decode_chunk (line 99) | def decode_chunk(chunk):
  function verify_endpoint (line 120) | def verify_endpoint(endpoint):
  function predict_no_ui_long_connection (line 128) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 226) | def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:Cha...
  function handle_o1_model_special (line 408) | def handle_o1_model_special(response, inputs, llm_kwargs, chatbot, histo...
  function handle_error (line 419) | def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, er...
  function generate_payload (line 449) | def generate_payload(inputs:str, llm_kwargs:dict, history:list, system_p...

FILE: request_llms/bridge_chatgpt_vision.py
  function report_invalid_key (line 29) | def report_invalid_key(key):
  function get_full_error (line 33) | def get_full_error(chunk, stream_response):
  function decode_chunk (line 44) | def decode_chunk(chunk):
  function verify_endpoint (line 64) | def verify_endpoint(endpoint):
  function predict_no_ui_long_connection (line 70) | def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_pr...
  function predict (line 74) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...
  function handle_error (line 215) | def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, er...
  function generate_payload (line 246) | def generate_payload(inputs, llm_kwargs, history, system_prompt, image_p...

FILE: request_llms/bridge_claude.py
  function get_full_error (line 32) | def get_full_error(chunk, stream_response):
  function decode_chunk (line 43) | def decode_chunk(chunk):
  function predict_no_ui_long_connection (line 72) | def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_pr...
  function make_media_input (line 142) | def make_media_input(history,inputs,image_paths):
  function predict (line 147) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...
  function multiple_picture_types (line 236) | def multiple_picture_types(image_paths):
  function generate_payload (line 251) | def generate_payload(inputs, llm_kwargs, history, system_prompt, image_p...

FILE: request_llms/bridge_cohere.py
  function get_full_error (line 31) | def get_full_error(chunk, stream_response):
  function decode_chunk (line 42) | def decode_chunk(chunk):
  function verify_endpoint (line 63) | def verify_endpoint(endpoint):
  function predict_no_ui_long_connection (line 71) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 127) | def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:Cha...
  function handle_error (line 241) | def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, er...
  function generate_payload (line 271) | def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):

FILE: request_llms/bridge_deepseekcoder.py
  function download_huggingface_model (line 11) | def download_huggingface_model(model_name, max_retry, local_dir):
  class GetCoderLMHandle (line 23) | class GetCoderLMHandle(LocalLLMHandle):
    method load_model_info (line 25) | def load_model_info(self):
    method load_model_and_tokenizer (line 30) | def load_model_and_tokenizer(self):
    method llm_stream_generator (line 83) | def llm_stream_generator(self, **kwargs):
    method try_to_import_special_deps (line 119) | def try_to_import_special_deps(self, **kwargs): pass

FILE: request_llms/bridge_google_gemini.py
  function predict_no_ui_long_connection (line 18) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function make_media_input (line 47) | def make_media_input(inputs, image_paths):
  function predict (line 56) | def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:Cha...

FILE: request_llms/bridge_internlm.py
  function try_to_import_special_deps (line 16) | def try_to_import_special_deps():
  function combine_history (line 19) | def combine_history(prompt, hist):
  class GetInternlmHandle (line 37) | class GetInternlmHandle(LocalLLMHandle):
    method load_model_info (line 39) | def load_model_info(self):
    method try_to_import_special_deps (line 44) | def try_to_import_special_deps(self, **kwargs):
    method load_model_and_tokenizer (line 50) | def load_model_and_tokenizer(self):
    method llm_stream_generator (line 66) | def llm_stream_generator(self, **kwargs):

FILE: request_llms/bridge_jittorllms_llama.py
  class GetGLMHandle (line 12) | class GetGLMHandle(Process):
    method __init__ (line 13) | def __init__(self):
    method check_dependency (line 24) | def check_dependency(self):
    method ready (line 36) | def ready(self):
    method run (line 39) | def run(self):
    method stream_chat (line 94) | def stream_chat(self, **kwargs):
  function predict_no_ui_long_connection (line 109) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 141) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...

FILE: request_llms/bridge_jittorllms_pangualpha.py
  class GetGLMHandle (line 12) | class GetGLMHandle(Process):
    method __init__ (line 13) | def __init__(self):
    method check_dependency (line 24) | def check_dependency(self):
    method ready (line 36) | def ready(self):
    method run (line 39) | def run(self):
    method stream_chat (line 94) | def stream_chat(self, **kwargs):
  function predict_no_ui_long_connection (line 109) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 141) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...

FILE: request_llms/bridge_jittorllms_rwkv.py
  class GetGLMHandle (line 12) | class GetGLMHandle(Process):
    method __init__ (line 13) | def __init__(self):
    method check_dependency (line 24) | def check_dependency(self):
    method ready (line 36) | def ready(self):
    method run (line 39) | def run(self):
    method stream_chat (line 94) | def stream_chat(self, **kwargs):
  function predict_no_ui_long_connection (line 109) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 141) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...

FILE: request_llms/bridge_llama2.py
  class GetLlamaHandle (line 15) | class GetLlamaHandle(LocalLLMHandle):
    method load_model_info (line 17) | def load_model_info(self):
    method load_model_and_tokenizer (line 22) | def load_model_and_tokenizer(self):
    method llm_stream_generator (line 41) | def llm_stream_generator(self, **kwargs):
    method try_to_import_special_deps (line 80) | def try_to_import_special_deps(self, **kwargs):

FILE: request_llms/bridge_moonshot.py
  class MoonShotInit (line 15) | class MoonShotInit:
    method __init__ (line 17) | def __init__(self):
    method __converter_file (line 22) | def __converter_file(self, user_input: str):
    method __converter_user (line 40) | def __converter_user(self, user_input: str):
    method __conversation_history (line 44) | def __conversation_history(self, history):
    method _analysis_content (line 65) | def _analysis_content(self, chuck):
    method generate_payload (line 76) | def generate_payload(self, inputs, llm_kwargs, history, system_prompt,...
    method generate_messages (line 101) | def generate_messages(self, inputs, llm_kwargs, history, system_prompt...
  function msg_handle_error (line 120) | def msg_handle_error(llm_kwargs, chunk_decoded):
  function predict (line 149) | def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:Cha...
  function predict_no_ui_long_connection (line 171) | def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_pr...

FILE: request_llms/bridge_moss.py
  class GetGLMHandle (line 10) | class GetGLMHandle(Process):
    method __init__ (line 11) | def __init__(self): # 主进程执行
    method check_dependency (line 22) | def check_dependency(self): # 主进程执行
    method ready (line 35) | def ready(self):
    method moss_init (line 39) | def moss_init(self): # 子进程执行
    method run (line 105) | def run(self): # 子进程执行
    method stream_chat (line 159) | def stream_chat(self, **kwargs): # 主进程执行
  function predict_no_ui_long_connection (line 174) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 205) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...

FILE: request_llms/bridge_newbingfree.py
  function preprocess_newbing_out (line 27) | def preprocess_newbing_out(s):
  function preprocess_newbing_out_simple (line 40) | def preprocess_newbing_out_simple(result):
  class NewBingHandle (line 50) | class NewBingHandle(Process):
    method __init__ (line 51) | def __init__(self):
    method check_dependency (line 62) | def check_dependency(self):
    method ready (line 73) | def ready(self):
    method async_run (line 76) | async def async_run(self):
    method run (line 125) | def run(self):
    method stream_chat (line 179) | def stream_chat(self, **kwargs):
  function predict_no_ui_long_connection (line 206) | def predict_no_ui_long_connection(
  function predict (line 253) | def predict(

FILE: request_llms/bridge_ollama.py
  function get_full_error (line 32) | def get_full_error(chunk, stream_response):
  function decode_chunk (line 43) | def decode_chunk(chunk):
  function predict_no_ui_long_connection (line 55) | def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_pr...
  function predict (line 120) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...
  function handle_error (line 210) | def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, er...
  function generate_payload (line 228) | def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):

FILE: request_llms/bridge_openrouter.py
  function get_full_error (line 31) | def get_full_error(chunk, stream_response):
  function make_multimodal_input (line 42) | def make_multimodal_input(inputs, image_paths):
  function reverse_base64_from_input (line 51) | def reverse_base64_from_input(inputs):
  function contain_base64 (line 60) | def contain_base64(inputs):
  function append_image_if_contain_base64 (line 64) | def append_image_if_contain_base64(inputs):
  function remove_image_if_contain_base64 (line 85) | def remove_image_if_contain_base64(inputs):
  function decode_chunk (line 93) | def decode_chunk(chunk):
  function verify_endpoint (line 114) | def verify_endpoint(endpoint):
  function predict_no_ui_long_connection (line 122) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 208) | def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:Cha...
  function handle_o1_model_special (line 366) | def handle_o1_model_special(response, inputs, llm_kwargs, chatbot, histo...
  function handle_error (line 377) | def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, er...
  function generate_payload (line 407) | def generate_payload(inputs:str, llm_kwargs:dict, history:list, system_p...

FILE: request_llms/bridge_qianfan.py
  function cache_decorator (line 11) | def cache_decorator(timeout):
  function get_access_token (line 31) | def get_access_token():
  function generate_message_payload (line 50) | def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
  function generate_from_baidu_qianfan (line 77) | def generate_from_baidu_qianfan(inputs, llm_kwargs, history, system_prom...
  function predict_no_ui_long_connection (line 123) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 139) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...

FILE: request_llms/bridge_qwen.py
  function predict_no_ui_long_connection (line 8) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 26) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...

FILE: request_llms/bridge_qwen_local.py
  class GetQwenLMHandle (line 12) | class GetQwenLMHandle(LocalLLMHandle):
    method load_model_info (line 14) | def load_model_info(self):
    method load_model_and_tokenizer (line 19) | def load_model_and_tokenizer(self):
    method llm_stream_generator (line 34) | def llm_stream_generator(self, **kwargs):
    method try_to_import_special_deps (line 49) | def try_to_import_special_deps(self, **kwargs):

FILE: request_llms/bridge_skylark2.py
  function validate_key (line 7) | def validate_key():
  function predict_no_ui_long_connection (line 12) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 33) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...

FILE: request_llms/bridge_spark.py
  function validate_key (line 10) | def validate_key():
  function predict_no_ui_long_connection (line 16) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 37) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...

FILE: request_llms/bridge_stackclaude.py
  class SlackClient (line 25) | class SlackClient(AsyncWebClient):
    method open_channel (line 41) | async def open_channel(self):
    method chat (line 47) | async def chat(self, text):
    method get_slack_messages (line 54) | async def get_slack_messages(self):
    method get_reply (line 69) | async def get_reply(self):
  class ClaudeHandle (line 93) | class ClaudeHandle(Process):
    method __init__ (line 94) | def __init__(self):
    method check_dependency (line 106) | def check_dependency(self):
    method ready (line 117) | def ready(self):
    method async_run (line 120) | async def async_run(self):
    method run (line 156) | def run(self):
    method stream_chat (line 195) | def stream_chat(self, **kwargs):
  function predict_no_ui_long_connection (line 222) | def predict_no_ui_long_connection(
  function predict (line 266) | def predict(

FILE: request_llms/bridge_taichu.py
  function validate_key (line 10) | def validate_key():
  function predict_no_ui_long_connection (line 15) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 42) | def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:Cha...

FILE: request_llms/bridge_tgui.py
  function random_hash (line 17) | def random_hash():
  function run (line 21) | async def run(context, max_token, temperature, top_p, addr, port):
  function predict (line 87) | def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], syst...
  function predict_no_ui_long_connection (line 145) | def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_promp...

FILE: request_llms/bridge_zhipu.py
  function validate_key (line 10) | def validate_key():
  function make_media_input (line 15) | def make_media_input(inputs, image_paths):
  function predict_no_ui_long_connection (line 20) | def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:l...
  function predict (line 47) | def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:Cha...

FILE: request_llms/chatglmoonx.py
  function chat_template (line 42) | def chat_template(history: list[tuple[str, str]], current: str):
  function process_response (line 52) | def process_response(response: str):
  class ChatGLMModel (line 68) | class ChatGLMModel():
    method __init__ (line 70) | def __init__(self, onnx_model_path=onnx_model_path, tokenizer_path=tok...
    method prepare_input (line 78) | def prepare_input(self, prompt: str):
    method sample_next_token (line 87) | def sample_next_token(self, logits: np.ndarray, top_k=50, top_p=0.7, t...
    method generate_iterate (line 106) | def generate_iterate(self, prompt: str, max_generated_tokens=100, top_...
  function replace_spaces_with_blank (line 156) | def replace_spaces_with_blank(match: re.Match[str]):
  function replace_blank_with_spaces (line 160) | def replace_blank_with_spaces(match: re.Match[str]):
  class ChatGLMTokenizer (line 164) | class ChatGLMTokenizer:
    method __init__ (line 165) | def __init__(self, vocab_file):
    method __len__ (line 171) | def __len__(self):
    method __getitem__ (line 174) | def __getitem__(self, key: str):
    method preprocess (line 178) | def preprocess(self, text: str, linebreak=True, whitespaces=True):
    method encode (line 187) | def encode(
    method decode (line 222) | def decode(self, text_ids: list[int]) -> str:

FILE: request_llms/com_google.py
  function files_filter_handler (line 28) | def files_filter_handler(file_list):
  function input_encode_handler (line 51) | def input_encode_handler(inputs, llm_kwargs):
  function file_manifest_filter_html (line 62) | def file_manifest_filter_html(file_list, filter_: list = None, md_type=F...
  function link_mtime_to_md (line 88) | def link_mtime_to_md(file):
  function html_local_file (line 95) | def html_local_file(file):
  function html_local_img (line 102) | def html_local_img(__file, layout="left", max_width=None, max_height=Non...
  function reverse_base64_from_input (line 115) | def reverse_base64_from_input(inputs):
  function contain_base64 (line 120) | def contain_base64(inputs):
  class GoogleChatInit (line 124) | class GoogleChatInit:
    method __init__ (line 125) | def __init__(self, llm_kwargs):
    method generate_chat (line 130) | def generate_chat(self, inputs, llm_kwargs, history, system_prompt, im...
    method __conversation_user (line 144) | def __conversation_user(self, user_input, llm_kwargs, enable_multimoda...
    method __conversation_history (line 167) | def __conversation_history(self, history, llm_kwargs, enable_multimoda...
    method generate_message_payload (line 181) | def generate_message_payload(

FILE: request_llms/com_qwenapi.py
  class QwenRequestInstance (line 8) | class QwenRequestInstance():
    method __init__ (line 9) | def __init__(self):
    method format_reasoning (line 24) | def format_reasoning(self, reasoning_content:str, main_content:str):
    method generate (line 32) | def generate(self, inputs, llm_kwargs, history, system_prompt):
  function generate_message_payload (line 89) | def generate_message_payload(inputs, llm_kwargs, history, system_prompt):

FILE: request_llms/com_skylark2api.py
  class YUNQUERequestInstance (line 10) | class YUNQUERequestInstance():
    method __init__ (line 11) | def __init__(self):
    method generate (line 18) | def generate(self, inputs, llm_kwargs, history, system_prompt):
    method generate_message_payload (line 69) | def generate_message_payload(inputs, llm_kwargs, history, system_prompt):

FILE: request_llms/com_sparkapi.py
  class Ws_Param (line 19) | class Ws_Param(object):
    method __init__ (line 21) | def __init__(self, APPID, APIKey, APISecret, gpt_url):
    method create_url (line 30) | def create_url(self):
  class SparkRequestInstance (line 59) | class SparkRequestInstance():
    method __init__ (line 60) | def __init__(self):
    method generate (line 78) | def generate(self, inputs, llm_kwargs, history, system_prompt, use_ima...
    method create_blocking_request (line 92) | def create_blocking_request(self, inputs, llm_kwargs, history, system_...
  function generate_message_payload (line 158) | def generate_message_payload(inputs, llm_kwargs, history, system_prompt,...
  function gen_params (line 192) | def gen_params(appid, inputs, llm_kwargs, history, system_prompt, file_m...

FILE: request_llms/com_taichu.py
  class TaichuChatInit (line 9) | class TaichuChatInit:
    method __init__ (line 10) | def __init__(self): ...
    method __conversation_user (line 12) | def __conversation_user(self, user_input: str, llm_kwargs:dict):
    method __conversation_history (line 15) | def __conversation_history(self, history:list, llm_kwargs:dict):
    method generate_chat (line 29) | def generate_chat(self, inputs:str, llm_kwargs:dict, history:list, sys...

FILE: request_llms/com_zhipuglm.py
  function input_encode_handler (line 12) | def input_encode_handler(inputs:str, llm_kwargs:dict):
  class ZhipuChatInit (line 23) | class ZhipuChatInit:
    method __init__ (line 25) | def __init__(self):
    method __conversation_user (line 32) | def __conversation_user(self, user_input: str, llm_kwargs:dict):
    method __conversation_history (line 50) | def __conversation_history(self, history:list, llm_kwargs:dict):
    method preprocess_param (line 65) | def preprocess_param(param, default=0.95, min_val=0.01, max_val=0.99):
    method __conversation_message_payload (line 79) | def __conversation_message_payload(self, inputs:str, llm_kwargs:dict, ...
    method generate_chat (line 124) | def generate_chat(self, inputs:str, llm_kwargs:dict, history:list, sys...

FILE: request_llms/edge_gpt_free.py
  class NotAllowedToAccess (line 98) | class NotAllowedToAccess(Exception):
  class ConversationStyle (line 102) | class ConversationStyle(Enum):
  function _append_identifier (line 161) | def _append_identifier(msg: dict) -> str:
  function _get_ran_hex (line 169) | def _get_ran_hex(length: int = 32) -> str:
  class _ChatHubRequest (line 176) | class _ChatHubRequest:
    method __init__ (line 181) | def __init__(
    method update (line 195) | def update(
  class _Conversation (line 290) | class _Conversation:
    method __init__ (line 295) | def __init__(
    method create (line 352) | async def create(
  class _ChatHub (line 413) | class _ChatHub:
    method __init__ (line 418) | def __init__(
    method ask_stream (line 437) | async def ask_stream(
    method _initial_handshake (line 608) | async def _initial_handshake(self) -> None:
    method close (line 612) | async def close(self) -> None:
  class Chatbot (line 622) | class Chatbot:
    method __init__ (line 627) | def __init__(
    method create (line 640) | async def create(
    method ask (line 653) | async def ask(
    method ask_stream (line 678) | async def ask_stream(
    method close (line 702) | async def close(self) -> None:
    method reset (line 708) | async def reset(self) -> None:
  function _get_input_async (line 720) | async def _get_input_async(
  function _create_session (line 734) | def _create_session() -> PromptSession:
  function _create_completer (line 754) | def _create_completer(commands: list, pattern_str: str = "$"):
  function async_main (line 758) | async def async_main(args: argparse.Namespace) -> None:
  function main (line 844) | def main() -> None:
  class Cookie (line 894) | class Cookie:
    method fetch_default (line 908) | def fetch_default(cls, path=None):
    method files (line 930) | def files(cls):
    method import_data (line 936) | def import_data(cls):
    method import_next (line 958) | def import_next(cls):
  class Query (line 969) | class Query:
    method __init__ (line 975) | def __init__(
    method log_and_send_query (line 1023) | def log_and_send_query(self, echo, echo_prompt):
    method create_image (line 1031) | def create_image(self):
    method send_to_bing (line 1038) | async def send_to_bing(self, echo=True, echo_prompt=False):
    method output (line 1067) | def output(self):
    method sources (line 1072) | def sources(self):
    method sources_dict (line 1077) | def sources_dict(self):
    method code (line 1090) | def code(self):
    method languages (line 1097) | def languages(self):
    method suggestions (line 1103) | def suggestions(self):
    method __repr__ (line 1110) | def __repr__(self):
    method __str__ (line 1113) | def __str__(self):
  class ImageQuery (line 1117) | class ImageQuery(Query):
    method __init__ (line 1118) | def __init__(self, prompt, **kwargs):
    method __repr__ (line 1122) | def __repr__(self):

FILE: request_llms/embed_models/bge_llm.py
  class BGELLMRanker (line 10) | class BGELLMRanker:
    method __init__ (line 12) | def __init__(self, llm_kwargs):
    method is_paper_relevant (line 15) | def is_paper_relevant(self, query: str, paper_text: str) -> bool:
    method batch_check_relevance (line 59) | def batch_check_relevance(self, query: str, paper_texts: List[str], sh...
  function main (line 95) | def main():

FILE: request_llms/embed_models/openai_embed.py
  function mean_agg (line 10) | def mean_agg(embeddings):
  class EmbeddingModel (line 14) | class EmbeddingModel():
    method get_agg_embedding_from_queries (line 16) | def get_agg_embedding_from_queries(
    method get_text_embedding_batch (line 26) | def get_text_embedding_batch(
  class OpenAiEmbeddingModel (line 34) | class OpenAiEmbeddingModel(EmbeddingModel):
    method __init__ (line 36) | def __init__(self, llm_kwargs:dict=None):
    method get_query_embedding (line 39) | def get_query_embedding(self, query: str):
    method compute_embedding (line 42) | def compute_embedding(self, text="这是要计算嵌入的文本", llm_kwargs:dict=None, b...
    method embedding_dimension (line 74) | def embedding_dimension(self, llm_kwargs=None):

FILE: request_llms/key_manager.py
  function Singleton (line 3) | def Singleton(cls):
  class OpenAI_ApiKeyManager (line 15) | class OpenAI_ApiKeyManager():
    method __init__ (line 16) | def __init__(self, mode='blacklist') -> None:
    method add_key_to_blacklist (line 20) | def add_key_to_blacklist(self, key):
    method select_avail_key (line 23) | def select_avail_key(self, key_list):

FILE: request_llms/local_llm_class.py
  class ThreadLock (line 10) | class ThreadLock(object):
    method __init__ (line 11) | def __init__(self):
    method acquire (line 14) | def acquire(self):
    method release (line 20) | def release(self):
    method __enter__ (line 25) | def __enter__(self):
    method __exit__ (line 28) | def __exit__(self, type, value, traceback):
  class GetSingletonHandle (line 32) | class GetSingletonHandle():
    method __init__ (line 33) | def __init__(self):
    method get_llm_model_instance (line 36) | def get_llm_model_instance(self, cls, *args, **kargs):
  function reset_tqdm_output (line 46) | def reset_tqdm_output():
  class LocalLLMHandle (line 67) | class LocalLLMHandle(Process):
    method __init__ (line 68) | def __init__(self):
    method get_state (line 88) | def get_state(self):
    method set_state (line 94) | def set_state(self, new_state):
    method load_model_info (line 101) | def load_model_info(self):
    method load_model_and_tokenizer (line 107) | def load_model_and_tokenizer(self):
    method llm_stream_generator (line 114) | def llm_stream_generator(self, **kwargs):
    method try_to_import_special_deps (line 118) | def try_to_import_special_deps(self, **kwargs):
    method check_dependency (line 125) | def check_dependency(self):
    method run (line 135) | def run(self):
    method clear_pending_messages (line 171) | def clear_pending_messages(self):
    method stream_chat (line 185) | def stream_chat(self, **kwargs):
  function get_local_llm_predict_fns (line 216) | def get_local_llm_predict_fns(LLMSingletonClass, model_name, history_for...

FILE: request_llms/oai_std_model_template.py
  function get_full_error (line 19) | def get_full_error(chunk, stream_response):
  function decode_chunk (line 31) | def decode_chunk(chunk):
  function generate_message (line 79) | def generate_message(input, model, key, history, max_output_token, syste...
  function get_predict_function (line 122) | def get_predict_function(

FILE: request_llms/queued_pipe.py
  class PipeSide (line 5) | class PipeSide(object):
    method __init__ (line 6) | def __init__(self, q_2remote, q_2local) -> None:
    method recv (line 10) | def recv(self):
    method send (line 13) | def send(self, buf):
    method poll (line 16) | def poll(self):
  function create_queue_pipe (line 19) | def create_queue_pipe():

FILE: shared_utils/advanced_markdown_format.py
  function tex2mathml_catch_exception (line 63) | def tex2mathml_catch_exception(content, *args, **kwargs):
  function replace_math_no_render (line 71) | def replace_math_no_render(match):
  function replace_math_render (line 80) | def replace_math_render(match):
  function markdown_bug_hunt (line 93) | def markdown_bug_hunt(content):
  function is_equation (line 105) | def is_equation(txt):
  function fix_markdown_indent (line 133) | def fix_markdown_indent(txt):
  function get_line_range (line 170) | def get_line_range(re_match_obj, txt):
  function fix_code_segment_indent (line 178) | def fix_code_segment_indent(txt):
  function fix_dollar_sticking_bug (line 217) | def fix_dollar_sticking_bug(txt):
  function markdown_convertion_for_file (line 272) | def markdown_convertion_for_file(txt):
  function compress_string (line 329) | def compress_string(s):
  function decompress_string (line 333) | def decompress_string(s):
  function markdown_convertion (line 338) | def markdown_convertion(txt):
  function code_block_title_replace_format (line 403) | def code_block_title_replace_format(match):
  function get_last_backticks_indent (line 409) | def get_last_backticks_indent(text):
  function close_up_code_segment_during_stream (line 421) | def close_up_code_segment_during_stream(gpt_reply):
  function special_render_issues_for_mermaid (line 456) | def special_render_issues_for_mermaid(text):
  function contain_html_tag (line 468) | def contain_html_tag(text):
  function contain_image (line 476) | def contain_image(text):
  function compat_non_markdown_input (line 481) | def compat_non_markdown_input(text):
  function simple_markdown_convertion (line 506) | def simple_markdown_convertion(text):
  function format_io (line 521) | def format_io(self, y):

FILE: shared_utils/char_visual_effect.py
  function is_full_width_char (line 1) | def is_full_width_char(ch):
  function scrolling_visual_effect (line 11) | def scrolling_visual_effect(text, scroller_max_len):

FILE: shared_utils/colorful.py
  function print红 (line 12) | def print红(*kw,**kargs):
  function print绿 (line 14) | def print绿(*kw,**kargs):
  function print黄 (line 16) | def print黄(*kw,**kargs):
  function print蓝 (line 18) | def print蓝(*kw,**kargs):
  function print紫 (line 20) | def print紫(*kw,**kargs):
  function print靛 (line 22) | def print靛(*kw,**kargs):
  function print亮红 (line 25) | def print亮红(*kw,**kargs):
  function print亮绿 (line 27) | def print亮绿(*kw,**kargs):
  function print亮黄 (line 29) | def print亮黄(*kw,**kargs):
  function print亮蓝 (line 31) | def print亮蓝(*kw,**kargs):
  function print亮紫 (line 33) | def print亮紫(*kw,**kargs):
  function print亮靛 (line 35) | def print亮靛(*kw,**kargs):
  function sprint红 (line 39) | def sprint红(*kw):
  function sprint绿 (line 41) | def sprint绿(*kw):
  function sprint黄 (line 43) | def sprint黄(*kw):
  function sprint蓝 (line 45) | def sprint蓝(*kw):
  function sprint紫 (line 47) | def sprint紫(*kw):
  function sprint靛 (line 49) | def sprint靛(*kw):
  function sprint亮红 (line 51) | def sprint亮红(*kw):
  function sprint亮绿 (line 53) | def sprint亮绿(*kw):
  function sprint亮黄 (line 55) | def sprint亮黄(*kw):
  function sprint亮蓝 (line 57) | def sprint亮蓝(*kw):
  function sprint亮紫 (line 59) | def sprint亮紫(*kw):
  function sprint亮靛 (line 61) | def sprint亮靛(*kw):
  function log红 (line 64) | def log红(*kw,**kargs):
  function log绿 (line 66) | def log绿(*kw,**kargs):
  function log黄 (line 68) | def log黄(*kw,**kargs):
  function log蓝 (line 70) | def log蓝(*kw,**kargs):
  function log紫 (line 72) | def log紫(*kw,**kargs):
  function log靛 (line 74) | def log靛(*kw,**kargs):
  function log亮红 (line 77) | def log亮红(*kw,**kargs):
  function log亮绿 (line 79) | def log亮绿(*kw,**kargs):
  function log亮黄 (line 81) | def log亮黄(*kw,**kargs):
  function log亮蓝 (line 83) | def log亮蓝(*kw,**kargs):
  function log亮紫 (line 85) | def log亮紫(*kw,**kargs):
  function log亮靛 (line 87) | def log亮靛(*kw,**kargs):

FILE: shared_utils/config_loader.py
  function read_env_variable (line 10) | def read_env_variable(arg, default_value):
  function read_single_conf_with_lru_cache (line 65) | def read_single_conf_with_lru_cache(arg):
  function get_conf (line 103) | def get_conf(*args):
  function set_conf (line 120) | def set_conf(key, value):
  function set_multi_conf (line 129) | def set_multi_conf(dic):

FILE: shared_utils/connect_void_terminal.py
  function get_plugin_handle (line 16) | def get_plugin_handle(plugin_name):
  function get_chat_handle (line 30) | def get_chat_handle():
  function get_plugin_default_kwargs (line 39) | def get_plugin_default_kwargs():
  function get_chat_default_kwargs (line 68) | def get_chat_default_kwargs():

FILE: shared_utils/context_clip_policy.py
  function get_token_num (line 4) | def get_token_num(txt, tokenizer):
  function get_model_info (line 7) | def get_model_info():
  function clip_history (line 11) | def clip_history(inputs, history, tokenizer, max_token_limit):
  function auto_context_clip_each_message (line 72) | def auto_context_clip_each_message(current, history):
  function auto_context_clip_search_optimal (line 168) | def auto_context_clip_search_optimal(current, history, promote_latest_lo...

FILE: shared_utils/cookie_manager.py
  function load_web_cookie_cache__fn_builder (line 5) | def load_web_cookie_cache__fn_builder(customize_btns, cookies, predefine...
  function assign_btn__fn_builder (line 28) | def assign_btn__fn_builder(customize_btns, predefined_btns, cookies, web...
  function make_cookie_cache (line 65) | def make_cookie_cache():
  function make_history_cache (line 76) | def make_history_cache():
  function create_button_with_javascript_callback (line 105) | def create_button_with_javascript_callback(btn_value, elem_id, variant, ...

FILE: shared_utils/doc_loader_dynamic.py
  function start_with_url (line 5) | def start_with_url(inputs:str):
  function load_web_content (line 22) | def load_web_content(inputs:str, chatbot_with_cookie, history:list):
  function extract_file_path (line 45) | def extract_file_path(text):
  function contain_uploaded_files (line 53) | def contain_uploaded_files(inputs: str):
  function load_uploaded_files (line 60) | def load_uploaded_files(inputs, method, llm_kwargs, plugin_kwargs, chatb...

FILE: shared_utils/docker_as_service_api.py
  class DockerServiceApiComModel (line 9) | class DockerServiceApiComModel(BaseModel):
  function process_received (line 17) | def process_received(received: DockerServiceApiComModel, save_file_dir="...
  function stream_daas (line 40) | def stream_daas(docker_service_api_com_model, server_url, save_file_dir):

FILE: shared_utils/fastapi_server.py
  function validate_path_safety (line 50) | def validate_path_safety(path_or_url, user):
  function _authorize_user (line 72) | def _authorize_user(path_or_url, request, gradio_app):
  class Server (line 93) | class Server(uvicorn.Server):
    method install_signal_handlers (line 95) | def install_signal_handlers(self):
    method run_in_thread (line 98) | def run_in_thread(self):
    method close (line 104) | def close(self):
  function start_app (line 109) | def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEY...

FILE: shared_utils/fastapi_stream_server.py
  class UserInterfaceMsg (line 43) | class UserInterfaceMsg(BaseModel):
  function setup_initial_com (line 72) | def setup_initial_com(initial_msg: UserInterfaceMsg):
  class DummyRequest (line 120) | class DummyRequest(object):
    method __call__ (line 121) | def __call__(self, username):
  function task_executor (line 125) | def task_executor(initial_msg:UserInterfaceMsg, queue_blocking_from_clie...
  class FutureEvent (line 212) | class FutureEvent(threading.Event):
    method __init__ (line 219) | def __init__(self) -> None:
    method terminate (line 223) | def terminate(self, return_value):
    method wait_and_get_result (line 233) | def wait_and_get_result(self):
  class AtomicQueue (line 244) | class AtomicQueue:
    method __init__ (line 246) | def __init__(self):
    method put (line 249) | def put(self, item):
    method put_nowait (line 252) | def put_nowait(self, item):
    method get (line 255) | def get(self, timeout=600):
  class PythonMethod_AsyncConnectionMaintainer_AgentcraftInterface (line 264) | class PythonMethod_AsyncConnectionMaintainer_AgentcraftInterface():
    method make_queue (line 272) | def make_queue(self):
    method maintain_connection_forever (line 290) | async def maintain_connection_forever(self, initial_msg: UserInterface...
  class MasterMindWebSocketServer (line 424) | class MasterMindWebSocketServer(PythonMethod_AsyncConnectionMaintainer_A...
    method __init__ (line 432) | def __init__(self, host, port) -> None:
    method create_event (line 447) | def create_event(self, event_name: str):
    method terminate_event (line 460) | def terminate_event(self, event_name: str, msg:UserInterfaceMsg):
    method long_task_01_wait_incoming_connection (line 471) | async def long_task_01_wait_incoming_connection(self):

FILE: shared_utils/handle_upload.py
  function html_local_file (line 13) | def html_local_file(file):
  function html_local_img (line 20) | def html_local_img(__file, layout="left", max_width=None, max_height=Non...
  function file_manifest_filter_type (line 33) | def file_manifest_filter_type(file_list, filter_: list = None):
  function zip_extract_member_new (line 45) | def zip_extract_member_new(self, member, targetpath, pwd):
  function safe_extract_rar (line 92) | def safe_extract_rar(file_path, dest_dir):
  function extract_archive (line 117) | def extract_archive(file_path, dest_dir):

FILE: shared_utils/key_pattern_manager.py
  function is_openai_api_key (line 20) | def is_openai_api_key(key):
  function is_azure_api_key (line 29) | def is_azure_api_key(key):
  function is_api2d_key (line 34) | def is_api2d_key(key):
  function is_openroute_api_key (line 38) | def is_openroute_api_key(key):
  function is_cohere_api_key (line 42) | def is_cohere_api_key(key):
  function is_any_api_key (line 47) | def is_any_api_key(key):
  function what_keys (line 64) | def what_keys(keys):
  function is_o_family_for_openai (line 82) | def is_o_family_for_openai(llm_model):
  function select_api_key (line 91) | def select_api_key(keys, llm_model):
  function select_api_key_for_embed_models (line 124) | def select_api_key_for_embed_models(keys, llm_model):

FILE: shared_utils/logging.py
  function chat_log_filter (line 6) | def chat_log_filter(record):
  function not_chat_log_filter (line 9) | def not_chat_log_filter(record):
  function formatter_with_clip (line 12) | def formatter_with_clip(record):
  function setup_logging (line 22) | def setup_logging(PATH_LOGGING):

FILE: shared_utils/map_names.py
  function map_model_to_friendly_names (line 13) | def map_model_to_friendly_names(m):
  function map_friendly_names_to_model (line 18) | def map_friendly_names_to_model(m):
  function read_one_api_model_name (line 23) | def read_one_api_model_name(model: str):

FILE: shared_utils/nltk_downloader.py
  class Package (line 199) | class Package:
    method __init__ (line 208) | def __init__(
    method fromxml (line 278) | def fromxml(xml):
    method __lt__ (line 285) | def __lt__(self, other):
    method __repr__ (line 288) | def __repr__(self):
  class Collection (line 292) | class Collection:
    method __init__ (line 299) | def __init__(self, id, children, name=None, **kw):
    method fromxml (line 318) | def fromxml(xml):
    method __lt__ (line 326) | def __lt__(self, other):
    method __repr__ (line 329) | def __repr__(self):
  class DownloaderMessage (line 338) | class DownloaderMessage:
  class StartCollectionMessage (line 343) | class StartCollectionMessage(DownloaderMessage):
    method __init__ (line 346) | def __init__(self, collection):
  class FinishCollectionMessage (line 350) | class FinishCollectionMessage(DownloaderMessage):
    method __init__ (line 353) | def __init__(self, collection):
  class StartPackageMessage (line 357) | class StartPackageMessage(DownloaderMessage):
    method __init__ (line 360) | def __init__(self, package):
  class FinishPackageMessage (line 364) | class FinishPackageMessage(DownloaderMessage):
    method __init__ (line 367) | def __init__(self, package):
  class StartDownloadMessage (line 371) | class StartDownloadMessage(DownloaderMessage):
    method __init__ (line 374) | def __init__(self, package):
  class FinishDownloadMessage (line 378) | class FinishDownloadMessage(DownloaderMessage):
    method __init__ (line 381) | def __init__(self, package):
  class StartUnzipMessage (line 385) | class StartUnzipMessage(DownloaderMessage):
    method __init__ (line 388) | def __init__(self, package):
  class FinishUnzipMessage (line 392) | class FinishUnzipMessage(DownloaderMessage):
    method __init__ (line 395) | def __init__(self, package):
  class UpToDateMessage (line 399) | class UpToDateMessage(DownloaderMessage):
    method __init__ (line 402) | def __init__(self, package):
  class StaleMessage (line 406) | class StaleMessage(DownloaderMessage):
    method __init__ (line 409) | def __init__(self, package):
  class ErrorMessage (line 413) | class ErrorMessage(DownloaderMessage):
    method __init__ (line 416) | def __init__(self, package, message):
  class ProgressMessage (line 424) | class ProgressMessage(DownloaderMessage):
    method __init__ (line 427) | def __init__(self, progress):
  class SelectDownloadDirMessage (line 431) | class SelectDownloadDirMessage(DownloaderMessage):
    method __init__ (line 434) | def __init__(self, download_dir):
  class Downloader (line 443) | class Downloader:
    method __init__ (line 484) | def __init__(self, server_index_url=None, download_dir=None):
    method list (line 521) | def list(
    method packages (line 581) | def packages(self):
    method corpora (line 585) | def corpora(self):
    method models (line 589) | def models(self):
    method collections (line 593) | def collections(self):
    method _info_or_id (line 601) | def _info_or_id(self, info_or_id):
    method incr_download (line 616) | def incr_download(self, info_or_id, download_dir=None, force=False):
    method _num_packages (line 644) | def _num_packages(self, item):
    method _download_list (line 650) | def _download_list(self, items, download_dir, force):
    method _download_package (line 675) | def _download_package(self, info, download_dir, force):
    method download (line 744) | def download(
    method is_stale (line 841) | def is_stale(self, info_or_id, download_dir=None):
    method is_installed (line 844) | def is_installed(self, info_or_id, download_dir=None):
    method clear_status_cache (line 847) | def clear_status_cache(self, id=None):
    method status (line 853) | def status(self, info_or_id, download_dir=None):
    method _pkg_status (line 887) | def _pkg_status(self, info, filepath):
    method update (line 923) | def update(self, quiet=False, prefix="[nltk_data] "):
    method _update_index (line 936) | def _update_index(self, url=None):
    method index (line 999) | def index(self):
    method info (line 1008) | def info(self, id):
    method xmlinfo (line 1018) | def xmlinfo(self, id):
    method _get_url (line 1033) | def _get_url(self):
    method _set_url (line 1037) | def _set_url(self, url):
    method default_download_dir (line 1051) | def default_download_dir(self):
    method _get_download_dir (line 1090) | def _get_download_dir(self):
    method _set_download_dir (line 1099) | def _set_download_dir(self, download_dir):
    method _interactive_download (line 1110) | def _interactive_download(self):
  class DownloaderShell (line 1122) | class DownloaderShell:
    method __init__ (line 1123) | def __init__(self, dataserver):
    method _simple_interactive_menu (line 1126) | def _simple_interactive_menu(self, *options):
    method run (line 1132) | def run(self):
    method _simple_interactive_download (line 1173) | def _simple_interactive_download(self, args):
    method _simple_interactive_update (line 1203) | def _simple_interactive_update(self):
    method _simple_interactive_help (line 1235) | def _simple_interactive_help(self):
    method _show_config (line 1244) | def _show_config(self):
    method _simple_interactive_config (line 1254) | def _simple_interactive_config(self):
  class DownloaderGUI (line 1287) | class DownloaderGUI:
    method __init__ (line 1377) | def __init__(self, dataserver, use_threads=True):
    method _log (line 1426) | def _log(self, msg):
    method _init_widgets (line 1435) | def _init_widgets(self):
    method _init_menu (line 1546) | def _init_menu(self):
    method _select_columns (line 1615) | def _select_columns(self):
    method _refresh (line 1622) | def _refresh(self):
    method _info_edit (line 1632) | def _info_edit(self, info_key):
    method _info_save (line 1639) | def _info_save(self, e=None):
    method _table_reprfunc (line 1652) | def _table_reprfunc(self, row, col, val):
    method _set_url (line 1668) | def _set_url(self, url):
    method _set_download_dir (line 1678) | def _set_download_dir(self, download_dir):
    method _show_info (line 1693) | def _show_info(self):
    method _prev_tab (line 1703) | def _prev_tab(self, *e):
    method _next_tab (line 1714) | def _next_tab(self, *e):
    method _select_tab (line 1725) | def _select_tab(self, event):
    method _fill_table (line 1738) | def _fill_table(self):
    method _update_table_status (line 1779) | def _update_table_status(self):
    method _download (line 1785) | def _download(self, *e):
    method _download_cb (line 1806) | def _download_cb(self, download_iter, ids):
    method _select (line 1850) | def _select(self, id):
    method _color_table (line 1856) | def _color_table(self):
    method _clear_mark (line 1873) | def _clear_mark(self, id):
    method _mark_all (line 1878) | def _mark_all(self, *e):
    method _table_mark (line 1882) | def _table_mark(self, *e):
    method _show_log (line 1891) | def _show_log(self):
    method _package_to_columns (line 1895) | def _package_to_columns(self, pkg):
    method destroy (line 1917) | def destroy(self, *e):
    method _destroy (line 1923) | def _destroy(self, *e):
    method mainloop (line 1937) | def mainloop(self, *args, **kwargs):
    method help (line 1973) | def help(self, *e):
    method about (line 1986) | def about(self, *e):
    method _init_progressbar (line 2002) | def _init_progressbar(self):
    method _show_progress (line 2022) | def _show_progress(self, percent):
    method _progress_alive (line 2032) | def _progress_alive(self):
    method _download_threaded (line 2050) | def _download_threaded(self, *e):
    method _abort_download (line 2094) | def _abort_download(self):
    class _DownloadThread (line 2100) | class _DownloadThread(threading.Thread):
      method __init__ (line 2101) | def __init__(self, data_server, items, lock, message_queue, abort):
      method run (line 2109) | def run(self):
    method _monitor_message_queue (line 2125) | def _monitor_message_queue(self):
  function md5_hexdigest (line 2208) | def md5_hexdigest(file):
  function _md5_hexdigest (line 2219) | def _md5_hexdigest(fp):
  function unzip (line 2232) | def unzip(filename, root, verbose=True):
  function _unzip_iter (line 2242) | def _unzip_iter(filename, root, verbose=True):
  function build_index (line 2268) | def build_index(root, base_url):
  function _indent_xml (line 2338) | def _indent_xml(xml, prefix=""):
  function _check_package (line 2354) | def _check_package(pkg_xml, zipfilename, zf):
  function _svn_revision (line 2375) | def _svn_revision(filename):
  function _find_collections (line 2394) | def _find_collections(root):
  function _find_packages (line 2406) | def _find_packages(root):
  function download_shell (line 2485) | def download_shell():
  function download_gui (line 2489) | def download_gui():
  function update (line 2493) | def update():

FILE: shared_utils/text_mask.py
  function apply_gpt_academic_string_mask (line 24) | def apply_gpt_academic_string_mask(string, mode="show_all"):
  function build_gpt_academic_masked_string (line 46) | def build_gpt_academic_masked_string(text_show_llm="", text_show_render=...
  function apply_gpt_academic_string_mask_langbased (line 54) | def apply_gpt_academic_string_mask_langbased(string, lang_reference):
  function build_gpt_academic_masked_string_langbased (line 90) | def build_gpt_academic_masked_string_langbased(text_show_english="", tex...

FILE: tests/init_test.py
  function validate_path (line 1) | def validate_path():

FILE: tests/test_embed.py
  function validate_path (line 1) | def validate_path():

FILE: tests/test_key_pattern_manager.py
  function validate_path (line 3) | def validate_path():
  class TestKeyPatternManager (line 16) | class TestKeyPatternManager(unittest.TestCase):
    method test_is_openai_api_key_with_valid_key (line 17) | def test_is_openai_api_key_with_valid_key(self):
    method test_is_openai_api_key_with_invalid_key (line 53) | def test_is_openai_api_key_with_invalid_key(self):
    method test_is_openai_api_key_with_custom_pattern (line 57) | def test_is_openai_api_key_with_custom_pattern(self):

FILE: tests/test_latex_auto_correct.py
  function validate_path (line 9) | def validate_path():

FILE: tests/test_llms.py
  function validate_path (line 4) | def validate_path():

FILE: tests/test_markdown.py
  function validate_path (line 205) | def validate_path():

FILE: tests/test_markdown_format.py
  function preprocess_newbing_out (line 11) | def preprocess_newbing_out(s):
  function close_up_code_segment_during_stream (line 34) | def close_up_code_segment_during_stream(gpt_reply):
  function markdown_convertion (line 64) | def markdown_convertion(txt):

FILE: tests/test_python_auto_docstring.py
  class ContextWindowManager (line 116) | class ContextWindowManager():
    method __init__ (line 118) | def __init__(self, llm_kwargs) -> None:
    method generate_tagged_code_from_full_context (line 126) | def generate_tagged_code_from_full_context(self):
    method read_file (line 134) | def read_file(self, path):
    method find_next_function_begin (line 140) | def find_next_function_begin(self, tagged_code:list, begin_and_end):
    method _get_next_window (line 168) | def _get_next_window(self):
    method dedent (line 194) | def dedent(self, text):
    method get_next_batch (line 236) | def get_next_batch(self):
    method tag_code (line 240) | def tag_code(self, fn):
    method sync_and_patch (line 269) | def sync_and_patch(self, original, revised):

FILE: tests/test_safe_pickle.py
  function validate_path (line 1) | def validate_path():

FILE: tests/test_save_chat_to_html.py
  function validate_path (line 1) | def validate_path():
  function write_chat_to_file (line 9) | def write_chat_to_file(chatbot, history=None, file_name=None):

FILE: tests/test_searxng.py
  function validate_path (line 1) | def validate_path():
  function searxng_request (line 12) | def searxng_request(query, proxies, categories='general', searxng_url=No...

FILE: tests/test_tts.py
  function test_tts (line 7) | async def test_tts():

FILE: tests/test_utils.py
  function chat_to_markdown_str (line 13) | def chat_to_markdown_str(chat):
  function silence_stdout (line 22) | def silence_stdout(func):
  function silence_stdout_fn (line 39) | def silence_stdout_fn(func):
  class VoidTerminal (line 53) | class VoidTerminal:
    method __init__ (line 54) | def __init__(self) -> None:
  function plugin_test (line 88) | def plugin_test(main_input, plugin, advanced_arg=None, debug=True):

FILE: tests/test_vector_plugins.py
  function validate_path (line 9) | def validate_path():

FILE: themes/common.js
  function push_data_to_gradio_component (line 8) | function push_data_to_gradio_component(DAT, ELEM_ID, TYPE) {
  function get_gradio_component (line 33) | async function get_gradio_component(ELEM_ID) {
  function get_data_from_gradio_component (line 50) | async function get_data_from_gradio_component(ELEM_ID) {
  function update_array (line 56) | function update_array(arr, item, mode) {
  function gradioApp (line 81) | function gradioApp() {
  function setCookie (line 94) | function setCookie(name, value, days) {
  function getCookie (line 107) | function getCookie(name) {
  function toast_push (line 124) | function toast_push(msg, duration) {
  function toast_up (line 146) | function toast_up(msg) {
  function toast_down (line 159) | function toast_down() {
  function begin_loading_status (line 167) | function begin_loading_status() {
  function cancel_loading_status (line 237) | function cancel_loading_status() {
  function decompressString (line 263) | function decompressString(compressedString) {
  function addCopyButton (line 283) | function addCopyButton(botElement, index, is_last_in_arr) {
  function do_something_but_not_too_frequently (line 379) | function do_something_but_not_too_frequently(min_interval, func) {
  function chatbotContentChanged (line 403) | function chatbotContentChanged(attempt = 1, force = false) {
  function addStopButton (line 427) | function addStopButton(botElement, index, is_last_in_arr) {
  function chatbotAutoHeight (line 505) | function chatbotAutoHeight() {
  function swap_input_area (line 538) | function swap_input_area() {
  function get_elements (line 557) | function get_elements(consider_state_panel = false) {
  function locate_upload_elems (line 619) | function locate_upload_elems() {
  function upload_files (line 629) | async function upload_files(files) {
  function register_func_paste (line 655) | function register_func_paste(input) {
  function register_func_drag (line 682) | function register_func_drag(elem) {
  function elem_upload_component_pop_message (line 719) | function elem_upload_component_pop_message(elem) {
  function register_upload_event (line 749) | function register_upload_event() {
  function monitoring_input_box (line 772) | function monitoring_input_box() {
  function audio_fn_init (line 805) | function audio_fn_init() {
  function minor_ui_adjustment (line 842) | function minor_ui_adjustment() {
  function ButtonWithDropdown_init (line 914) | function ButtonWithDropdown_init() {
  function limit_scroll_position (line 936) | function limit_scroll_position() {
  function loadLive2D (line 968) | function loadLive2D() {
  function get_checkbox_selected_items (line 1009) | function get_checkbox_selected_items(elem_id) {
  function gpt_academic_gradio_saveload (line 1024) | function gpt_academic_gradio_saveload(
  function generateUUID (line 1051) | function generateUUID() {
  function update_conversation_metadata (line 1067) | function update_conversation_metadata() {
  function generatePreview (line 1085) | function generatePreview(conversation, timestamp, maxLength = 100) {
  function save_conversation_history (line 1095) | async function save_conversation_history() {
  function restore_chat_from_local_storage (line 1169) | function restore_chat_from_local_storage(event) {
  function clear_conversation (line 1182) | async function clear_conversation(a, b, c) {
  function reset_conversation (line 1200) | function reset_conversation(a, b) {
  function on_plugin_exe_complete (line 1205) | async function on_plugin_exe_complete(fn_name) {
  function generate_menu (line 1233) | async function generate_menu(guiBase64String, btnName) {
  function execute_current_pop_up_plugin (line 1317) | async function execute_current_pop_up_plugin() {
  function hide_all_elem (line 1363) | function hide_all_elem() {
  function close_current_pop_up_plugin (line 1384) | function close_current_pop_up_plugin() {
  function register_plugin_init (line 1396) | function register_plugin_init(key, base64String) {
  function register_advanced_plugin_init_code (line 1412) | function register_advanced_plugin_init_code(key, code) {
  function run_advanced_plugin_launch_code (line 1420) | function run_advanced_plugin_launch_code(key) {
  function on_flex_button_click (line 1424) | function on_flex_button_click(key) {
  function run_dropdown_shift (line 1431) | async function run_dropdown_shift(dropdown) {
  function duplicate_in_new_window (line 1455) | async function duplicate_in_new_window() {
  function run_classic_plugin_via_id (line 1462) | async function run_classic_plugin_via_id(plugin_elem_id) {
  function call_plugin_via_name (line 1475) | async function call_plugin_via_name(current_btn_name) {
  function click_real_submit_btn (line 1500) | async function click_real_submit_btn() {
  function multiplex_function_begin (line 1503) | async function multiplex_function_begin(multiplex_sel) {
  function run_multiplex_shift (line 1512) | async function run_multiplex_shift(multiplex_sel) {
  function persistent_cookie_init (line 1526) | async function persistent_cookie_init(web_cookie_cache, cookie) {

FILE: themes/common.py
  function inject_mutex_button_code (line 5) | def inject_mutex_button_code(js_content):
  function minimize_js (line 24) | def minimize_js(common_js_path):
  function get_common_html_javascript_code (line 49) | def get_common_html_javascript_code():

FILE: themes/contrast.py
  function adjust_theme (line 10) | def adjust_theme():

FILE: themes/default.py
  function adjust_theme (line 10) | def adjust_theme():

FILE: themes/gradios.py
  function dynamic_set_theme (line 10) | def dynamic_set_theme(THEME):
  function adjust_theme (line 25) | def adjust_theme():

FILE: themes/green.js
  function set_elements (line 8) | function set_elements() {
  function setSlider (line 21) | function setSlider() {

FILE: themes/green.py
  function adjust_theme (line 10) | def adjust_theme():

FILE: themes/gui_advanced_plugin_class.py
  function define_gui_advanced_plugin_class (line 5) | def define_gui_advanced_plugin_class(plugins):

FILE: themes/gui_floating_menu.py
  function define_gui_floating_menu (line 3) | def define_gui_floating_menu(customize_btns, functional, predefined_btns...

FILE: themes/gui_toolbar.py
  function define_gui_toolbar (line 4) | def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THE...

FILE: themes/init.js
  function remove_legacy_cookie (line 1) | function remove_legacy_cookie() {
  function processFontFamily (line 8) | function processFontFamily(fontfamily) {
  function checkFontAvailability (line 28) | function checkFontAvailability(fontfamily) {
  function checkFontAvailabilityV2 (line 45) | async function checkFontAvailabilityV2(fontfamily) {
  function loadFont (line 64) | function loadFont(fontfamily, fontUrl) {
  function gpt_academic_change_chatbot_font (line 80) | function gpt_academic_change_chatbot_font(fontfamily, fontsize, fontcolo...
  function footer_show_hide (line 131) | function footer_show_hide(show) {
  function GptAcademicJavaScriptInit (line 139) | async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, t...
  function apply_checkbox_change_for_group2 (line 369) | function apply_checkbox_change_for_group2(display_panel_arr) {

FILE: themes/theme.js
  function try_load_previous_theme (line 1) | async function try_load_previous_theme(){
  function change_theme (line 11) | async function change_theme(theme_selection, css) {

FILE: themes/theme.py
  function load_dynamic_theme (line 17) | def load_dynamic_theme(THEME):
  function assign_user_uuid (line 51) | def assign_user_uuid(cookies):
  function to_cookie_str (line 57) | def to_cookie_str(d):
  function from_cookie_str (line 64) | def from_cookie_str(c):

FILE: themes/tts.js
  class AudioPlayer (line 5) | class AudioPlayer {
    method constructor (line 6) | constructor() {
    method base64ToArrayBuffer (line 14) | base64ToArrayBuffer(base64) {
    method checkQueue (line 25) | checkQueue() {
    method enqueueAudio (line 34) | enqueueAudio(audio_buf_wave) {
    method play_wave (line 42) | async play_wave(encodedAudio) {
    method stop (line 68) | stop() {
  class FIFOLock (line 82) | class FIFOLock {
    method constructor (line 83) | constructor() {
    method lock (line 88) | lock() {
    method _dequeueNext (line 103) | _dequeueNext() {
    method unlock (line 113) | unlock() {
  function delay (line 126) | function delay(ms) {
  function trigger (line 131) | function trigger(T, fire) {
  function on_live_stream_terminate (line 157) | function on_live_stream_terminate(latest_text) {
  function is_continue_from_prev (line 166) | function is_continue_from_prev(text, prev_text) {
  function isEmptyOrWhitespaceOnly (line 177) | function isEmptyOrWhitespaceOnly(remaining_text) {
  function process_increased_text (line 183) | function process_increased_text(remaining_text) {
  function process_latest_text_output (line 204) | function process_latest_text_output(text, chatbot_index) {
  function push_text_to_audio (line 246) | async function push_text_to_audio(text) {
  function UpdatePlayQueue (line 278) | async function UpdatePlayQueue(cnt, audio_buf_wave) {
  function post_text (line 308) | function post_text(url, payload, cnt) {
  function postData (line 323) | async function postData(url = '', data = {}) {

FILE: themes/waifu_plugin/live2d.js
  function i (line 3) | function i(r) {
  function r (line 34) | function r() {
  function o (line 37) | function o() {
  function n (line 40) | function n() {
  function s (line 43) | function s() {
  function _ (line 46) | function _() {
  function a (line 49) | function a(t, i) {
  function h (line 52) | function h() {
  function l (line 55) | function l() {
  function $ (line 58) | function $() {
  function u (line 61) | function u(t) {
  function p (line 64) | function p() {
  function f (line 67) | function f() {
  function c (line 70) | function c() {}
  function r (line 536) | function r(t) {
  function o (line 539) | function o() {
  function r (line 549) | function r() {}
  function r (line 575) | function r(t) {
  function o (line 581) | function o(t) {
  function n (line 584) | function n(t) {
  function s (line 604) | function s() {
  function _ (line 616) | function _() {
  function a (line 631) | function a(t) {
  function h (line 634) | function h(t, i) {
  function l (line 637) | function l(t, i) {
  function $ (line 644) | function $(t, i, e) {
  function u (line 700) | function u(t) {
  function p (line 717) | function p(t) {
  function f (line 734) | function f(t) {
  function c (line 750) | function c() {
  function d (line 753) | function d() {
  function g (line 757) | function g(t) {
  function y (line 772) | function y(t) {
  function m (line 776) | function m(t) {
  function T (line 780) | function T(t) {
  function P (line 784) | function P(t) {
  function S (line 787) | function S(t) {
  function v (line 790) | function v() {
  function L (line 799) | function L(t, i, e) {
  function i (line 830) | function i() {
  function e (line 833) | function e(t) {
  function r (line 841) | function r(t, i, e) {
  function o (line 846) | function o(t, i) {
  function n (line 849) | function n() {
  function s (line 852) | function s() {
  function _ (line 855) | function _() {}
  function a (line 856) | function a() {
  function h (line 859) | function h() {
  function l (line 862) | function l(t) {
  function $ (line 865) | function $() {}
  function u (line 866) | function u(t) {
  function p (line 869) | function p() {
  function f (line 872) | function f() {
  function c (line 875) | function c() {
  function d (line 878) | function d(t, i, e) {
  function g (line 881) | function g(t, i, e, r) {
  function y (line 884) | function y(t, i, e) {
  function T (line 887) | function T(t, i, e, r) {
  function P (line 890) | function P() {
  function S (line 893) | function S() {
  function v (line 896) | function v() {}
  function L (line 897) | function L() {
  function M (line 900) | function M() {
  function E (line 903) | function E() {
  function A (line 906) | function A() {
  function I (line 909) | function I() {
  function w (line 912) | function w() {}
  function x (line 913) | function x() {
  function O (line 916) | function O() {}
  function D (line 917) | function D() {
  function R (line 920) | function R() {
  function b (line 923) | function b(t) {
  function F (line 926) | function F() {
  function C (line 929) | function C() {
  function N (line 932) | function N() {
  function B (line 935) | function B() {
  function U (line 938) | function U() {}
  function G (line 939) | function G() {}
  function Y (line 940) | function Y(t) {
  function k (line 943) | function k() {}
  function V (line 944) | function V() {
  function X (line 947) | function X() {
  function z (line 950) | function z() {
  function H (line 953) | function H(t) {
  function W (line 956) | function W() {
  function j (line 959) | function j() {
  function q (line 962) | function q() {
  function J (line 965) | function J() {
  function Q (line 968) | function Q(t, i) {
  function N (line 971) | function N() {
  function B (line 974) | function B() {
  function Z (line 977) | function Z() {
  function K (line 980) | function K(t) {
  function tt (line 983) | function tt() {
  function it (line 986) | function it(t) {
  function et (line 989) | function et(t) {
  function rt (line 992) | function rt() {}
  function ot (line 993) | function ot() {
  function nt (line 996) | function nt(t, i) {
  function st (line 999) | function st() {
  function _t (line 1002) | function _t(t) {
  function at (line 1005) | function at() {}
  function ht (line 1006) | function ht() {}
  function lt (line 1007) | function lt(t) {
  function $t (line 1010) | function $t() {
  function ut (line 1013) | function ut(t) {
  function pt (line 1016) | function pt() {
  function ft (line 1019) | function ft(t) {
  function ct (line 1022) | function ct() {
  function dt (line 1025) | function dt() {
  function gt (line 1028) | function gt() {
  function yt (line 1031) | function yt(t) {
  function mt (line 1034) | function mt(t) {
  function Tt (line 1037) | function Tt(t, i, e) {
  function Pt (line 1040) | function Pt(t, i, e) {
  function St (line 1043) | function St(t) {
  function vt (line 1046) | function vt() {}
  function Lt (line 1047) | function Lt() {}
  function Mt (line 1048) | function Mt(t) {
  function Et (line 1051) | function Et() {}
  function t (line 3873) | function t(t, i) {
  function r (line 3906) | function r(t) {
  function o (line 3912) | function o() {
  function r (line 3978) | function r() {}
  function r (line 4029) | function r(t) {
  function o (line 4035) | function o() {
  function r (line 4174) | function r() {

FILE: themes/waifu_plugin/waifu-tips.js
  function empty (line 97) | function empty(obj) {return typeof obj=="undefined"||obj==null||obj==""?...
  function getRandText (line 98) | function getRandText(text) {return Array.isArray(text) ? text[Math.floor...
  function showMessage (line 100) | function showMessage(text, timeout, flag) {
  function hideMessage (line 114) | function hideMessage(timeout) {
  function initModel (line 121) | function initModel(waifuPath, type) {
  function loadModel (line 201) | function loadModel(modelId, modelTexturesId=0) {
  function loadTipsMessage (line 211) | function loadTipsMessage(result) {

FILE: themes/welcome.js
  class WelcomeMessage (line 1) | class WelcomeMessage {
    method constructor (line 2) | constructor() {
    method begin_render (line 108) | begin_render() {
    method startAutoUpdate (line 112) | async startAutoUpdate() {
    method startRefleshCards (line 118) | async startRefleshCards() {
    method reflesh_cards (line 131) | async reflesh_cards() {
    method shuffle (line 192) | shuffle(array) {
    method can_display (line 210) | async can_display() {
    method update (line 232) | async update() {
    method showCard (line 244) | showCard(message) {
    method showWelcome (line 281) | async showWelcome() {
    method removeWelcome (line 325) | async removeWelcome() {
    method isChatbotEmpty (line 346) | async isChatbotEmpty() {
  class PageFocusHandler (line 356) | class PageFocusHandler {
    method constructor (line 357) | constructor() {
    method handleFocus (line 366) | handleFocus() {
    method addFocusCallback (line 374) | addFocusCallback(callback) {

FILE: toolbox.py
  class ChatBotWithCookies (line 61) | class ChatBotWithCookies(list):
    method __init__ (line 62) | def __init__(self, cookie):
    method write_list (line 78) | def write_list(self, list):
    method get_list (line 82) | def get_list(self):
    method get_cookies (line 85) | def get_cookies(self):
    method get_user (line 88) | def get_user(self):
  function ArgsGeneralWrapper (line 93) | def ArgsGeneralWrapper(f):
  function update_ui (line 161) | def update_ui(chatbot:ChatBotWithCookies, history:list, msg:str="正常", **...
  function update_ui_latest_msg (line 194) | def update_ui_latest_msg(lastmsg:str, chatbot:ChatBotWithCookies, histor...
  function trimmed_format_exc (line 206) | def trimmed_format_exc():
  function trimmed_format_exc_markdown (line 215) | def trimmed_format_exc_markdown():
  class FriendlyException (line 219) | class FriendlyException(Exception):
    method generate_error_html (line 220) | def generate_error_html(self):
  function CatchException (line 228) | def CatchException(f):
  function HotReload (line 256) | def HotReload(f):
  function get_reduce_token_percent (line 298) | def get_reduce_token_percent(text:str):
  function write_history_to_file (line 316) | def write_history_to_file(
  function regular_txt_to_markdown (line 351) | def regular_txt_to_markdown(text:str):
  function report_exception (line 361) | def report_exception(chatbot:ChatBotWithCookies, history:list, a:str, b:...
  function find_free_port (line 369) | def find_free_port()->int:
  function find_recent_files (line 382) | def find_recent_files(directory:str)->List[str]:
  function file_already_in_downloadzone (line 407) | def file_already_in_downloadzone(file:str, user_path:str):
  function promote_file_to_downloadzone (line 419) | def promote_file_to_downloadzone(file:str, rename_file:str=None, chatbot...
  function disable_auto_promotion (line 454) | def disable_auto_promotion(chatbot:ChatBotWithCookies):
  function del_outdated_uploads (line 459) | def del_outdated_uploads(outdate_time_seconds:float, target_path_base:st...
  function to_markdown_tabs (line 479) | def to_markdown_tabs(head: list, tabs: list, alignment=":---:", column=F...
  function on_file_uploaded (line 511) | def on_file_uploaded(
  function generate_file_link (line 577) | def generate_file_link(report_files:List[str]):
  function on_report_generated (line 586) | def on_report_generated(cookies:dict, files:List[str], chatbot:ChatBotWi...
  function load_chat_cookies (line 603) | def load_chat_cookies():
  function clear_line_break (line 650) | def clear_line_break(txt):
  class DummyWith (line 657) | class DummyWith:
    method __enter__ (line 668) | def __enter__(self):
    method __exit__ (line 671) | def __exit__(self, exc_type, exc_value, traceback):
  function run_gradio_in_subpath (line 675) | def run_gradio_in_subpath(demo, auth, port, custom_path):
  function auto_context_clip (line 723) | def auto_context_clip(current, history, policy='search_optimal'):
  function zip_folder (line 745) | def zip_folder(source_folder, dest_folder, zip_name):
  function zip_result (line 778) | def zip_result(folder):
  function gen_time_str (line 784) | def gen_time_str():
  function get_log_folder (line 790) | def get_log_folder(user=default_user_name, plugin_name="shared"):
  function get_upload_folder (line 803) | def get_upload_folder(user=default_user_name, tag=None):
  function is_the_upload_folder (line 814) | def is_the_upload_folder(string):
  function get_user (line 824) | def get_user(chatbotwithcookies:ChatBotWithCookies):
  class ProxyNetworkActivate (line 828) | class ProxyNetworkActivate:
    method __init__ (line 833) | def __init__(self, task=None) -> None:
    method __enter__ (line 845) | def __enter__(self):
    method __exit__ (line 860) | def __exit__(self, exc_type, exc_value, traceback):
  function Singleton (line 869) | def Singleton(cls):
  function get_pictures_list (line 883) | def get_pictures_list(path):
  function have_any_recent_upload_image_files (line 890) | def have_any_recent_upload_image_files(chatbot:ChatBotWithCookies, pop:b...
  function every_image_file_in_path (line 911) | def every_image_file_in_path(chatbot:ChatBotWithCookies):
  function encode_image (line 924) | def encode_image(image_path):
  function get_max_token (line 929) | def get_max_token(llm_kwargs):
  function check_packages (line 935) | def check_packages(packages=[]):
  function map_file_to_sha256 (line 944) | def map_file_to_sha256(file_path):
  function check_repeat_upload (line 956) | def check_repeat_upload(new_pdf_path, pdf_hash):
  function log_chat (line 993) | def log_chat(llm_model: str, input_str: str, output_str: str):
Condensed preview — 418 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (8,092K chars).
[
  {
    "path": ".dockerignore",
    "chars": 46,
    "preview": ".venv\n.github\n.vscode\ngpt_log\ntests\nREADME.md\n"
  },
  {
    "path": ".gitattributes",
    "chars": 157,
    "preview": "*.h linguist-detectable=false\n*.cpp linguist-detectable=false\n*.tex linguist-detectable=false\n*.cs linguist-detectable=f"
  },
  {
    "path": ".gitignore",
    "chars": 2421,
    "preview": "# Byte-compiled / optimized / DLL files\r\n__pycache__/\r\n*.py[cod]\r\n*$py.class\r\n\r\n# C extensions\r\n*.so\r\n\r\n# Distribution /"
  },
  {
    "path": ".pre-commit-config.yaml",
    "chars": 1159,
    "preview": "repos:\n  - repo: https://github.com/pre-commit/pre-commit-hooks\n    rev: v4.4.0\n    hooks:\n      - id: trailing-whitespa"
  },
  {
    "path": "Dockerfile",
    "chars": 1437,
    "preview": "# 此Dockerfile适用于“无本地模型”的迷你运行环境构建\n# 如果需要使用chatglm等本地模型或者latex运行依赖,请参考 docker-compose.yml\n# - 如何构建: 先修改 `config.py`, 然后 `d"
  },
  {
    "path": "LICENSE",
    "chars": 35823,
    "preview": "                    GNU GENERAL PUBLIC LICENSE\r\n                       Version 3, 29 June 2007\r\n\r\n Copyright (C) 2007 Fr"
  },
  {
    "path": "README.md",
    "chars": 20573,
    "preview": "> [!IMPORTANT]\n>\n> `master主分支`最新动态(2026.1.25): 新GUI前端测试中,Coming Soon<br/>\n> `master主分支`最新动态(2025.8.23): Dockerfile构建效率大幅"
  },
  {
    "path": "check_proxy.py",
    "chars": 9983,
    "preview": "from loguru import logger\n\ndef check_proxy(proxies, return_ip=False):\n    \"\"\"\n    检查代理配置并返回结果。\n\n    Args:\n        proxie"
  },
  {
    "path": "config.py",
    "chars": 14104,
    "preview": "\"\"\"\n    以下所有配置也都支持利用环境变量覆写,环境变量配置格式见docker-compose.yml。\n    读取优先级:环境变量 > config_private.py > config.py\n    --- --- --- -"
  },
  {
    "path": "core_functional.py",
    "chars": 7909,
    "preview": "# 'primary' 颜色对应 theme.py 中的 primary_hue\n# 'secondary' 颜色对应 theme.py 中的 neutral_hue\n# 'stop' 颜色对应 theme.py 中的 color_er\ni"
  },
  {
    "path": "crazy_functional.py",
    "chars": 27138,
    "preview": "from toolbox import HotReload  # HotReload 的意思是热更新,修改函数插件后,不需要重启程序,代码直接生效\nfrom toolbox import trimmed_format_exc\nfrom lo"
  },
  {
    "path": "crazy_functions/Academic_Conversation.py",
    "chars": 11348,
    "preview": "import re\nimport os\nimport asyncio\nfrom typing import List, Dict, Tuple\nfrom dataclasses import dataclass\nfrom textwrap "
  },
  {
    "path": "crazy_functions/Arxiv_Downloader.py",
    "chars": 5909,
    "preview": "import re, requests, unicodedata, os\nfrom toolbox import update_ui, get_log_folder\nfrom toolbox import write_history_to_"
  },
  {
    "path": "crazy_functions/Audio_Assistant.py",
    "chars": 8503,
    "preview": "from toolbox import update_ui\nfrom toolbox import CatchException, get_conf, markdown_convertion\nfrom request_llms.bridge"
  },
  {
    "path": "crazy_functions/Audio_Summary.py",
    "chars": 7118,
    "preview": "from toolbox import CatchException, report_exception, select_api_key, update_ui, get_conf\nfrom crazy_functions.crazy_uti"
  },
  {
    "path": "crazy_functions/Commandline_Assistant.py",
    "chars": 1062,
    "preview": "from toolbox import CatchException, update_ui, gen_time_str\nfrom crazy_functions.crazy_utils import request_gpt_model_in"
  },
  {
    "path": "crazy_functions/Conversation_To_File.py",
    "chars": 14808,
    "preview": "import re\nfrom toolbox import CatchException, update_ui, promote_file_to_downloadzone, get_log_folder, get_user, update_"
  },
  {
    "path": "crazy_functions/Document_Conversation.py",
    "chars": 20434,
    "preview": "import os\nimport threading\nimport time\nfrom dataclasses import dataclass\nfrom typing import List, Tuple, Dict, Generator"
  },
  {
    "path": "crazy_functions/Document_Conversation_Wrap.py",
    "chars": 1415,
    "preview": "import random\nfrom toolbox import get_conf\nfrom crazy_functions.Document_Conversation import 批量文件询问\nfrom crazy_functions"
  },
  {
    "path": "crazy_functions/Document_Optimize.py",
    "chars": 27310,
    "preview": "import os\nimport time\nimport glob\nimport re\nimport threading\nfrom typing import Dict, List, Generator, Tuple\nfrom datacl"
  },
  {
    "path": "crazy_functions/Dynamic_Function_Generate.py",
    "chars": 10485,
    "preview": "# 本源代码中, ⭐ = 关键步骤\n\"\"\"\n测试:\n    - 裁剪图像,保留下半部分\n    - 交换图像的蓝色通道和红色通道\n    - 将图像转为灰度图像\n    - 将csv文件转excel表格\n\nTesting:\n    - Cr"
  },
  {
    "path": "crazy_functions/Google_Scholar_Assistant_Legacy.py",
    "chars": 7285,
    "preview": "from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive\nfrom toolbox import CatchException"
  },
  {
    "path": "crazy_functions/Helpers.py",
    "chars": 1891,
    "preview": "# encoding: utf-8\n# @Time   : 2023/4/19\n# @Author : Spike\n# @Descr   :\nfrom toolbox import update_ui, get_conf, get_user"
  },
  {
    "path": "crazy_functions/Image_Generate.py",
    "chars": 18812,
    "preview": "import requests\nimport base64\nimport json\nimport time\nimport os\nfrom request_llms.bridge_chatgpt import make_multimodal_"
  },
  {
    "path": "crazy_functions/Image_Generate_Wrap.py",
    "chars": 3174,
    "preview": "\nfrom toolbox import get_conf, update_ui\nfrom crazy_functions.Image_Generate import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE"
  },
  {
    "path": "crazy_functions/Interactive_Func_Template.py",
    "chars": 2829,
    "preview": "from toolbox import CatchException, update_ui\nfrom crazy_functions.crazy_utils import request_gpt_model_in_new_thread_wi"
  },
  {
    "path": "crazy_functions/Interactive_Mini_Game.py",
    "chars": 1754,
    "preview": "from toolbox import CatchException, update_ui, update_ui_latest_msg\nfrom crazy_functions.multi_stage.multi_stage_utils i"
  },
  {
    "path": "crazy_functions/Internet_GPT.py",
    "chars": 13733,
    "preview": "import requests\nimport random\nimport time\nimport re\nimport json\nfrom bs4 import BeautifulSoup\nfrom functools import lru_"
  },
  {
    "path": "crazy_functions/Internet_GPT_Bing_Legacy.py",
    "chars": 4273,
    "preview": "from toolbox import CatchException, update_ui\nfrom crazy_functions.crazy_utils import request_gpt_model_in_new_thread_wi"
  },
  {
    "path": "crazy_functions/Internet_GPT_Legacy.py",
    "chars": 4346,
    "preview": "from toolbox import CatchException, update_ui\nfrom crazy_functions.crazy_utils import request_gpt_model_in_new_thread_wi"
  },
  {
    "path": "crazy_functions/Internet_GPT_Wrap.py",
    "chars": 2326,
    "preview": "import random\nfrom toolbox import get_conf\nfrom crazy_functions.Internet_GPT import 连接网络回答问题\nfrom crazy_functions.plugin"
  },
  {
    "path": "crazy_functions/Latex_Function.py",
    "chars": 27606,
    "preview": "from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone, check_repeat_"
  },
  {
    "path": "crazy_functions/Latex_Function_Wrap.py",
    "chars": 3902,
    "preview": "\nfrom crazy_functions.Latex_Function import Latex翻译中文并重新编译PDF, PDF翻译中文并重新编译PDF\nfrom crazy_functions.plugin_template.plug"
  },
  {
    "path": "crazy_functions/Latex_Project_Polish.py",
    "chars": 10330,
    "preview": "from toolbox import update_ui, trimmed_format_exc, promote_file_to_downloadzone, get_log_folder\nfrom toolbox import Catc"
  },
  {
    "path": "crazy_functions/Latex_Project_Translate_Legacy.py",
    "chars": 7519,
    "preview": "from toolbox import update_ui, promote_file_to_downloadzone\nfrom toolbox import CatchException, report_exception, write_"
  },
  {
    "path": "crazy_functions/Markdown_Translate.py",
    "chars": 11798,
    "preview": "import glob, shutil, os, re\nfrom loguru import logger\nfrom toolbox import update_ui, trimmed_format_exc, gen_time_str\nfr"
  },
  {
    "path": "crazy_functions/Math_Animation_Gen.py",
    "chars": 6028,
    "preview": "import os\nfrom loguru import logger\nfrom toolbox import CatchException, update_ui, gen_time_str, promote_file_to_downloa"
  },
  {
    "path": "crazy_functions/Mermaid_Figure_Gen.py",
    "chars": 12606,
    "preview": "from toolbox import CatchException, update_ui, report_exception\nfrom crazy_functions.crazy_utils import request_gpt_mode"
  },
  {
    "path": "crazy_functions/Multi_Agent_Legacy.py",
    "chars": 4389,
    "preview": "# 本源代码中, ⭐ = 关键步骤\n\"\"\"\n测试:\n    - show me the solution of $x^2=cos(x)$, solve this problem with figure, and plot and save "
  },
  {
    "path": "crazy_functions/Multi_LLM_Query.py",
    "chars": 2726,
    "preview": "from toolbox import CatchException, update_ui, get_conf\nfrom crazy_functions.crazy_utils import request_gpt_model_in_new"
  },
  {
    "path": "crazy_functions/PDF_QA.py",
    "chars": 5403,
    "preview": "from loguru import logger\nfrom toolbox import update_ui\nfrom toolbox import CatchException, report_exception\nfrom crazy_"
  },
  {
    "path": "crazy_functions/PDF_Summary.py",
    "chars": 7783,
    "preview": "from loguru import logger\n\nfrom toolbox import update_ui, promote_file_to_downloadzone, gen_time_str\nfrom toolbox import"
  },
  {
    "path": "crazy_functions/PDF_Translate.py",
    "chars": 3880,
    "preview": "from toolbox import CatchException, check_packages, get_conf\nfrom toolbox import update_ui, update_ui_latest_msg, disabl"
  },
  {
    "path": "crazy_functions/PDF_Translate_Nougat.py",
    "chars": 5126,
    "preview": "from toolbox import CatchException, report_exception, get_log_folder, gen_time_str\nfrom toolbox import update_ui, promot"
  },
  {
    "path": "crazy_functions/PDF_Translate_Wrap.py",
    "chars": 1414,
    "preview": "from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty\nfrom .PDF_Trans"
  },
  {
    "path": "crazy_functions/Paper_Abstract_Writer.py",
    "chars": 3315,
    "preview": "from toolbox import update_ui\nfrom toolbox import CatchException, report_exception\nfrom toolbox import write_history_to_"
  },
  {
    "path": "crazy_functions/Paper_Reading.py",
    "chars": 13006,
    "preview": "import os\nimport time\nimport glob\nfrom pathlib import Path\nfrom datetime import datetime\nfrom dataclasses import datacla"
  },
  {
    "path": "crazy_functions/Program_Comment_Gen.py",
    "chars": 2642,
    "preview": "from loguru import logger\nfrom toolbox import update_ui\nfrom toolbox import CatchException, report_exception\nfrom toolbo"
  },
  {
    "path": "crazy_functions/Rag_Interface.py",
    "chars": 6564,
    "preview": "import os,glob\nfrom typing import List\n\nfrom shared_utils.fastapi_server import validate_path_safety\n\nfrom toolbox impor"
  },
  {
    "path": "crazy_functions/Social_Helper.py",
    "chars": 7182,
    "preview": "import pickle, os, random\nfrom toolbox import CatchException, update_ui, get_conf, get_log_folder, update_ui_latest_msg\n"
  },
  {
    "path": "crazy_functions/SourceCode_Analyse.py",
    "chars": 20952,
    "preview": "from toolbox import update_ui, promote_file_to_downloadzone\nfrom toolbox import CatchException, report_exception, write_"
  },
  {
    "path": "crazy_functions/SourceCode_Analyse_JupyterNotebook.py",
    "chars": 5900,
    "preview": "from toolbox import update_ui\nfrom toolbox import CatchException, report_exception\nfrom toolbox import write_history_to_"
  },
  {
    "path": "crazy_functions/SourceCode_Comment.py",
    "chars": 7787,
    "preview": "import os, copy, time\nfrom toolbox import CatchException, report_exception, update_ui, zip_result, promote_file_to_downl"
  },
  {
    "path": "crazy_functions/SourceCode_Comment_Wrap.py",
    "chars": 1434,
    "preview": "\nfrom toolbox import get_conf, update_ui\nfrom crazy_functions.plugin_template.plugin_class_template import GptAcademicPl"
  },
  {
    "path": "crazy_functions/Vectorstore_QA.py",
    "chars": 6075,
    "preview": "from toolbox import CatchException, update_ui, ProxyNetworkActivate, update_ui_latest_msg, get_log_folder, get_user\nfrom"
  },
  {
    "path": "crazy_functions/VideoResource_GPT.py",
    "chars": 8907,
    "preview": "import requests\nimport random\nimport time\nimport re\nimport json\nfrom bs4 import BeautifulSoup\nfrom functools import lru_"
  },
  {
    "path": "crazy_functions/Void_Terminal.py",
    "chars": 8091,
    "preview": "\"\"\"\nExplanation of the Void Terminal Plugin:\n\nPlease describe in natural language what you want to do.\n\n1. You can open "
  },
  {
    "path": "crazy_functions/Word_Summary.py",
    "chars": 5188,
    "preview": "from toolbox import update_ui\nfrom toolbox import CatchException, report_exception\nfrom toolbox import write_history_to_"
  },
  {
    "path": "crazy_functions/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "crazy_functions/agent_fns/auto_agent.py",
    "chars": 976,
    "preview": "from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc, ProxyNetworkActivate\nfrom toolbox impor"
  },
  {
    "path": "crazy_functions/agent_fns/echo_agent.py",
    "chars": 878,
    "preview": "from crazy_functions.agent_fns.pipe import PluginMultiprocessManager, PipeCom\nfrom loguru import logger\n\nclass EchoDemo("
  },
  {
    "path": "crazy_functions/agent_fns/general.py",
    "chars": 5880,
    "preview": "from toolbox import trimmed_format_exc, get_conf, ProxyNetworkActivate\nfrom crazy_functions.agent_fns.pipe import Plugin"
  },
  {
    "path": "crazy_functions/agent_fns/persistent.py",
    "chars": 399,
    "preview": "from toolbox import Singleton\n@Singleton\nclass GradioMultiuserManagerForPersistentClasses():\n    def __init__(self):\n   "
  },
  {
    "path": "crazy_functions/agent_fns/pipe.py",
    "chars": 8570,
    "preview": "from toolbox import get_log_folder, update_ui, gen_time_str, get_conf, promote_file_to_downloadzone\nfrom crazy_functions"
  },
  {
    "path": "crazy_functions/agent_fns/python_comment_agent.py",
    "chars": 16284,
    "preview": "import datetime\nimport re\nimport os\nfrom loguru import logger\nfrom textwrap import dedent\nfrom toolbox import CatchExcep"
  },
  {
    "path": "crazy_functions/agent_fns/python_comment_compare.html",
    "chars": 980,
    "preview": "<!DOCTYPE html>\n<html lang=\"zh-CN\">\n<head>\n    <style>ADVANCED_CSS</style>\n    <meta charset=\"UTF-8\">\n    <title>源文件对比</"
  },
  {
    "path": "crazy_functions/agent_fns/watchdog.py",
    "chars": 821,
    "preview": "import threading, time\nfrom loguru import logger\n\nclass WatchDog():\n    def __init__(self, timeout, bark_fn, interval=3,"
  },
  {
    "path": "crazy_functions/ast_fns/comment_remove.py",
    "chars": 2114,
    "preview": "import token\nimport tokenize\nimport copy\nimport io\n\n\ndef remove_python_comments(input_source: str) -> str:\n    source_fl"
  },
  {
    "path": "crazy_functions/crazy_utils.py",
    "chars": 26595,
    "preview": "import os\nimport threading\nfrom loguru import logger\nfrom shared_utils.char_visual_effect import scrolling_visual_effect"
  },
  {
    "path": "crazy_functions/diagram_fns/file_tree.py",
    "chars": 5352,
    "preview": "import os\nfrom textwrap import indent\nfrom loguru import logger\n\nclass FileNode:\n    def __init__(self, name, build_mani"
  },
  {
    "path": "crazy_functions/doc_fns/AI_review_doc.py",
    "chars": 25278,
    "preview": "import os\nimport time\nfrom abc import ABC, abstractmethod\nfrom datetime import datetime\nfrom docx import Document\nfrom d"
  },
  {
    "path": "crazy_functions/doc_fns/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "crazy_functions/doc_fns/batch_file_query_doc.py",
    "chars": 25279,
    "preview": "import os\nimport time\nfrom abc import ABC, abstractmethod\nfrom datetime import datetime\nfrom docx import Document\nfrom d"
  },
  {
    "path": "crazy_functions/doc_fns/content_folder.py",
    "chars": 6400,
    "preview": "from abc import ABC, abstractmethod\nfrom typing import Any, Dict, Optional, Type, TypeVar, Generic, Union\n\nfrom dataclas"
  },
  {
    "path": "crazy_functions/doc_fns/conversation_doc/excel_doc.py",
    "chars": 5798,
    "preview": "import re\nimport os\nimport pandas as pd\nfrom datetime import datetime\nfrom openpyxl import Workbook\n\n\nclass ExcelTableFo"
  },
  {
    "path": "crazy_functions/doc_fns/conversation_doc/html_doc.py",
    "chars": 5244,
    "preview": "\n\nclass HtmlFormatter:\n    \"\"\"聊天记录HTML格式生成器\"\"\"\n\n    def __init__(self, chatbot, history):\n        self.chatbot = chatbot"
  },
  {
    "path": "crazy_functions/doc_fns/conversation_doc/markdown_doc.py",
    "chars": 949,
    "preview": "\nclass MarkdownFormatter:\n    \"\"\"Markdown格式文档生成器 - 用于生成对话记录的markdown文档\"\"\"\n\n    def __init__(self):\n        self.content "
  },
  {
    "path": "crazy_functions/doc_fns/conversation_doc/pdf_doc.py",
    "chars": 5228,
    "preview": "from datetime import datetime\nimport os\nimport re\nfrom reportlab.pdfbase import pdfmetrics\nfrom reportlab.pdfbase.ttfont"
  },
  {
    "path": "crazy_functions/doc_fns/conversation_doc/txt_doc.py",
    "chars": 2968,
    "preview": "\nimport re\n\n\ndef convert_markdown_to_txt(markdown_text):\n    \"\"\"Convert markdown text to plain text while preserving for"
  },
  {
    "path": "crazy_functions/doc_fns/conversation_doc/word2pdf.py",
    "chars": 4893,
    "preview": "from docx2pdf import convert\nimport os\nimport platform\nimport subprocess\nfrom typing import Union\nfrom pathlib import Pa"
  },
  {
    "path": "crazy_functions/doc_fns/conversation_doc/word_doc.py",
    "chars": 7311,
    "preview": "import re\nfrom docx import Document\nfrom docx.shared import Cm, Pt\nfrom docx.enum.text import WD_PARAGRAPH_ALIGNMENT, WD"
  },
  {
    "path": "crazy_functions/doc_fns/read_fns/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "crazy_functions/doc_fns/read_fns/docx_reader.py",
    "chars": 172,
    "preview": "import nltk\nnltk.data.path.append('~/nltk_data')\nnltk.download('averaged_perceptron_tagger', download_dir='~/nltk_data')"
  },
  {
    "path": "crazy_functions/doc_fns/read_fns/excel_reader.py",
    "chars": 9631,
    "preview": "from __future__ import annotations\n\nimport pandas as pd\nimport numpy as np\nfrom pathlib import Path\nfrom typing import O"
  },
  {
    "path": "crazy_functions/doc_fns/read_fns/markitdown/markdown_reader.py",
    "chars": 10585,
    "preview": "from __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Optional, Set, Dict, Union, List\nfrom da"
  },
  {
    "path": "crazy_functions/doc_fns/read_fns/unstructured_all/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "crazy_functions/doc_fns/read_fns/unstructured_all/paper_metadata_extractor.py",
    "chars": 17008,
    "preview": "from __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Optional, Set, Dict, Union, List\nfrom da"
  },
  {
    "path": "crazy_functions/doc_fns/read_fns/unstructured_all/paper_structure_extractor.py",
    "chars": 43713,
    "preview": "from __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Optional, Set, Dict, Union, List, Tuple,"
  },
  {
    "path": "crazy_functions/doc_fns/read_fns/unstructured_all/unstructured_md.py",
    "chars": 2431,
    "preview": "from pathlib import Path\nfrom crazy_functions.doc_fns.read_fns.unstructured_all.paper_structure_extractor import PaperSt"
  },
  {
    "path": "crazy_functions/doc_fns/read_fns/web_reader.py",
    "chars": 5665,
    "preview": "from __future__ import annotations\n\nfrom dataclasses import dataclass, field\nfrom typing import Dict, Optional, Union\nfr"
  },
  {
    "path": "crazy_functions/doc_fns/text_content_loader.py",
    "chars": 15836,
    "preview": "import os\nimport re\nimport glob\nimport time\nimport queue\nimport threading\nfrom concurrent.futures import ThreadPoolExecu"
  },
  {
    "path": "crazy_functions/game_fns/game_ascii_art.py",
    "chars": 2322,
    "preview": "from toolbox import CatchException, update_ui, update_ui_latest_msg\nfrom crazy_functions.multi_stage.multi_stage_utils i"
  },
  {
    "path": "crazy_functions/game_fns/game_interactive_story.py",
    "chars": 7645,
    "preview": "prompts_hs = \"\"\" 请以“{headstart}”为开头,编写一个小说的第一幕。\n\n- 尽量短,不要包含太多情节,因为你接下来将会与用户互动续写下面的情节,要留出足够的互动空间。\n- 出现人物时,给出人物的名字。\n- 积极地运"
  },
  {
    "path": "crazy_functions/game_fns/game_utils.py",
    "chars": 1630,
    "preview": "\nfrom crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError\nfrom request_llms.bridge_all import predict"
  },
  {
    "path": "crazy_functions/gen_fns/gen_fns_shared.py",
    "chars": 3048,
    "preview": "import time\nimport importlib\nfrom toolbox import trimmed_format_exc, gen_time_str, get_log_folder\nfrom toolbox import Ca"
  },
  {
    "path": "crazy_functions/ipc_fns/mp.py",
    "chars": 1490,
    "preview": "import platform\nimport pickle\nimport multiprocessing\n\ndef run_in_subprocess_wrapper_func(v_args):\n    func, args, kwargs"
  },
  {
    "path": "crazy_functions/json_fns/pydantic_io.py",
    "chars": 4147,
    "preview": "\"\"\"\nhttps://github.com/langchain-ai/langchain/blob/master/docs/extras/modules/model_io/output_parsers/pydantic.ipynb\n\nEx"
  },
  {
    "path": "crazy_functions/json_fns/select_tool.py",
    "chars": 807,
    "preview": "from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError\n\ndef structure_output(txt, prompt, err_msg, "
  },
  {
    "path": "crazy_functions/latex_fns/latex_actions.py",
    "chars": 28984,
    "preview": "import os\nimport re\nimport shutil\nimport numpy as np\nfrom loguru import logger\nfrom toolbox import update_ui, update_ui_"
  },
  {
    "path": "crazy_functions/latex_fns/latex_pickle_io.py",
    "chars": 1507,
    "preview": "import pickle\n\n\nclass SafeUnpickler(pickle.Unpickler):\n\n    def get_safe_classes(self):\n        from crazy_functions.lat"
  },
  {
    "path": "crazy_functions/latex_fns/latex_toolbox.py",
    "chars": 35536,
    "preview": "import os\nimport re\nimport shutil\nimport numpy as np\nfrom loguru import logger\n\nPRESERVE = 0\nTRANSFORM = 1\n\npj = os.path"
  },
  {
    "path": "crazy_functions/live_audio/aliyunASR.py",
    "chars": 9475,
    "preview": "import time, json, sys, struct\nimport numpy as np\nfrom loguru import logger as logging\nfrom scipy.io.wavfile import WAVE"
  },
  {
    "path": "crazy_functions/live_audio/audio_io.py",
    "chars": 1461,
    "preview": "import numpy as np\nfrom scipy import interpolate\n\ndef Singleton(cls):\n    _instance = {}\n\n    def _singleton(*args, **ka"
  },
  {
    "path": "crazy_functions/media_fns/get_media.py",
    "chars": 2179,
    "preview": "from toolbox import update_ui, get_conf, promote_file_to_downloadzone, update_ui_latest_msg, generate_file_link\nfrom sha"
  },
  {
    "path": "crazy_functions/multi_stage/multi_stage_utils.py",
    "chars": 3356,
    "preview": "from pydantic import BaseModel, Field\nfrom typing import List\nfrom toolbox import update_ui_latest_msg, disable_auto_pro"
  },
  {
    "path": "crazy_functions/paper_fns/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "crazy_functions/paper_fns/auto_git/handlers/base_handler.py",
    "chars": 13248,
    "preview": "from abc import ABC, abstractmethod\nfrom typing import List, Dict, Any\nfrom ..query_analyzer import SearchCriteria\nfrom "
  },
  {
    "path": "crazy_functions/paper_fns/auto_git/handlers/code_handler.py",
    "chars": 4751,
    "preview": "from typing import List, Dict, Any\nfrom .base_handler import BaseHandler\nfrom ..query_analyzer import SearchCriteria\nimp"
  },
  {
    "path": "crazy_functions/paper_fns/auto_git/handlers/repo_handler.py",
    "chars": 5250,
    "preview": "from typing import List, Dict, Any\nfrom .base_handler import BaseHandler\nfrom ..query_analyzer import SearchCriteria\nimp"
  },
  {
    "path": "crazy_functions/paper_fns/auto_git/handlers/topic_handler.py",
    "chars": 6253,
    "preview": "from typing import List, Dict, Any\nfrom .base_handler import BaseHandler\nfrom ..query_analyzer import SearchCriteria\nimp"
  },
  {
    "path": "crazy_functions/paper_fns/auto_git/handlers/user_handler.py",
    "chars": 4836,
    "preview": "from typing import List, Dict, Any\nfrom .base_handler import BaseHandler\nfrom ..query_analyzer import SearchCriteria\nimp"
  },
  {
    "path": "crazy_functions/paper_fns/auto_git/query_analyzer.py",
    "chars": 11678,
    "preview": "from typing import Dict, List\nfrom dataclasses import dataclass\nimport re\n\n@dataclass\nclass SearchCriteria:\n    \"\"\"搜索条件\""
  },
  {
    "path": "crazy_functions/paper_fns/auto_git/sources/github_source.py",
    "chars": 21164,
    "preview": "import aiohttp\nimport asyncio\nimport base64\nimport json\nimport random\nfrom datetime import datetime\nfrom typing import L"
  },
  {
    "path": "crazy_functions/paper_fns/document_structure_extractor.py",
    "chars": 18426,
    "preview": "from typing import List, Dict, Optional, Tuple, Union, Any\nfrom dataclasses import dataclass, field\nimport os\nimport re\n"
  },
  {
    "path": "crazy_functions/paper_fns/file2file_doc/__init__.py",
    "chars": 150,
    "preview": "from .txt_doc import TxtFormatter\nfrom .markdown_doc import MarkdownFormatter\nfrom .html_doc import HtmlFormatter\nfrom ."
  },
  {
    "path": "crazy_functions/paper_fns/file2file_doc/html_doc.py",
    "chars": 8920,
    "preview": "class HtmlFormatter:\n    \"\"\"HTML格式文档生成器 - 保留原始文档结构\"\"\"\n\n    def __init__(self, processing_type=\"文本处理\"):\n        self.proc"
  },
  {
    "path": "crazy_functions/paper_fns/file2file_doc/markdown_doc.py",
    "chars": 1019,
    "preview": "class MarkdownFormatter:\n    \"\"\"Markdown格式文档生成器 - 保留原始文档结构\"\"\"\n\n    def __init__(self):\n        self.content = []\n\n    de"
  },
  {
    "path": "crazy_functions/paper_fns/file2file_doc/txt_doc.py",
    "chars": 2407,
    "preview": "import re\n\ndef convert_markdown_to_txt(markdown_text):\n    \"\"\"Convert markdown text to plain text while preserving forma"
  },
  {
    "path": "crazy_functions/paper_fns/file2file_doc/word2pdf.py",
    "chars": 3607,
    "preview": "from docx2pdf import convert\nimport os\nimport platform\nfrom typing import Union\nfrom pathlib import Path\nfrom datetime i"
  },
  {
    "path": "crazy_functions/paper_fns/file2file_doc/word_doc.py",
    "chars": 9336,
    "preview": "import re\nfrom docx import Document\nfrom docx.shared import Cm, Pt\nfrom docx.enum.text import WD_PARAGRAPH_ALIGNMENT, WD"
  },
  {
    "path": "crazy_functions/paper_fns/github_search.py",
    "chars": 11604,
    "preview": "from typing import List, Dict, Tuple\nimport asyncio\nfrom dataclasses import dataclass\nfrom toolbox import CatchException"
  },
  {
    "path": "crazy_functions/paper_fns/journal_paper_recom.py",
    "chars": 21382,
    "preview": "import os\nimport time\nimport glob\nfrom typing import Dict, List, Generator, Tuple\nfrom dataclasses import dataclass\n\nfro"
  },
  {
    "path": "crazy_functions/paper_fns/paper_download.py",
    "chars": 9496,
    "preview": "import re\nimport os\nimport zipfile\nfrom toolbox import CatchException, update_ui, promote_file_to_downloadzone, get_log_"
  },
  {
    "path": "crazy_functions/paper_fns/reduce_aigc.py",
    "chars": 31870,
    "preview": "import os\nimport time\nimport glob\nimport re\nimport threading\nfrom typing import Dict, List, Generator, Tuple\nfrom datacl"
  },
  {
    "path": "crazy_functions/paper_fns/wiki/wikipedia_api.py",
    "chars": 12503,
    "preview": "import aiohttp\nimport asyncio\nfrom typing import List, Dict, Optional\nimport re\nimport random\nimport time\n\nclass Wikiped"
  },
  {
    "path": "crazy_functions/pdf_fns/breakdown_pdf_txt.py",
    "chars": 10687,
    "preview": "from crazy_functions.ipc_fns.mp import run_in_subprocess_with_timeout\nfrom loguru import logger\nimport time\nimport re\n\nd"
  },
  {
    "path": "crazy_functions/pdf_fns/breakdown_txt.py",
    "chars": 5141,
    "preview": "from crazy_functions.ipc_fns.mp import run_in_subprocess_with_timeout\nfrom loguru import logger\n\ndef force_breakdown(txt"
  },
  {
    "path": "crazy_functions/pdf_fns/parse_pdf.py",
    "chars": 7609,
    "preview": "from functools import lru_cache\nfrom toolbox import gen_time_str\nfrom toolbox import promote_file_to_downloadzone\nfrom t"
  },
  {
    "path": "crazy_functions/pdf_fns/parse_pdf_grobid.py",
    "chars": 1695,
    "preview": "import os\nfrom toolbox import CatchException, report_exception, get_log_folder, gen_time_str, check_packages\nfrom toolbo"
  },
  {
    "path": "crazy_functions/pdf_fns/parse_pdf_legacy.py",
    "chars": 5770,
    "preview": "from toolbox import get_log_folder\nfrom toolbox import update_ui, promote_file_to_downloadzone\nfrom toolbox import write"
  },
  {
    "path": "crazy_functions/pdf_fns/parse_pdf_via_doc2x.py",
    "chars": 12032,
    "preview": "from toolbox import get_log_folder, gen_time_str, get_conf\nfrom toolbox import update_ui, promote_file_to_downloadzone\nf"
  },
  {
    "path": "crazy_functions/pdf_fns/parse_word.py",
    "chars": 3424,
    "preview": "from crazy_functions.crazy_utils import read_and_clean_pdf_text, get_files_from_everything\nimport os\nimport re\ndef extra"
  },
  {
    "path": "crazy_functions/pdf_fns/report_gen_html.py",
    "chars": 2378,
    "preview": "from toolbox import update_ui, get_conf, trimmed_format_exc, get_log_folder\nimport os\n\n\n\n\nclass construct_html():\n    de"
  },
  {
    "path": "crazy_functions/pdf_fns/report_template.html",
    "chars": 109847,
    "preview": "<!DOCTYPE html>\n\n<head>\n  <meta charset=\"utf-8\">\n  <script>\n  !function(t,e){\"object\"==typeof exports&&\"undefined\"!=type"
  },
  {
    "path": "crazy_functions/pdf_fns/report_template_v2.html",
    "chars": 2115,
    "preview": "<!DOCTYPE html>\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\n\n<head>\n    <meta http-equiv=\"Content-Type\" content=\"text/ht"
  },
  {
    "path": "crazy_functions/plugin_template/plugin_class_template.py",
    "chars": 2572,
    "preview": "import os, json, base64\nfrom pydantic import BaseModel, Field\nfrom textwrap import dedent\nfrom typing import List\n\nclass"
  },
  {
    "path": "crazy_functions/prompts/internet.py",
    "chars": 1963,
    "preview": "SearchOptimizerPrompt=\"\"\"作为一个网页搜索助手,你的任务是结合历史记录,从不同角度,为“原问题”生成个不同版本的“检索词”,从而提高网页检索的精度。生成的问题要求指向对象清晰明确,并与“原问题语言相同”。例如:\n历史"
  },
  {
    "path": "crazy_functions/rag_fns/llama_index_worker.py",
    "chars": 5251,
    "preview": "import atexit\nfrom loguru import logger\nfrom typing import List\n\nfrom llama_index.core import Document\nfrom llama_index."
  },
  {
    "path": "crazy_functions/rag_fns/milvus_worker.py",
    "chars": 4666,
    "preview": "import llama_index\nimport os\nimport atexit\nfrom typing import List\nfrom loguru import logger\nfrom llama_index.core impor"
  },
  {
    "path": "crazy_functions/rag_fns/rag_file_support.py",
    "chars": 1706,
    "preview": "import subprocess\nimport os\n\nsupports_format = ['.csv', '.docx', '.epub', '.ipynb',  '.mbox', '.md', '.pdf',  '.txt', '."
  },
  {
    "path": "crazy_functions/rag_fns/vector_store_index.py",
    "chars": 1979,
    "preview": "from llama_index.core import VectorStoreIndex\nfrom typing import Any,  List, Optional\n\nfrom llama_index.core.callbacks.b"
  },
  {
    "path": "crazy_functions/review_fns/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "crazy_functions/review_fns/conversation_doc/endnote_doc.py",
    "chars": 2085,
    "preview": "from typing import List\nfrom crazy_functions.review_fns.data_sources.base_source import PaperMetadata\n\nclass EndNoteForm"
  },
  {
    "path": "crazy_functions/review_fns/conversation_doc/excel_doc.py",
    "chars": 5806,
    "preview": "import re\nimport os\nimport pandas as pd\nfrom datetime import datetime\n\n\nclass ExcelTableFormatter:\n    \"\"\"聊天记录中Markdown表"
  },
  {
    "path": "crazy_functions/review_fns/conversation_doc/html_doc.py",
    "chars": 16851,
    "preview": "class HtmlFormatter:\n    \"\"\"聊天记录HTML格式生成器\"\"\"\n\n    def __init__(self):\n        self.css_styles = \"\"\"\n        :root {\n    "
  },
  {
    "path": "crazy_functions/review_fns/conversation_doc/markdown_doc.py",
    "chars": 1409,
    "preview": "class MarkdownFormatter:\n    \"\"\"Markdown格式文档生成器 - 用于生成对话记录的markdown文档\"\"\"\n\n    def __init__(self):\n        self.content ="
  },
  {
    "path": "crazy_functions/review_fns/conversation_doc/reference_formatter.py",
    "chars": 6246,
    "preview": "from typing import List\nfrom crazy_functions.review_fns.data_sources.base_source import PaperMetadata\nimport re\n\nclass R"
  },
  {
    "path": "crazy_functions/review_fns/conversation_doc/word2pdf.py",
    "chars": 4181,
    "preview": "from docx2pdf import convert\nimport os\nimport platform\nfrom typing import Union\nfrom pathlib import Path\nfrom datetime i"
  },
  {
    "path": "crazy_functions/review_fns/conversation_doc/word_doc.py",
    "chars": 10186,
    "preview": "import re\nfrom docx import Document\nfrom docx.shared import Cm, Pt\nfrom docx.enum.text import WD_PARAGRAPH_ALIGNMENT, WD"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "crazy_functions/review_fns/data_sources/adsabs_source.py",
    "chars": 9152,
    "preview": "from typing import List, Optional, Dict, Union\nfrom datetime import datetime\nimport aiohttp\nimport asyncio\nfrom crazy_fu"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/arxiv_source.py",
    "chars": 20052,
    "preview": "import arxiv\nfrom typing import List, Optional, Union, Literal, Dict\nfrom datetime import datetime\nfrom .base_source imp"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/base_source.py",
    "chars": 2694,
    "preview": "from abc import ABC, abstractmethod\nfrom typing import List, Dict, Optional\nfrom dataclasses import dataclass\n\nclass Pap"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/cas_if.json",
    "chars": 2975420,
    "preview": "[{\"journal\":\"CA-A CANCER JOURNAL FOR CLINICIANS\",\"jabb\":\"CA-CANCER J CLIN\",\"issn\":\"0007-9235\",\"eissn\":\"1542-4863\",\"IF\":\""
  },
  {
    "path": "crazy_functions/review_fns/data_sources/crossref_source.py",
    "chars": 14632,
    "preview": "import aiohttp\nfrom typing import List, Dict, Optional\nfrom datetime import datetime\nfrom crazy_functions.review_fns.dat"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/elsevier_source.py",
    "chars": 14831,
    "preview": "from typing import List, Optional, Dict, Union\nfrom datetime import datetime\nimport aiohttp\nimport asyncio\nfrom crazy_fu"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/github_source.py",
    "chars": 21082,
    "preview": "import aiohttp\nimport asyncio\nimport base64\nimport json\nimport random\nfrom datetime import datetime\nfrom typing import L"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/journal_metrics.py",
    "chars": 4472,
    "preview": "import json\nimport os\nfrom typing import Dict, Optional\n\nclass JournalMetrics:\n    \"\"\"期刊指标管理类\"\"\"\n\n    def __init__(self)"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/openalex_source.py",
    "chars": 5697,
    "preview": "import aiohttp\nfrom typing import List, Dict, Optional\nfrom datetime import datetime\nfrom .base_source import DataSource"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/pubmed_source.py",
    "chars": 14023,
    "preview": "from typing import List, Optional, Dict, Union\nfrom datetime import datetime\nimport aiohttp\nimport asyncio\nfrom crazy_fu"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/scihub_source.py",
    "chars": 11941,
    "preview": "from pathlib import Path\nimport requests\nfrom bs4 import BeautifulSoup\nimport time\nfrom loguru import logger\nimport PyPD"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/scopus_source.py",
    "chars": 12171,
    "preview": "from typing import List, Optional, Dict, Union\nfrom datetime import datetime\nimport aiohttp\nimport random\nfrom .base_sou"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/semantic_source.py",
    "chars": 17793,
    "preview": "from typing import List, Optional\nfrom datetime import datetime\nfrom crazy_functions.review_fns.data_sources.base_source"
  },
  {
    "path": "crazy_functions/review_fns/data_sources/unpaywall_source.py",
    "chars": 1684,
    "preview": "import aiohttp\nfrom typing import List, Dict, Optional\nfrom datetime import datetime\nfrom .base_source import DataSource"
  },
  {
    "path": "crazy_functions/review_fns/handlers/base_handler.py",
    "chars": 15837,
    "preview": "import asyncio\nfrom datetime import datetime\nfrom abc import ABC, abstractmethod\nfrom typing import List, Dict, Any\nfrom"
  },
  {
    "path": "crazy_functions/review_fns/handlers/latest_handler.py",
    "chars": 3461,
    "preview": "from typing import List, Dict, Any\nfrom .base_handler import BaseHandler\nfrom crazy_functions.review_fns.query_analyzer "
  },
  {
    "path": "crazy_functions/review_fns/handlers/paper_handler.py",
    "chars": 11392,
    "preview": "from typing import List, Dict, Any, Optional, Tuple\nfrom .base_handler import BaseHandler\nfrom crazy_functions.review_fn"
  },
  {
    "path": "crazy_functions/review_fns/handlers/qa_handler.py",
    "chars": 5341,
    "preview": "from typing import List, Dict, Any\nfrom .base_handler import BaseHandler\nfrom crazy_functions.review_fns.query_analyzer "
  },
  {
    "path": "crazy_functions/review_fns/handlers/recommend_handler.py",
    "chars": 7388,
    "preview": "from typing import List, Dict, Any\nfrom .base_handler import BaseHandler\nfrom textwrap import dedent\nfrom crazy_function"
  },
  {
    "path": "crazy_functions/review_fns/handlers/review_handler.py",
    "chars": 7737,
    "preview": "from typing import List, Dict, Any, Tuple\nfrom .base_handler import BaseHandler\nfrom crazy_functions.review_fns.query_an"
  },
  {
    "path": "crazy_functions/review_fns/paper_processor/paper_llm_ranker.py",
    "chars": 17667,
    "preview": "from typing import List, Dict\nfrom crazy_functions.review_fns.data_sources.base_source import PaperMetadata\nfrom request"
  },
  {
    "path": "crazy_functions/review_fns/prompts/adsabs_prompts.py",
    "chars": 3153,
    "preview": "# ADS query optimization prompt\nADSABS_QUERY_PROMPT = \"\"\"Analyze and optimize the following query for NASA ADS search.\nI"
  },
  {
    "path": "crazy_functions/review_fns/prompts/arxiv_prompts.py",
    "chars": 11794,
    "preview": "# Basic type analysis prompt\nARXIV_TYPE_PROMPT = \"\"\"Analyze the research query and determine if arXiv search is needed a"
  },
  {
    "path": "crazy_functions/review_fns/prompts/crossref_prompts.py",
    "chars": 2315,
    "preview": "# Crossref query optimization prompt\nCROSSREF_QUERY_PROMPT = \"\"\"Analyze and optimize the query for Crossref search.\n\nQue"
  },
  {
    "path": "crazy_functions/review_fns/prompts/paper_prompts.py",
    "chars": 1458,
    "preview": "# 新建文件,添加论文识别提示\nPAPER_IDENTIFY_PROMPT = \"\"\"Analyze the query to identify paper details.\n\nQuery: {query}\n\nTask: Extract p"
  },
  {
    "path": "crazy_functions/review_fns/prompts/pubmed_prompts.py",
    "chars": 3673,
    "preview": "# PubMed search type prompt\nPUBMED_TYPE_PROMPT = \"\"\"Analyze the research query and determine the appropriate PubMed sear"
  },
  {
    "path": "crazy_functions/review_fns/prompts/semantic_prompts.py",
    "chars": 9312,
    "preview": "# Search type prompt\nSEMANTIC_TYPE_PROMPT = \"\"\"Determine the most appropriate search type for Semantic Scholar.\n\nQuery: "
  },
  {
    "path": "crazy_functions/review_fns/query_analyzer.py",
    "chars": 20433,
    "preview": "from typing import Dict, List\nfrom dataclasses import dataclass\nfrom textwrap import dedent\nfrom datetime import datetim"
  },
  {
    "path": "crazy_functions/review_fns/query_processor.py",
    "chars": 1940,
    "preview": "from typing import List, Dict, Any\nfrom .query_analyzer import QueryAnalyzer, SearchCriteria\nfrom .data_sources.arxiv_so"
  },
  {
    "path": "crazy_functions/vector_fns/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "crazy_functions/vector_fns/general_file_loader.py",
    "chars": 3262,
    "preview": "# From project chatglm-langchain\n\n\nfrom langchain.document_loaders import UnstructuredFileLoader\nfrom langchain.text_spl"
  },
  {
    "path": "crazy_functions/vector_fns/vector_database.py",
    "chars": 12613,
    "preview": "# From project chatglm-langchain\n\nimport os\nimport os\nimport uuid\nimport tqdm\nimport shutil\nimport threading\nimport nump"
  },
  {
    "path": "crazy_functions/vt_fns/vt_call_plugin.py",
    "chars": 6631,
    "preview": "from pydantic import BaseModel, Field\nfrom typing import List\nfrom toolbox import update_ui_latest_msg, disable_auto_pro"
  },
  {
    "path": "crazy_functions/vt_fns/vt_modify_config.py",
    "chars": 3712,
    "preview": "from pydantic import BaseModel, Field\nfrom typing import List\nfrom toolbox import update_ui_latest_msg, get_conf\nfrom re"
  },
  {
    "path": "crazy_functions/vt_fns/vt_state.py",
    "chars": 934,
    "preview": "import pickle\n\nclass VoidTerminalState():\n    def __init__(self):\n        self.reset_state()\n\n    def reset_state(self):"
  },
  {
    "path": "crazy_functions/word_dfa/dfa_algo.py",
    "chars": 170381,
    "preview": "text = \"\"\"\nLarge language models (LLMs) have demonstrated remarkable potential in solving complex tasks across diverse d"
  },
  {
    "path": "crazy_functions/高级功能函数模板.py",
    "chars": 6604,
    "preview": "from toolbox import CatchException, update_ui\nfrom crazy_functions.crazy_utils import request_gpt_model_in_new_thread_wi"
  },
  {
    "path": "docker-compose.yml",
    "chars": 14010,
    "preview": "## ===================================================\n#                docker-compose.yml\n## =========================="
  },
  {
    "path": "docs/DOCUMENTATION_PLAN.md",
    "chars": 36933,
    "preview": "# GPT Academic 文档撰写规划\n\n> **文档版本**: v1.0\n> **创建日期**: 2025-01-09\n> **首要目标**: 教导用户如何配置和使用 GPT Academic\n\n---\n\n## 一、文档设计原则\n\n#"
  },
  {
    "path": "docs/GithubAction+AllCapacity",
    "chars": 2391,
    "preview": "# 此Dockerfile适用于“无本地模型”的迷你运行环境构建\n# 如果需要使用chatglm等本地模型或者latex运行依赖,请参考 docker-compose.yml\n# - 如何构建: 先修改 `config.py`, 然后 `d"
  },
  {
    "path": "docs/GithubAction+ChatGLM+Moss",
    "chars": 1222,
    "preview": "\n# 从NVIDIA源,从而支持显卡运损(检查宿主的nvidia-smi中的cuda版本必须>=11.3)\nFROM nvidia/cuda:11.3.1-runtime-ubuntu20.04\nRUN apt-get update\nRUN"
  },
  {
    "path": "docs/GithubAction+JittorLLMs",
    "chars": 1290,
    "preview": "# 从NVIDIA源,从而支持显卡运损(检查宿主的nvidia-smi中的cuda版本必须>=11.3)\nFROM nvidia/cuda:11.3.1-runtime-ubuntu20.04\nARG useProxyNetwork=''\n"
  },
  {
    "path": "docs/GithubAction+NoLocal",
    "chars": 1346,
    "preview": "# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM\n# 如何构建: 先修改 `config.py`, 然后 docker build -t "
  },
  {
    "path": "docs/GithubAction+NoLocal+AudioAssistant",
    "chars": 1647,
    "preview": "# 此Dockerfile适用于“无本地模型”的迷你运行环境构建\n# 如果需要使用chatglm等本地模型或者latex运行依赖,请参考 docker-compose.yml\n# - 如何构建: 先修改 `config.py`, 然后 `d"
  },
  {
    "path": "docs/GithubAction+NoLocal+Latex",
    "chars": 1135,
    "preview": "# 此Dockerfile适用于\"无本地模型\"的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM\n# - 1 修改 `config.py`\n# - 2 构建 docker build -"
  },
  {
    "path": "docs/GithubAction+NoLocal+Vectordb",
    "chars": 1965,
    "preview": "# 此Dockerfile适用于“无本地模型”的迷你运行环境构建\n# 如果需要使用chatglm等本地模型或者latex运行依赖,请参考 docker-compose.yml\n# - 如何构建: 先修改 `config.py`, 然后 `d"
  },
  {
    "path": "docs/README.Arabic.md",
    "chars": 20704,
    "preview": "\n\n\n> **ملحوظة**\n>\n> تمت ترجمة هذا الملف README باستخدام GPT (بواسطة المكون الإضافي لهذا المشروع) وقد لا تكون الترجمة 100"
  },
  {
    "path": "docs/README.English.md",
    "chars": 23992,
    "preview": "\n\n\n> **Note**\n>\n> This README was translated by GPT (implemented by the plugin of this project) and may not be 100% reli"
  },
  {
    "path": "docs/README.French.md",
    "chars": 24406,
    "preview": "\n\n\n> **Remarque**\n>\n> Ce README a été traduit par GPT (implémenté par le plugin de ce projet) et n'est pas fiable à 100 "
  },
  {
    "path": "docs/README.German.md",
    "chars": 25440,
    "preview": "\n\n\n> **Hinweis**\n>\n> Dieses README wurde mithilfe der GPT-Übersetzung (durch das Plugin dieses Projekts) erstellt und is"
  }
]

// ... and 218 more files (download for full content)

About this extraction

This page contains the full source code of the binary-husky/gpt_academic GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 418 files (6.9 MB), approximately 1.8M tokens, and a symbol index with 2251 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.

Copied to clipboard!