Repository: nomic-ai/gpt4all
Branch: main
Commit: b666d16db5ae
Files: 307
Total size: 3.0 MB
Directory structure:
gitextract_o3o4fycr/
├── .circleci/
│ ├── config.yml
│ ├── continue_config.yml
│ └── grab_notary_id.py
├── .codespellrc
├── .github/
│ ├── ISSUE_TEMPLATE/
│ │ ├── bindings-bug.md
│ │ ├── chat-bug.md
│ │ ├── config.yml
│ │ ├── documentation.md
│ │ ├── feature-request.md
│ │ └── other-bug.md
│ ├── pull_request_template.md
│ └── workflows/
│ ├── close_issues.yml
│ └── codespell.yml
├── .gitignore
├── .gitmodules
├── CONTRIBUTING.md
├── LICENSE.txt
├── MAINTAINERS.md
├── README.md
├── common/
│ └── common.cmake
├── gpt4all-backend/
│ ├── CMakeLists.txt
│ ├── README.md
│ ├── include/
│ │ └── gpt4all-backend/
│ │ ├── llmodel.h
│ │ ├── llmodel_c.h
│ │ └── sysinfo.h
│ ├── llama.cpp.cmake
│ └── src/
│ ├── dlhandle.cpp
│ ├── dlhandle.h
│ ├── llamamodel.cpp
│ ├── llamamodel_impl.h
│ ├── llmodel.cpp
│ ├── llmodel_c.cpp
│ ├── llmodel_shared.cpp
│ └── utils.h
├── gpt4all-bindings/
│ ├── README.md
│ ├── cli/
│ │ ├── README.md
│ │ ├── app.py
│ │ └── developer_notes.md
│ ├── python/
│ │ ├── .gitignore
│ │ ├── .isort.cfg
│ │ ├── CHANGELOG.md
│ │ ├── LICENSE.txt
│ │ ├── MANIFEST.in
│ │ ├── README.md
│ │ ├── docs/
│ │ │ ├── css/
│ │ │ │ └── custom.css
│ │ │ ├── gpt4all_api_server/
│ │ │ │ └── home.md
│ │ │ ├── gpt4all_desktop/
│ │ │ │ ├── chat_templates.md
│ │ │ │ ├── chats.md
│ │ │ │ ├── cookbook/
│ │ │ │ │ ├── use-local-ai-models-to-privately-chat-with-Obsidian.md
│ │ │ │ │ ├── use-local-ai-models-to-privately-chat-with-One-Drive.md
│ │ │ │ │ ├── use-local-ai-models-to-privately-chat-with-google-drive.md
│ │ │ │ │ └── use-local-ai-models-to-privately-chat-with-microsoft-excel.md
│ │ │ │ ├── localdocs.md
│ │ │ │ ├── models.md
│ │ │ │ ├── quickstart.md
│ │ │ │ └── settings.md
│ │ │ ├── gpt4all_help/
│ │ │ │ ├── faq.md
│ │ │ │ └── troubleshooting.md
│ │ │ ├── gpt4all_python/
│ │ │ │ ├── home.md
│ │ │ │ ├── monitoring.md
│ │ │ │ └── ref.md
│ │ │ ├── index.md
│ │ │ └── old/
│ │ │ ├── gpt4all_chat.md
│ │ │ ├── gpt4all_cli.md
│ │ │ ├── gpt4all_faq.md
│ │ │ ├── gpt4all_monitoring.md
│ │ │ ├── gpt4all_nodejs.md
│ │ │ ├── gpt4all_python.md
│ │ │ ├── gpt4all_python_embedding.md
│ │ │ └── index.md
│ │ ├── gpt4all/
│ │ │ ├── __init__.py
│ │ │ ├── _pyllmodel.py
│ │ │ ├── gpt4all.py
│ │ │ └── tests/
│ │ │ ├── __init__.py
│ │ │ ├── test_embed_timings.py
│ │ │ └── test_gpt4all.py
│ │ ├── makefile
│ │ ├── mkdocs.yml
│ │ └── setup.py
│ └── typescript/
│ ├── .clang-format
│ ├── .gitignore
│ ├── .npmignore
│ ├── .yarnrc.yml
│ ├── README.md
│ ├── binding.ci.gyp
│ ├── binding.gyp
│ ├── index.cc
│ ├── index.h
│ ├── package.json
│ ├── prompt.cc
│ ├── prompt.h
│ ├── scripts/
│ │ ├── build.js
│ │ ├── docs.js
│ │ ├── mkclangd.js
│ │ └── prebuild.js
│ ├── spec/
│ │ ├── callbacks.mjs
│ │ ├── chat-memory.mjs
│ │ ├── chat-minimal.mjs
│ │ ├── concurrency.mjs
│ │ ├── embed-jsonl.mjs
│ │ ├── embed.mjs
│ │ ├── llmodel.mjs
│ │ ├── long-context.mjs
│ │ ├── model-switching.mjs
│ │ ├── stateless.mjs
│ │ ├── streaming.mjs
│ │ └── system.mjs
│ ├── src/
│ │ ├── chat-session.js
│ │ ├── config.js
│ │ ├── gpt4all.d.ts
│ │ ├── gpt4all.js
│ │ ├── models.js
│ │ └── util.js
│ └── test/
│ ├── gpt4all.test.js
│ └── models.json
├── gpt4all-chat/
│ ├── .flake8
│ ├── CHANGELOG.md
│ ├── CMakeLists.txt
│ ├── LICENSE
│ ├── cmake/
│ │ ├── Modules/
│ │ │ ├── SignMacOSBinaries.cmake
│ │ │ └── SignWindowsBinaries.cmake
│ │ ├── cpack-steal-config.cmake.in
│ │ ├── cpack_config.cmake
│ │ ├── deploy-qt-linux.cmake.in
│ │ ├── deploy-qt-mac.cmake.in
│ │ ├── deploy-qt-windows.cmake.in
│ │ ├── download_model.cmake
│ │ ├── installer_control.qs
│ │ ├── installer_gpt4all_component.qs
│ │ ├── installer_maintenancetool_component.qs
│ │ └── sign_dmg.py
│ ├── contributing_translations.md
│ ├── deps/
│ │ └── CMakeLists.txt
│ ├── dev-requirements.txt
│ ├── flatpak-manifest/
│ │ ├── io.gpt4all.gpt4all.appdata.xml
│ │ └── io.gpt4all.gpt4all.desktop
│ ├── main.qml
│ ├── metadata/
│ │ ├── latestnews.md
│ │ ├── models.json
│ │ ├── models2.json
│ │ ├── models3.json
│ │ └── release.json
│ ├── pyproject.toml
│ ├── qa_checklist.md
│ ├── qml/
│ │ ├── AddCollectionView.qml
│ │ ├── AddGPT4AllModelView.qml
│ │ ├── AddHFModelView.qml
│ │ ├── AddModelView.qml
│ │ ├── AddRemoteModelView.qml
│ │ ├── ApplicationSettings.qml
│ │ ├── ChatCollapsibleItem.qml
│ │ ├── ChatDrawer.qml
│ │ ├── ChatItemView.qml
│ │ ├── ChatMessageButton.qml
│ │ ├── ChatTextItem.qml
│ │ ├── ChatView.qml
│ │ ├── CollectionsDrawer.qml
│ │ ├── ConfirmationDialog.qml
│ │ ├── HomeView.qml
│ │ ├── LocalDocsSettings.qml
│ │ ├── LocalDocsView.qml
│ │ ├── ModelSettings.qml
│ │ ├── ModelsView.qml
│ │ ├── MyBusyIndicator.qml
│ │ ├── MyButton.qml
│ │ ├── MyCheckBox.qml
│ │ ├── MyComboBox.qml
│ │ ├── MyDialog.qml
│ │ ├── MyDirectoryField.qml
│ │ ├── MyFancyLink.qml
│ │ ├── MyFileDialog.qml
│ │ ├── MyFileIcon.qml
│ │ ├── MyFolderDialog.qml
│ │ ├── MyMenu.qml
│ │ ├── MyMenuItem.qml
│ │ ├── MyMiniButton.qml
│ │ ├── MySettingsButton.qml
│ │ ├── MySettingsDestructiveButton.qml
│ │ ├── MySettingsLabel.qml
│ │ ├── MySettingsStack.qml
│ │ ├── MySettingsTab.qml
│ │ ├── MySlug.qml
│ │ ├── MyTabButton.qml
│ │ ├── MyTextArea.qml
│ │ ├── MyTextButton.qml
│ │ ├── MyTextField.qml
│ │ ├── MyToolButton.qml
│ │ ├── MyWelcomeButton.qml
│ │ ├── NetworkDialog.qml
│ │ ├── NewVersionDialog.qml
│ │ ├── PopupDialog.qml
│ │ ├── RemoteModelCard.qml
│ │ ├── SettingsView.qml
│ │ ├── StartupDialog.qml
│ │ ├── Theme.qml
│ │ ├── ThumbsDownDialog.qml
│ │ ├── Toast.qml
│ │ └── ToastManager.qml
│ ├── resources/
│ │ ├── gpt4all.icns
│ │ └── gpt4all.rc
│ ├── src/
│ │ ├── chat.cpp
│ │ ├── chat.h
│ │ ├── chatapi.cpp
│ │ ├── chatapi.h
│ │ ├── chatlistmodel.cpp
│ │ ├── chatlistmodel.h
│ │ ├── chatllm.cpp
│ │ ├── chatllm.h
│ │ ├── chatmodel.cpp
│ │ ├── chatmodel.h
│ │ ├── chatviewtextprocessor.cpp
│ │ ├── chatviewtextprocessor.h
│ │ ├── codeinterpreter.cpp
│ │ ├── codeinterpreter.h
│ │ ├── config.h.in
│ │ ├── database.cpp
│ │ ├── database.h
│ │ ├── download.cpp
│ │ ├── download.h
│ │ ├── embllm.cpp
│ │ ├── embllm.h
│ │ ├── jinja_helpers.cpp
│ │ ├── jinja_helpers.h
│ │ ├── jinja_replacements.cpp
│ │ ├── jinja_replacements.h
│ │ ├── llm.cpp
│ │ ├── llm.h
│ │ ├── localdocs.cpp
│ │ ├── localdocs.h
│ │ ├── localdocsmodel.cpp
│ │ ├── localdocsmodel.h
│ │ ├── logger.cpp
│ │ ├── logger.h
│ │ ├── macosdock.h
│ │ ├── macosdock.mm
│ │ ├── main.cpp
│ │ ├── modellist.cpp
│ │ ├── modellist.h
│ │ ├── mysettings.cpp
│ │ ├── mysettings.h
│ │ ├── network.cpp
│ │ ├── network.h
│ │ ├── server.cpp
│ │ ├── server.h
│ │ ├── tool.cpp
│ │ ├── tool.h
│ │ ├── toolcallparser.cpp
│ │ ├── toolcallparser.h
│ │ ├── toolmodel.cpp
│ │ ├── toolmodel.h
│ │ ├── utils.h
│ │ ├── utils.inl
│ │ ├── xlsxtomd.cpp
│ │ └── xlsxtomd.h
│ ├── system_requirements.md
│ ├── test-requirements.txt
│ ├── tests/
│ │ ├── CMakeLists.txt
│ │ ├── cpp/
│ │ │ ├── basic_test.cpp
│ │ │ └── test_main.cpp
│ │ └── python/
│ │ ├── __init__.py
│ │ ├── config.py.in
│ │ └── test_server_api.py
│ └── translations/
│ ├── gpt4all_en_US.ts
│ ├── gpt4all_es_MX.ts
│ ├── gpt4all_it_IT.ts
│ ├── gpt4all_pt_BR.ts
│ ├── gpt4all_ro_RO.ts
│ ├── gpt4all_zh_CN.ts
│ └── gpt4all_zh_TW.ts
├── gpt4all-training/
│ ├── GPT-J_MAP.md
│ ├── README.md
│ ├── TRAINING_LOG.md
│ ├── clean.py
│ ├── configs/
│ │ ├── deepspeed/
│ │ │ ├── ds_config.json
│ │ │ ├── ds_config_gptj.json
│ │ │ ├── ds_config_gptj_lora.json
│ │ │ ├── ds_config_mpt.json
│ │ │ └── ds_config_pythia.json
│ │ ├── eval/
│ │ │ ├── generate_baseline.yaml
│ │ │ ├── generate_gpt4all_gptj.yaml
│ │ │ ├── generate_gpt4all_gptj_lora.yaml
│ │ │ └── generate_gpt4all_llama_lora.yaml
│ │ ├── generate/
│ │ │ ├── generate.yaml
│ │ │ ├── generate_gptj.yaml
│ │ │ ├── generate_gptj_lora.yaml
│ │ │ └── generate_llama.yaml
│ │ ├── inference/
│ │ │ └── gptj.yaml
│ │ └── train/
│ │ ├── finetune.yaml
│ │ ├── finetune_falcon.yaml
│ │ ├── finetune_gptj.yaml
│ │ ├── finetune_gptj_lora.yaml
│ │ ├── finetune_lora.yaml
│ │ ├── finetune_mpt.yaml
│ │ └── finetune_openllama.yaml
│ ├── create_hostname.sh
│ ├── data.py
│ ├── env.yaml
│ ├── eval_figures.py
│ ├── eval_self_instruct.py
│ ├── generate.py
│ ├── inference.py
│ ├── launcher.sh
│ ├── old-README.md
│ ├── read.py
│ ├── requirements.txt
│ └── train.py
└── roadmap.md
================================================
FILE CONTENTS
================================================
================================================
FILE: .circleci/config.yml
================================================
version: 2.1
setup: true
orbs:
path-filtering: circleci/path-filtering@1.3.0
workflows:
version: 2.1
generate-config:
jobs:
- path-filtering/filter:
filters:
tags:
only:
- /.*/
base-revision: main
config-path: .circleci/continue_config.yml
mapping: |
.circleci/.* run-all-workflows true
gpt4all-backend/.* run-all-workflows true
gpt4all-bindings/python/.* run-python-workflow true
gpt4all-bindings/typescript/.* run-ts-workflow true
gpt4all-chat/.* run-chat-workflow true
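The setup workflow above uses the `circleci/path-filtering` orb: each `mapping` row is a `<path regex> <parameter> <value>` triple, matched against the files changed since `base-revision`, and every matching row sets the named pipeline parameter before `continue_config.yml` runs. A minimal Python sketch of that matching logic (illustrative only; the orb's actual implementation differs in details such as regex anchoring and parameter serialization):

```python
import re

# Mapping rows copied from .circleci/config.yml: (path regex, parameter, value).
MAPPING = [
    (r".circleci/.*", "run-all-workflows", True),
    (r"gpt4all-backend/.*", "run-all-workflows", True),
    (r"gpt4all-bindings/python/.*", "run-python-workflow", True),
    (r"gpt4all-bindings/typescript/.*", "run-ts-workflow", True),
    (r"gpt4all-chat/.*", "run-chat-workflow", True),
]

def parameters_for(changed_files):
    """Return the pipeline parameters a given set of changed paths would enable."""
    params = {}
    for path in changed_files:
        for pattern, name, value in MAPPING:
            # A row applies if the whole path matches its regex.
            if re.fullmatch(pattern, path):
                params[name] = value
    return params
```

For example, a commit touching only `gpt4all-chat/src/chat.cpp` would enable just `run-chat-workflow`, while any change under `gpt4all-backend/` enables `run-all-workflows`; files matching no row (e.g. `README.md`) trigger nothing, which is why `continue_config.yml` defaults every parameter to `false`.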
================================================
FILE: .circleci/continue_config.yml
================================================
version: 2.1
orbs:
win: circleci/windows@5.0
python: circleci/python@1.2
node: circleci/node@5.1
parameters:
run-all-workflows:
type: boolean
default: false
run-python-workflow:
type: boolean
default: false
run-chat-workflow:
type: boolean
default: false
run-ts-workflow:
type: boolean
default: false
job-macos-executor: &job-macos-executor
macos:
xcode: 16.2.0
resource_class: macos.m1.medium.gen1
environment:
HOMEBREW_NO_AUTO_UPDATE: 1
job-macos-install-deps: &job-macos-install-deps
name: Install basic macOS build dependencies
command: brew install ccache llvm wget
job-linux-install-chat-deps: &job-linux-install-chat-deps
name: Install Linux build dependencies for gpt4all-chat
command: |
# Prevent apt-get from interactively prompting for service restart
echo "\$nrconf{restart} = 'l'" | sudo tee /etc/needrestart/conf.d/90-autorestart.conf >/dev/null
wget -qO- 'https://apt.llvm.org/llvm-snapshot.gpg.key' | sudo tee /etc/apt/trusted.gpg.d/apt.llvm.org.asc >/dev/null
sudo add-apt-repository -yn 'deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy-19 main'
wget -qO- "https://packages.lunarg.com/lunarg-signing-key-pub.asc" \
| sudo tee /etc/apt/trusted.gpg.d/lunarg.asc >/dev/null
wget -qO- "https://packages.lunarg.com/vulkan/1.3.290/lunarg-vulkan-1.3.290-jammy.list" \
| sudo tee /etc/apt/sources.list.d/lunarg-vulkan-1.3.290-jammy.list >/dev/null
wget "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb"
sudo dpkg -i cuda-keyring_1.1-1_all.deb
packages=(
bison build-essential ccache clang-19 clang-tools-19 cuda-compiler-11-8 flex gperf libcublas-dev-11-8
libfontconfig1 libfreetype6 libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5
libstdc++-12-dev libwayland-dev libx11-6 libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0
libxcb-keysyms1 libxcb-randr0 libxcb-render-util0 libxcb-shape0 libxcb-shm0 libxcb-sync1 libxcb-util1
libxcb-xfixes0 libxcb-xinerama0 libxcb-xkb1 libxcb1 libxext6 libxfixes3 libxi6 libxkbcommon-dev libxkbcommon-x11-0
      libxrender1 patchelf python3 vulkan-sdk
)
sudo apt-get update
sudo apt-get install -y "${packages[@]}"
wget "https://qt.mirror.constant.com/archive/online_installers/4.8/qt-online-installer-linux-x64-4.8.1.run"
chmod +x qt-online-installer-linux-x64-4.8.1.run
./qt-online-installer-linux-x64-4.8.1.run --no-force-installations --no-default-installations \
--no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations \
--email "$QT_EMAIL" --password "$QT_PASSWORD" install \
qt.tools.cmake qt.tools.ifw.48 qt.tools.ninja qt.qt6.682.linux_gcc_64 qt.qt6.682.addons.qt5compat \
qt.qt6.682.debug_info extensions.qtpdf.682 qt.qt6.682.addons.qthttpserver
job-linux-install-backend-deps: &job-linux-install-backend-deps
name: Install Linux build dependencies for gpt4all-backend
command: |
wget -qO- 'https://apt.llvm.org/llvm-snapshot.gpg.key' | sudo tee /etc/apt/trusted.gpg.d/apt.llvm.org.asc >/dev/null
sudo add-apt-repository -yn 'deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy-19 main'
wget -qO- "https://packages.lunarg.com/lunarg-signing-key-pub.asc" \
| sudo tee /etc/apt/trusted.gpg.d/lunarg.asc >/dev/null
wget -qO- "https://packages.lunarg.com/vulkan/1.3.290/lunarg-vulkan-1.3.290-jammy.list" \
| sudo tee /etc/apt/sources.list.d/lunarg-vulkan-1.3.290-jammy.list >/dev/null
wget "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb"
sudo dpkg -i cuda-keyring_1.1-1_all.deb
packages=(
build-essential ccache clang-19 clang-tools-19 cuda-compiler-11-8 libcublas-dev-11-8
libnvidia-compute-550-server libstdc++-12-dev vulkan-sdk
)
sudo apt-get update
sudo apt-get install -y "${packages[@]}"
pyenv global 3.13.2
pip install setuptools wheel cmake ninja
jobs:
# work around CircleCI-Public/path-filtering-orb#20
noop:
docker:
- image: cimg/base:stable
steps:
- run: "true"
validate-commit-on-main:
docker:
- image: cimg/base:stable
steps:
- checkout
- run:
name: Verify that commit is on the main branch
command: git merge-base --is-ancestor HEAD main
build-offline-chat-installer-macos:
<<: *job-macos-executor
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-macos-
- run:
<<: *job-macos-install-deps
- run:
name: Install Rosetta
command: softwareupdate --install-rosetta --agree-to-license # needed for QtIFW
- run:
name: Installing Qt
command: |
wget "https://qt.mirror.constant.com/archive/online_installers/4.8/qt-online-installer-macOS-x64-4.8.1.dmg"
hdiutil attach qt-online-installer-macOS-x64-4.8.1.dmg
/Volumes/qt-online-installer-macOS-x64-4.8.1/qt-online-installer-macOS-x64-4.8.1.app/Contents/MacOS/qt-online-installer-macOS-x64-4.8.1 \
--no-force-installations --no-default-installations --no-size-checking --default-answer \
--accept-licenses --confirm-command --accept-obligations --email "$QT_EMAIL" --password "$QT_PASSWORD" \
install \
qt.tools.cmake qt.tools.ifw.48 qt.tools.ninja qt.qt6.682.clang_64 qt.qt6.682.addons.qt5compat \
extensions.qtpdf.682 qt.qt6.682.addons.qthttpserver
hdiutil detach /Volumes/qt-online-installer-macOS-x64-4.8.1
- run:
name: Setup Keychain
command: |
echo $MAC_SIGNING_CERT | base64 --decode > cert.p12
security create-keychain -p "$MAC_KEYCHAIN_KEY" sign.keychain
security default-keychain -s sign.keychain
security unlock-keychain -p "$MAC_KEYCHAIN_KEY" sign.keychain
security import cert.p12 -k sign.keychain -P "$MAC_SIGNING_CERT_PWD" -T /usr/bin/codesign
security set-key-partition-list -S apple-tool:,apple:,codesign: -s -k "$MAC_KEYCHAIN_KEY" sign.keychain
- run:
name: Build
no_output_timeout: 30m
command: |
ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
mkdir build
cd build
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.8/bin
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
-S ../gpt4all-chat -B . -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH:PATH=~/Qt/6.8.2/macos/lib/cmake \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
-DCMAKE_C_COMPILER=/opt/homebrew/opt/llvm/bin/clang \
-DCMAKE_CXX_COMPILER=/opt/homebrew/opt/llvm/bin/clang++ \
-DCMAKE_RANLIB=/usr/bin/ranlib \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
-DGGML_METAL_MACOSX_VERSION_MIN=12.6 \
-DMACDEPLOYQT=~/Qt/6.8.2/macos/bin/macdeployqt \
-DGPT4ALL_OFFLINE_INSTALLER=ON \
-DGPT4ALL_SIGN_INSTALL=ON \
-DGPT4ALL_GEN_CPACK_CONFIG=ON
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target package
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake . -DGPT4ALL_GEN_CPACK_CONFIG=OFF
# The 'install' step here *should* be completely unnecessary. There is absolutely no reason we should have
# to copy all of the build artifacts to an output directory that we do not use (because we package GPT4All
# as an installer instead).
# However, because of the way signing is implemented in the cmake script, the *source* files are signed at
# install time instead of the *installed* files. This side effect is the *only* way libraries that are not
# processed by macdeployqt, such as libllmodel.so, get signed (at least, with -DBUILD_UNIVERSAL=ON).
# Also, we have to run this as a *separate* step. Telling cmake to run both targets in one command causes it
# to execute them in parallel, since it is not aware of the dependency of the package target on the install
# target.
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target install
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target package
ccache -s
mkdir upload
cp gpt4all-installer-* upload
# persist the unsigned installer
- store_artifacts:
path: build/upload
- save_cache:
key: ccache-gpt4all-macos-{{ epoch }}
when: always
paths:
- ../.ccache
# add workspace so signing jobs can connect & obtain dmg
- persist_to_workspace:
root: build
# specify path to only include components we want to persist
          # across builds
paths:
- upload
sign-offline-chat-installer-macos:
<<: *job-macos-executor
steps:
- checkout
# attach to a workspace containing unsigned dmg
- attach_workspace:
at: build
- run:
name: "Setup Keychain"
command: |
echo $MAC_SIGNING_CERT | base64 --decode > cert.p12
security create-keychain -p "$MAC_KEYCHAIN_KEY" sign.keychain
security default-keychain -s sign.keychain
security unlock-keychain -p "$MAC_KEYCHAIN_KEY" sign.keychain
security import cert.p12 -k sign.keychain -P "$MAC_SIGNING_CERT_PWD" -T /usr/bin/codesign
security set-key-partition-list -S apple-tool:,apple:,codesign: -s -k "$MAC_KEYCHAIN_KEY" sign.keychain
rm cert.p12
- run:
name: "Sign App Bundle"
command: |
python3 -m pip install click
python3 gpt4all-chat/cmake/sign_dmg.py --input-dmg build/upload/gpt4all-installer-darwin.dmg --output-dmg build/upload/gpt4all-installer-darwin-signed.dmg --signing-identity "$MAC_SIGNING_CERT_NAME"
- run:
name: "Sign DMG"
command: |
codesign --options runtime --timestamp -s "$MAC_SIGNING_CERT_NAME" build/upload/gpt4all-installer-darwin-signed.dmg
# add workspace so signing jobs can connect & obtain dmg
- persist_to_workspace:
root: build
# specify path to only include components we want to persist
          # across builds
paths:
- upload
notarize-offline-chat-installer-macos:
<<: *job-macos-executor
steps:
- checkout
- attach_workspace:
at: build
- run:
name: "Notarize"
command: |
xcrun notarytool submit build/upload/gpt4all-installer-darwin-signed.dmg --apple-id "$MAC_NOTARIZATION_ID" --team-id "$MAC_NOTARIZATION_TID" --password "$MAC_NOTARIZATION_KEY" --wait | tee notarize_log.txt
- run:
name: "Report Notarization Failure"
command: |
NID=`python3 .circleci/grab_notary_id.py notarize_log.txt` && export NID
xcrun notarytool log $NID --keychain-profile "notary-profile"
exit 1
when: on_fail
- run:
name: "Staple"
command: |
xcrun stapler staple build/upload/gpt4all-installer-darwin-signed.dmg
- store_artifacts:
path: build/upload
- run:
name: Install Rosetta
command: softwareupdate --install-rosetta --agree-to-license # needed for QtIFW
- run:
name: Test installation and verify that it is signed
command: |
set -e
hdiutil attach build/upload/gpt4all-installer-darwin-signed.dmg
codesign --verify --deep --verbose /Volumes/gpt4all-installer-darwin/gpt4all-installer-darwin.app
/Volumes/gpt4all-installer-darwin/gpt4all-installer-darwin.app/Contents/MacOS/gpt4all-installer-darwin \
--no-size-checking --default-answer --accept-licenses --confirm-command \
install gpt4all
codesign --verify --deep --verbose /Applications/gpt4all/bin/gpt4all.app
codesign --verify --deep --verbose /Applications/gpt4all/maintenancetool.app
hdiutil detach /Volumes/gpt4all-installer-darwin
build-online-chat-installer-macos:
<<: *job-macos-executor
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-macos-
- run:
<<: *job-macos-install-deps
- run:
name: Install Rosetta
command: softwareupdate --install-rosetta --agree-to-license # needed for QtIFW
- run:
name: Installing Qt
command: |
wget "https://qt.mirror.constant.com/archive/online_installers/4.8/qt-online-installer-macOS-x64-4.8.1.dmg"
hdiutil attach qt-online-installer-macOS-x64-4.8.1.dmg
/Volumes/qt-online-installer-macOS-x64-4.8.1/qt-online-installer-macOS-x64-4.8.1.app/Contents/MacOS/qt-online-installer-macOS-x64-4.8.1 \
--no-force-installations --no-default-installations --no-size-checking --default-answer \
--accept-licenses --confirm-command --accept-obligations --email "$QT_EMAIL" --password "$QT_PASSWORD" \
install \
qt.tools.cmake qt.tools.ifw.48 qt.tools.ninja qt.qt6.682.clang_64 qt.qt6.682.addons.qt5compat \
extensions.qtpdf.682 qt.qt6.682.addons.qthttpserver
hdiutil detach /Volumes/qt-online-installer-macOS-x64-4.8.1
- run:
name: Setup Keychain
command: |
echo $MAC_SIGNING_CERT | base64 --decode > cert.p12
security create-keychain -p "$MAC_KEYCHAIN_KEY" sign.keychain
security default-keychain -s sign.keychain
security unlock-keychain -p "$MAC_KEYCHAIN_KEY" sign.keychain
security import cert.p12 -k sign.keychain -P "$MAC_SIGNING_CERT_PWD" -T /usr/bin/codesign
security set-key-partition-list -S apple-tool:,apple:,codesign: -s -k "$MAC_KEYCHAIN_KEY" sign.keychain
- run:
name: Build
no_output_timeout: 30m
command: |
ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
mkdir build
cd build
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.8/bin
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
-S ../gpt4all-chat -B . -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH:PATH=~/Qt/6.8.2/macos/lib/cmake \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
-DCMAKE_C_COMPILER=/opt/homebrew/opt/llvm/bin/clang \
-DCMAKE_CXX_COMPILER=/opt/homebrew/opt/llvm/bin/clang++ \
-DCMAKE_RANLIB=/usr/bin/ranlib \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
-DGGML_METAL_MACOSX_VERSION_MIN=12.6 \
-DMACDEPLOYQT=~/Qt/6.8.2/macos/bin/macdeployqt \
-DGPT4ALL_OFFLINE_INSTALLER=OFF \
-DGPT4ALL_SIGN_INSTALL=ON \
-DGPT4ALL_GEN_CPACK_CONFIG=ON
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target package
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake . -DGPT4ALL_GEN_CPACK_CONFIG=OFF
# See comment above related to the 'install' target.
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target install
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target package
ccache -s
mkdir upload
cp gpt4all-installer-* upload
tar -cvzf upload/repository.tar.gz -C _CPack_Packages/Darwin/IFW/gpt4all-installer-darwin repository
# persist the unsigned installer
- store_artifacts:
path: build/upload
- save_cache:
key: ccache-gpt4all-macos-{{ epoch }}
when: always
paths:
- ../.ccache
# add workspace so signing jobs can connect & obtain dmg
- persist_to_workspace:
root: build
# specify path to only include components we want to persist
          # across builds
paths:
- upload
sign-online-chat-installer-macos:
<<: *job-macos-executor
steps:
- checkout
# attach to a workspace containing unsigned dmg
- attach_workspace:
at: build
- run:
name: "Setup Keychain"
command: |
echo $MAC_SIGNING_CERT | base64 --decode > cert.p12
security create-keychain -p "$MAC_KEYCHAIN_KEY" sign.keychain
security default-keychain -s sign.keychain
security unlock-keychain -p "$MAC_KEYCHAIN_KEY" sign.keychain
security import cert.p12 -k sign.keychain -P "$MAC_SIGNING_CERT_PWD" -T /usr/bin/codesign
security set-key-partition-list -S apple-tool:,apple:,codesign: -s -k "$MAC_KEYCHAIN_KEY" sign.keychain
rm cert.p12
- run:
name: "Sign App Bundle"
command: |
python3 -m pip install click
python3 gpt4all-chat/cmake/sign_dmg.py --input-dmg build/upload/gpt4all-installer-darwin.dmg --output-dmg build/upload/gpt4all-installer-darwin-signed.dmg --signing-identity "$MAC_SIGNING_CERT_NAME"
- run:
name: "Sign DMG"
command: |
codesign --options runtime --timestamp -s "$MAC_SIGNING_CERT_NAME" build/upload/gpt4all-installer-darwin-signed.dmg
# add workspace so signing jobs can connect & obtain dmg
- persist_to_workspace:
root: build
# specify path to only include components we want to persist
          # across builds
paths:
- upload
notarize-online-chat-installer-macos:
<<: *job-macos-executor
steps:
- checkout
- attach_workspace:
at: build
- run:
name: "Notarize"
command: |
xcrun notarytool submit build/upload/gpt4all-installer-darwin-signed.dmg --apple-id "$MAC_NOTARIZATION_ID" --team-id "$MAC_NOTARIZATION_TID" --password "$MAC_NOTARIZATION_KEY" --wait | tee notarize_log.txt
- run:
name: "Report Notarization Failure"
command: |
NID=`python3 .circleci/grab_notary_id.py notarize_log.txt` && export NID
xcrun notarytool log $NID --keychain-profile "notary-profile"
exit 1
when: on_fail
- run:
name: "Staple"
command: |
xcrun stapler staple build/upload/gpt4all-installer-darwin-signed.dmg
- store_artifacts:
path: build/upload
- run:
name: Install Rosetta
command: softwareupdate --install-rosetta --agree-to-license # needed for QtIFW
- run:
name: Test installation and verify that it is signed
command: |
set -e
hdiutil attach build/upload/gpt4all-installer-darwin-signed.dmg
codesign --verify --deep --verbose /Volumes/gpt4all-installer-darwin/gpt4all-installer-darwin.app
tar -xf build/upload/repository.tar.gz
/Volumes/gpt4all-installer-darwin/gpt4all-installer-darwin.app/Contents/MacOS/gpt4all-installer-darwin \
--no-size-checking --default-answer --accept-licenses --confirm-command --set-temp-repository repository \
install gpt4all
codesign --verify --deep --verbose /Applications/gpt4all/bin/gpt4all.app
codesign --verify --deep --verbose /Applications/gpt4all/maintenancetool.app
hdiutil detach /Volumes/gpt4all-installer-darwin
build-offline-chat-installer-linux:
machine:
image: ubuntu-2204:current
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-linux-amd64-
- run:
<<: *job-linux-install-chat-deps
- run:
name: Build linuxdeployqt
command: |
git clone https://github.com/nomic-ai/linuxdeployqt
cd linuxdeployqt && qmake && sudo make install
- run:
name: Build
no_output_timeout: 30m
command: |
set -eo pipefail
export CMAKE_PREFIX_PATH=~/Qt/6.8.2/gcc_64/lib/cmake
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.8/bin
export PATH=$PATH:/usr/local/cuda/bin
ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
mkdir build
cd build
mkdir upload
~/Qt/Tools/CMake/bin/cmake \
-S ../gpt4all-chat -B . \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER=clang-19 \
-DCMAKE_CXX_COMPILER=clang++-19 \
-DCMAKE_CXX_COMPILER_AR=ar \
-DCMAKE_CXX_COMPILER_RANLIB=ranlib \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON \
-DGPT4ALL_OFFLINE_INSTALLER=ON
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target all
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target install
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target package
ccache -s
cp gpt4all-installer-* upload
- store_artifacts:
path: build/upload
- save_cache:
key: ccache-gpt4all-linux-amd64-{{ epoch }}
when: always
paths:
- ../.ccache
- run:
name: Test installation
command: |
mkdir ~/Desktop
build/upload/gpt4all-installer-linux.run --no-size-checking --default-answer --accept-licenses \
--confirm-command \
install gpt4all
build-online-chat-installer-linux:
machine:
image: ubuntu-2204:current
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-linux-amd64-
- run:
<<: *job-linux-install-chat-deps
- run:
name: Build linuxdeployqt
command: |
git clone https://github.com/nomic-ai/linuxdeployqt
cd linuxdeployqt && qmake && sudo make install
- run:
name: Build
no_output_timeout: 30m
command: |
set -eo pipefail
export CMAKE_PREFIX_PATH=~/Qt/6.8.2/gcc_64/lib/cmake
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.8/bin
export PATH=$PATH:/usr/local/cuda/bin
ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
mkdir build
cd build
mkdir upload
~/Qt/Tools/CMake/bin/cmake \
-S ../gpt4all-chat -B . \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER=clang-19 \
-DCMAKE_CXX_COMPILER=clang++-19 \
-DCMAKE_CXX_COMPILER_AR=ar \
-DCMAKE_CXX_COMPILER_RANLIB=ranlib \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON \
-DGPT4ALL_OFFLINE_INSTALLER=OFF
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target all
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target install
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target package
ccache -s
cp gpt4all-installer-* upload
tar -cvzf upload/repository.tar.gz -C _CPack_Packages/Linux/IFW/gpt4all-installer-linux repository
- store_artifacts:
path: build/upload
- save_cache:
key: ccache-gpt4all-linux-amd64-{{ epoch }}
when: always
paths:
- ../.ccache
- run:
name: Test installation
command: |
mkdir ~/Desktop
build/upload/gpt4all-installer-linux.run --no-size-checking --default-answer --accept-licenses \
--confirm-command \
--set-temp-repository build/_CPack_Packages/Linux/IFW/gpt4all-installer-linux/repository \
install gpt4all
build-offline-chat-installer-windows:
machine:
      # we use 2024.04.1 because nvcc complains about the MSVC ver if we use anything newer
image: windows-server-2022-gui:2024.04.1
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-win-amd64-
- run:
name: Install dependencies
command: choco install -y ccache wget
- run:
name: Installing Qt
command: |
wget.exe "https://qt.mirror.constant.com/archive/online_installers/4.8/qt-online-installer-windows-x64-4.8.1.exe"
& .\qt-online-installer-windows-x64-4.8.1.exe --no-force-installations --no-default-installations `
--no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations `
--email "${Env:QT_EMAIL}" --password "${Env:QT_PASSWORD}" install `
qt.tools.cmake qt.tools.ifw.48 qt.tools.ninja qt.qt6.682.win64_msvc2022_64 qt.qt6.682.addons.qt5compat `
qt.qt6.682.debug_info extensions.qtpdf.682 qt.qt6.682.addons.qthttpserver
- run:
name: Install VulkanSDK
command: |
wget.exe "https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe"
.\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
- run:
name: Install CUDA Toolkit
command: |
wget.exe "https://developer.download.nvidia.com/compute/cuda/11.8.0/network_installers/cuda_11.8.0_windows_network.exe"
.\cuda_11.8.0_windows_network.exe -s cudart_11.8 nvcc_11.8 cublas_11.8 cublas_dev_11.8
- run:
name: "Install Dotnet 8"
command: |
mkdir dotnet
cd dotnet
$dotnet_url="https://download.visualstudio.microsoft.com/download/pr/5af098e1-e433-4fda-84af-3f54fd27c108/6bd1c6e48e64e64871957289023ca590/dotnet-sdk-8.0.302-win-x64.zip"
wget.exe "$dotnet_url"
Expand-Archive -LiteralPath .\dotnet-sdk-8.0.302-win-x64.zip
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
$Env:DOTNET_SKIP_FIRST_TIME_EXPERIENCE=$true
dotnet tool install --global AzureSignTool
- run:
name: Build
no_output_timeout: 30m
command: |
$vsInstallPath = & "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -property installationpath
Import-Module "${vsInstallPath}\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
Enter-VsDevShell -VsInstallPath "$vsInstallPath" -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
$Env:PATH = "${Env:PATH};C:\VulkanSDK\1.3.261.1\bin"
$Env:PATH = "${Env:PATH};C:\Qt\Tools\QtInstallerFramework\4.8\bin"
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
ccache -o "cache_dir=${pwd}\..\.ccache" -o max_size=500M -p -z
mkdir build
cd build
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
-S ..\gpt4all-chat -B . -G Ninja `
-DCMAKE_BUILD_TYPE=Release `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.8.2\msvc2022_64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
-DCMAKE_C_COMPILER_LAUNCHER=ccache `
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache `
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache `
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON `
-DGPT4ALL_OFFLINE_INSTALLER=ON
& "C:\Qt\Tools\Ninja\ninja.exe"
& "C:\Qt\Tools\Ninja\ninja.exe" install
& "C:\Qt\Tools\Ninja\ninja.exe" package
ccache -s
mkdir upload
copy gpt4all-installer-win64.exe upload
- store_artifacts:
path: build/upload
# add workspace so signing jobs can connect & obtain the installer
- save_cache:
key: ccache-gpt4all-win-amd64-{{ epoch }}
when: always
paths:
- ..\.ccache
- persist_to_workspace:
root: build
# specify path to only include components we want to persist
# across builds
paths:
- upload
sign-offline-chat-installer-windows:
machine:
image: windows-server-2022-gui:2024.04.1
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- attach_workspace:
at: build
- run:
name: Install dependencies
command: choco install -y wget
- run:
name: "Install Dotnet 8 and Azure Sign Tool"
command: |
mkdir dotnet
cd dotnet
$dotnet_url="https://download.visualstudio.microsoft.com/download/pr/5af098e1-e433-4fda-84af-3f54fd27c108/6bd1c6e48e64e64871957289023ca590/dotnet-sdk-8.0.302-win-x64.zip"
wget.exe "$dotnet_url"
Expand-Archive -LiteralPath .\dotnet-sdk-8.0.302-win-x64.zip
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
$Env:DOTNET_SKIP_FIRST_TIME_EXPERIENCE=$true
dotnet tool install --global AzureSignTool
- run:
name: "Sign Windows Installer With AST"
command: |
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
AzureSignTool.exe sign -du "https://gpt4all.io/index.html" -kvu https://gpt4all.vault.azure.net -kvi "$Env:AZSignGUID" -kvs "$Env:AZSignPWD" -kvc "$Env:AZSignCertName" -kvt "$Env:AZSignTID" -tr http://timestamp.digicert.com -v "$($(Get-Location).Path)\build\upload\gpt4all-installer-win64.exe"
- store_artifacts:
path: build/upload
- run:
name: Test installation
command: |
build\upload\gpt4all-installer-win64.exe --no-size-checking --default-answer --accept-licenses `
--confirm-command `
install gpt4all
build-online-chat-installer-windows:
machine:
image: windows-server-2022-gui:2024.04.1
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-win-amd64-
- run:
name: Install dependencies
command: choco install -y ccache wget
- run:
name: Installing Qt
command: |
wget.exe "https://qt.mirror.constant.com/archive/online_installers/4.8/qt-online-installer-windows-x64-4.8.1.exe"
& .\qt-online-installer-windows-x64-4.8.1.exe --no-force-installations --no-default-installations `
--no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations `
--email "${Env:QT_EMAIL}" --password "${Env:QT_PASSWORD}" install `
qt.tools.cmake qt.tools.ifw.48 qt.tools.ninja qt.qt6.682.win64_msvc2022_64 qt.qt6.682.addons.qt5compat `
qt.qt6.682.debug_info extensions.qtpdf.682 qt.qt6.682.addons.qthttpserver
- run:
name: Install VulkanSDK
command: |
wget.exe "https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe"
.\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
- run:
name: Install CUDA Toolkit
command: |
wget.exe "https://developer.download.nvidia.com/compute/cuda/11.8.0/network_installers/cuda_11.8.0_windows_network.exe"
.\cuda_11.8.0_windows_network.exe -s cudart_11.8 nvcc_11.8 cublas_11.8 cublas_dev_11.8
- run:
name: "Install Dotnet 8"
command: |
mkdir dotnet
cd dotnet
$dotnet_url="https://download.visualstudio.microsoft.com/download/pr/5af098e1-e433-4fda-84af-3f54fd27c108/6bd1c6e48e64e64871957289023ca590/dotnet-sdk-8.0.302-win-x64.zip"
wget.exe "$dotnet_url"
Expand-Archive -LiteralPath .\dotnet-sdk-8.0.302-win-x64.zip
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
- run:
name: "Setup Azure SignTool"
command: |
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
$Env:DOTNET_SKIP_FIRST_TIME_EXPERIENCE=$true
dotnet tool install --global AzureSignTool
- run:
name: Build
no_output_timeout: 30m
command: |
$vsInstallPath = & "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -property installationpath
Import-Module "${vsInstallPath}\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
Enter-VsDevShell -VsInstallPath "$vsInstallPath" -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
$Env:PATH = "${Env:PATH};C:\VulkanSDK\1.3.261.1\bin"
$Env:PATH = "${Env:PATH};C:\Qt\Tools\QtInstallerFramework\4.8\bin"
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
ccache -o "cache_dir=${pwd}\..\.ccache" -o max_size=500M -p -z
mkdir build
cd build
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
-S ..\gpt4all-chat -B . -G Ninja `
-DCMAKE_BUILD_TYPE=Release `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.8.2\msvc2022_64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
-DCMAKE_C_COMPILER_LAUNCHER=ccache `
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache `
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache `
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON `
-DGPT4ALL_OFFLINE_INSTALLER=OFF
& "C:\Qt\Tools\Ninja\ninja.exe"
& "C:\Qt\Tools\Ninja\ninja.exe" install
& "C:\Qt\Tools\Ninja\ninja.exe" package
ccache -s
mkdir upload
copy gpt4all-installer-win64.exe upload
Set-Location -Path "_CPack_Packages/win64/IFW/gpt4all-installer-win64"
Compress-Archive -Path 'repository' -DestinationPath '..\..\..\..\upload\repository.zip'
- store_artifacts:
path: build/upload
- save_cache:
key: ccache-gpt4all-win-amd64-{{ epoch }}
when: always
paths:
- ..\.ccache
# add workspace so signing jobs can connect & obtain the installer
- persist_to_workspace:
root: build
# specify path to only include components we want to persist
# across builds
paths:
- upload
sign-online-chat-installer-windows:
machine:
image: windows-server-2022-gui:2024.04.1
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- attach_workspace:
at: build
- run:
name: Install dependencies
command: choco install -y wget
- run:
name: "Install Dotnet 8"
command: |
mkdir dotnet
cd dotnet
$dotnet_url="https://download.visualstudio.microsoft.com/download/pr/5af098e1-e433-4fda-84af-3f54fd27c108/6bd1c6e48e64e64871957289023ca590/dotnet-sdk-8.0.302-win-x64.zip"
wget.exe "$dotnet_url"
Expand-Archive -LiteralPath .\dotnet-sdk-8.0.302-win-x64.zip
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
- run:
name: "Setup Azure SignTool"
command: |
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
$Env:DOTNET_SKIP_FIRST_TIME_EXPERIENCE=$true
dotnet tool install --global AzureSignTool
- run:
name: "Sign Windows Installer With AST"
command: |
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
AzureSignTool.exe sign -du "https://gpt4all.io/index.html" -kvu https://gpt4all.vault.azure.net -kvi "$Env:AZSignGUID" -kvs "$Env:AZSignPWD" -kvc "$Env:AZSignCertName" -kvt "$Env:AZSignTID" -tr http://timestamp.digicert.com -v "$($(Get-Location).Path)/build/upload/gpt4all-installer-win64.exe"
- store_artifacts:
path: build/upload
- run:
name: Test installation
command: |
Expand-Archive -LiteralPath build\upload\repository.zip -DestinationPath .
build\upload\gpt4all-installer-win64.exe --no-size-checking --default-answer --accept-licenses `
--confirm-command --set-temp-repository repository `
install gpt4all
build-offline-chat-installer-windows-arm:
machine:
# we use 2024.04.1 because nvcc complains about the MSVC version if we use anything newer
image: windows-server-2022-gui:2024.04.1
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-win-aarch64-
- run:
name: Install dependencies
command: choco install -y ccache wget
- run:
name: Installing Qt
command: |
wget.exe "https://qt.mirror.constant.com/archive/online_installers/4.8/qt-online-installer-windows-x64-4.8.1.exe"
& .\qt-online-installer-windows-x64-4.8.1.exe --no-force-installations --no-default-installations `
--no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations `
--email "${Env:QT_EMAIL}" --password "${Env:QT_PASSWORD}" install `
qt.tools.cmake qt.tools.ifw.48 qt.tools.ninja qt.qt6.682.win64_msvc2022_64 `
qt.qt6.682.win64_msvc2022_arm64_cross_compiled qt.qt6.682.addons.qt5compat qt.qt6.682.debug_info `
qt.qt6.682.addons.qthttpserver
- run:
name: "Install Dotnet 8"
command: |
mkdir dotnet
cd dotnet
$dotnet_url="https://download.visualstudio.microsoft.com/download/pr/5af098e1-e433-4fda-84af-3f54fd27c108/6bd1c6e48e64e64871957289023ca590/dotnet-sdk-8.0.302-win-x64.zip"
wget.exe "$dotnet_url"
Expand-Archive -LiteralPath .\dotnet-sdk-8.0.302-win-x64.zip
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
$Env:DOTNET_SKIP_FIRST_TIME_EXPERIENCE=$true
dotnet tool install --global AzureSignTool
- run:
name: Build
no_output_timeout: 30m
command: |
$vsInstallPath = & "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -property installationpath
Import-Module "${vsInstallPath}\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
Enter-VsDevShell -VsInstallPath "$vsInstallPath" -SkipAutomaticLocation -Arch arm64 -HostArch amd64 -DevCmdArguments '-no_logo'
$Env:PATH = "${Env:PATH};C:\Qt\Tools\QtInstallerFramework\4.8\bin"
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
ccache -o "cache_dir=${pwd}\..\.ccache" -o max_size=500M -p -z
mkdir build
cd build
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
-S ..\gpt4all-chat -B . -G Ninja `
-DCMAKE_BUILD_TYPE=Release `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.8.2\msvc2022_arm64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
"-DCMAKE_TOOLCHAIN_FILE=C:\Qt\6.8.2\msvc2022_arm64\lib\cmake\Qt6\qt.toolchain.cmake" `
-DCMAKE_C_COMPILER_LAUNCHER=ccache `
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache `
-DLLMODEL_CUDA=OFF `
-DLLMODEL_KOMPUTE=OFF `
"-DWINDEPLOYQT=C:\Qt\6.8.2\msvc2022_64\bin\windeployqt.exe;--qtpaths;C:\Qt\6.8.2\msvc2022_arm64\bin\qtpaths.bat" `
-DGPT4ALL_TEST=OFF `
-DGPT4ALL_OFFLINE_INSTALLER=ON
& "C:\Qt\Tools\Ninja\ninja.exe"
& "C:\Qt\Tools\Ninja\ninja.exe" install
& "C:\Qt\Tools\Ninja\ninja.exe" package
ccache -s
mkdir upload
copy gpt4all-installer-win64-arm.exe upload
- store_artifacts:
path: build/upload
# add workspace so signing jobs can connect & obtain the installer
- save_cache:
key: ccache-gpt4all-win-aarch64-{{ epoch }}
when: always
paths:
- ..\.ccache
- persist_to_workspace:
root: build
# specify path to only include components we want to persist
# across builds
paths:
- upload
sign-offline-chat-installer-windows-arm:
machine:
image: windows-server-2022-gui:2024.04.1
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- attach_workspace:
at: build
- run:
name: Install dependencies
command: choco install -y wget
- run:
name: "Install Dotnet 8 and Azure Sign Tool"
command: |
mkdir dotnet
cd dotnet
$dotnet_url="https://download.visualstudio.microsoft.com/download/pr/5af098e1-e433-4fda-84af-3f54fd27c108/6bd1c6e48e64e64871957289023ca590/dotnet-sdk-8.0.302-win-x64.zip"
wget.exe "$dotnet_url"
Expand-Archive -LiteralPath .\dotnet-sdk-8.0.302-win-x64.zip
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
$Env:DOTNET_SKIP_FIRST_TIME_EXPERIENCE=$true
dotnet tool install --global AzureSignTool
- run:
name: "Sign Windows Installer With AST"
command: |
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
AzureSignTool.exe sign -du "https://gpt4all.io/index.html" -kvu https://gpt4all.vault.azure.net -kvi "$Env:AZSignGUID" -kvs "$Env:AZSignPWD" -kvc "$Env:AZSignCertName" -kvt "$Env:AZSignTID" -tr http://timestamp.digicert.com -v "$($(Get-Location).Path)\build\upload\gpt4all-installer-win64-arm.exe"
- store_artifacts:
path: build/upload
- run:
name: Test installation
command: |
build\upload\gpt4all-installer-win64-arm.exe --no-size-checking --default-answer --accept-licenses `
--confirm-command `
install gpt4all
build-online-chat-installer-windows-arm:
machine:
image: windows-server-2022-gui:2024.04.1
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-win-aarch64-
- run:
name: Install dependencies
command: choco install -y ccache wget
- run:
name: Installing Qt
command: |
wget.exe "https://qt.mirror.constant.com/archive/online_installers/4.8/qt-online-installer-windows-x64-4.8.1.exe"
& .\qt-online-installer-windows-x64-4.8.1.exe --no-force-installations --no-default-installations `
--no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations `
--email "${Env:QT_EMAIL}" --password "${Env:QT_PASSWORD}" install `
qt.tools.cmake qt.tools.ifw.48 qt.tools.ninja qt.qt6.682.win64_msvc2022_64 `
qt.qt6.682.win64_msvc2022_arm64_cross_compiled qt.qt6.682.addons.qt5compat qt.qt6.682.debug_info `
qt.qt6.682.addons.qthttpserver
- run:
name: "Install Dotnet 8"
command: |
mkdir dotnet
cd dotnet
$dotnet_url="https://download.visualstudio.microsoft.com/download/pr/5af098e1-e433-4fda-84af-3f54fd27c108/6bd1c6e48e64e64871957289023ca590/dotnet-sdk-8.0.302-win-x64.zip"
wget.exe "$dotnet_url"
Expand-Archive -LiteralPath .\dotnet-sdk-8.0.302-win-x64.zip
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
- run:
name: "Setup Azure SignTool"
command: |
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
$Env:DOTNET_SKIP_FIRST_TIME_EXPERIENCE=$true
dotnet tool install --global AzureSignTool
- run:
name: Build
no_output_timeout: 30m
command: |
$vsInstallPath = & "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -property installationpath
Import-Module "${vsInstallPath}\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
Enter-VsDevShell -VsInstallPath "$vsInstallPath" -SkipAutomaticLocation -Arch arm64 -HostArch amd64 -DevCmdArguments '-no_logo'
$Env:PATH = "${Env:PATH};C:\Qt\Tools\QtInstallerFramework\4.8\bin"
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
ccache -o "cache_dir=${pwd}\..\.ccache" -o max_size=500M -p -z
mkdir build
cd build
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
-S ..\gpt4all-chat -B . -G Ninja `
-DCMAKE_BUILD_TYPE=Release `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.8.2\msvc2022_arm64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
"-DCMAKE_TOOLCHAIN_FILE=C:\Qt\6.8.2\msvc2022_arm64\lib\cmake\Qt6\qt.toolchain.cmake" `
-DCMAKE_C_COMPILER_LAUNCHER=ccache `
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache `
-DLLMODEL_CUDA=OFF `
-DLLMODEL_KOMPUTE=OFF `
"-DWINDEPLOYQT=C:\Qt\6.8.2\msvc2022_64\bin\windeployqt.exe;--qtpaths;C:\Qt\6.8.2\msvc2022_arm64\bin\qtpaths.bat" `
-DGPT4ALL_TEST=OFF `
-DGPT4ALL_OFFLINE_INSTALLER=OFF
& "C:\Qt\Tools\Ninja\ninja.exe"
& "C:\Qt\Tools\Ninja\ninja.exe" install
& "C:\Qt\Tools\Ninja\ninja.exe" package
ccache -s
mkdir upload
copy gpt4all-installer-win64-arm.exe upload
Set-Location -Path "_CPack_Packages/win64/IFW/gpt4all-installer-win64-arm"
Compress-Archive -Path 'repository' -DestinationPath '..\..\..\..\upload\repository.zip'
- store_artifacts:
path: build/upload
- save_cache:
key: ccache-gpt4all-win-aarch64-{{ epoch }}
when: always
paths:
- ..\.ccache
# add workspace so signing jobs can connect & obtain the installer
- persist_to_workspace:
root: build
# specify path to only include components we want to persist
# across builds
paths:
- upload
sign-online-chat-installer-windows-arm:
machine:
image: windows-server-2022-gui:2024.04.1
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- attach_workspace:
at: build
- run:
name: Install dependencies
command: choco install -y wget
- run:
name: "Install Dotnet 8"
command: |
mkdir dotnet
cd dotnet
$dotnet_url="https://download.visualstudio.microsoft.com/download/pr/5af098e1-e433-4fda-84af-3f54fd27c108/6bd1c6e48e64e64871957289023ca590/dotnet-sdk-8.0.302-win-x64.zip"
wget.exe "$dotnet_url"
Expand-Archive -LiteralPath .\dotnet-sdk-8.0.302-win-x64.zip
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
- run:
name: "Setup Azure SignTool"
command: |
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
$Env:DOTNET_SKIP_FIRST_TIME_EXPERIENCE=$true
dotnet tool install --global AzureSignTool
- run:
name: "Sign Windows Installer With AST"
command: |
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
AzureSignTool.exe sign -du "https://gpt4all.io/index.html" -kvu https://gpt4all.vault.azure.net -kvi "$Env:AZSignGUID" -kvs "$Env:AZSignPWD" -kvc "$Env:AZSignCertName" -kvt "$Env:AZSignTID" -tr http://timestamp.digicert.com -v "$($(Get-Location).Path)/build/upload/gpt4all-installer-win64-arm.exe"
- store_artifacts:
path: build/upload
- run:
name: Test installation
command: |
Expand-Archive -LiteralPath build\upload\repository.zip -DestinationPath .
build\upload\gpt4all-installer-win64-arm.exe --no-size-checking --default-answer --accept-licenses `
--confirm-command --set-temp-repository repository `
install gpt4all
build-gpt4all-chat-linux:
machine:
image: ubuntu-2204:current
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-linux-amd64-
- run:
<<: *job-linux-install-chat-deps
- run:
name: Build
no_output_timeout: 30m
command: |
export CMAKE_PREFIX_PATH=~/Qt/6.8.2/gcc_64/lib/cmake
export PATH=$PATH:/usr/local/cuda/bin
ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
~/Qt/Tools/CMake/bin/cmake \
-S gpt4all-chat -B build \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER=clang-19 \
-DCMAKE_CXX_COMPILER=clang++-19 \
-DCMAKE_CXX_COMPILER_AR=ar \
-DCMAKE_CXX_COMPILER_RANLIB=ranlib \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON
~/Qt/Tools/CMake/bin/cmake --build build -j$(nproc) --target all
ccache -s
- save_cache:
key: ccache-gpt4all-linux-amd64-{{ epoch }}
when: always
paths:
- ../.ccache
build-gpt4all-chat-windows:
machine:
image: windows-server-2022-gui:2024.04.1
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-win-amd64-
- run:
name: Install dependencies
command: choco install -y ccache wget
- run:
name: Installing Qt
command: |
wget.exe "https://qt.mirror.constant.com/archive/online_installers/4.8/qt-online-installer-windows-x64-4.8.1.exe"
& .\qt-online-installer-windows-x64-4.8.1.exe --no-force-installations --no-default-installations `
--no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations `
--email "${Env:QT_EMAIL}" --password "${Env:QT_PASSWORD}" install `
qt.tools.cmake qt.tools.ifw.48 qt.tools.ninja qt.qt6.682.win64_msvc2022_64 qt.qt6.682.addons.qt5compat `
qt.qt6.682.debug_info extensions.qtpdf.682 qt.qt6.682.addons.qthttpserver
- run:
name: Install VulkanSDK
command: |
wget.exe "https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe"
.\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
- run:
name: Install CUDA Toolkit
command: |
wget.exe "https://developer.download.nvidia.com/compute/cuda/11.8.0/network_installers/cuda_11.8.0_windows_network.exe"
.\cuda_11.8.0_windows_network.exe -s cudart_11.8 nvcc_11.8 cublas_11.8 cublas_dev_11.8
- run:
name: Build
no_output_timeout: 30m
command: |
$vsInstallPath = & "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -property installationpath
Import-Module "${vsInstallPath}\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
Enter-VsDevShell -VsInstallPath "$vsInstallPath" -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
$Env:PATH = "${Env:PATH};C:\VulkanSDK\1.3.261.1\bin"
$Env:VULKAN_SDK = "C:\VulkanSDK\1.3.261.1"
ccache -o "cache_dir=${pwd}\..\.ccache" -o max_size=500M -p -z
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
-S gpt4all-chat -B build -G Ninja `
-DCMAKE_BUILD_TYPE=Release `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.8.2\msvc2022_64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
-DCMAKE_C_COMPILER_LAUNCHER=ccache `
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache `
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache `
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON
& "C:\Qt\Tools\Ninja\ninja.exe" -C build
ccache -s
- save_cache:
key: ccache-gpt4all-win-amd64-{{ epoch }}
when: always
paths:
- ..\.ccache
build-gpt4all-chat-macos:
<<: *job-macos-executor
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-macos-
- run:
<<: *job-macos-install-deps
- run:
name: Install Rosetta
command: softwareupdate --install-rosetta --agree-to-license # needed for QtIFW
- run:
name: Installing Qt
command: |
wget "https://qt.mirror.constant.com/archive/online_installers/4.8/qt-online-installer-macOS-x64-4.8.1.dmg"
hdiutil attach qt-online-installer-macOS-x64-4.8.1.dmg
/Volumes/qt-online-installer-macOS-x64-4.8.1/qt-online-installer-macOS-x64-4.8.1.app/Contents/MacOS/qt-online-installer-macOS-x64-4.8.1 \
--no-force-installations --no-default-installations --no-size-checking --default-answer \
--accept-licenses --confirm-command --accept-obligations --email "$QT_EMAIL" --password "$QT_PASSWORD" \
install \
qt.tools.cmake qt.tools.ifw.48 qt.tools.ninja qt.qt6.682.clang_64 qt.qt6.682.addons.qt5compat \
extensions.qtpdf.682 qt.qt6.682.addons.qthttpserver
hdiutil detach /Volumes/qt-online-installer-macOS-x64-4.8.1
- run:
name: Build
no_output_timeout: 30m
command: |
ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
-S gpt4all-chat -B build -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH:PATH=~/Qt/6.8.2/macos/lib/cmake \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
-DCMAKE_C_COMPILER=/opt/homebrew/opt/llvm/bin/clang \
-DCMAKE_CXX_COMPILER=/opt/homebrew/opt/llvm/bin/clang++ \
-DCMAKE_RANLIB=/usr/bin/ranlib \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
-DGGML_METAL_MACOSX_VERSION_MIN=12.6
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build build --target all
ccache -s
- save_cache:
key: ccache-gpt4all-macos-{{ epoch }}
when: always
paths:
- ../.ccache
build-ts-docs:
docker:
- image: cimg/base:stable
steps:
- checkout
- node/install:
node-version: "18.16"
- run: node --version
- run: corepack enable
- node/install-packages:
pkg-manager: npm
app-dir: gpt4all-bindings/typescript
override-ci-command: npm install --ignore-scripts
- run:
name: Build TypeScript docs
command: |
cd gpt4all-bindings/typescript
npm run docs:build
deploy-docs:
docker:
- image: circleci/python:3.8
steps:
- checkout
- run:
name: Install dependencies
command: |
sudo apt-get update
sudo apt-get -y install python3 python3-pip
sudo pip3 install awscli --upgrade
sudo pip3 install mkdocs mkdocs-material mkautodoc 'mkdocstrings[python]' markdown-captions pillow cairosvg
- run:
name: Make Documentation
command: |
cd gpt4all-bindings/python
mkdocs build
- run:
name: Deploy Documentation
command: |
cd gpt4all-bindings/python
aws s3 sync --delete site/ s3://docs.gpt4all.io/
- run:
name: Invalidate docs.gpt4all.io cloudfront
command: aws cloudfront create-invalidation --distribution-id E1STQOW63QL2OH --paths "/*"
build-py-linux:
machine:
image: ubuntu-2204:current
steps:
- checkout
- restore_cache:
keys:
- ccache-gpt4all-linux-amd64-
- run:
<<: *job-linux-install-backend-deps
- run:
name: Build C library
no_output_timeout: 30m
command: |
export PATH=$PATH:/usr/local/cuda/bin
git submodule update --init --recursive
ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
cd gpt4all-backend
cmake -B build -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER=clang-19 \
-DCMAKE_CXX_COMPILER=clang++-19 \
-DCMAKE_CXX_COMPILER_AR=ar \
-DCMAKE_CXX_COMPILER_RANLIB=ranlib \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON \
-DCMAKE_CUDA_ARCHITECTURES='50-virtual;52-virtual;61-virtual;70-virtual;75-virtual'
cmake --build build -j$(nproc)
ccache -s
- run:
name: Build wheel
command: |
cd gpt4all-bindings/python/
python setup.py bdist_wheel --plat-name=manylinux1_x86_64
- store_artifacts:
path: gpt4all-bindings/python/dist
- save_cache:
key: ccache-gpt4all-linux-amd64-{{ epoch }}
when: always
paths:
- ../.ccache
- persist_to_workspace:
root: gpt4all-bindings/python/dist
paths:
- "*.whl"
build-py-macos:
<<: *job-macos-executor
steps:
- checkout
- restore_cache:
keys:
- ccache-gpt4all-macos-
- run:
<<: *job-macos-install-deps
- run:
name: Install dependencies
command: |
pip install setuptools wheel cmake
- run:
name: Build C library
no_output_timeout: 30m
command: |
git submodule update --init # don't use --recursive because macOS doesn't use Kompute
ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
cd gpt4all-backend
cmake -B build \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER=/opt/homebrew/opt/llvm/bin/clang \
-DCMAKE_CXX_COMPILER=/opt/homebrew/opt/llvm/bin/clang++ \
-DCMAKE_RANLIB=/usr/bin/ranlib \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
-DGGML_METAL_MACOSX_VERSION_MIN=12.6
cmake --build build --parallel
ccache -s
- run:
name: Build wheel
command: |
cd gpt4all-bindings/python
python setup.py bdist_wheel --plat-name=macosx_10_15_universal2
- store_artifacts:
path: gpt4all-bindings/python/dist
- save_cache:
key: ccache-gpt4all-macos-{{ epoch }}
when: always
paths:
- ../.ccache
- persist_to_workspace:
root: gpt4all-bindings/python/dist
paths:
- "*.whl"
build-py-windows:
machine:
image: windows-server-2022-gui:2024.04.1
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-win-amd64-
- run:
name: Install dependencies
command:
choco install -y ccache cmake ninja wget --installargs 'ADD_CMAKE_TO_PATH=System'
- run:
name: Install VulkanSDK
command: |
wget.exe "https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe"
.\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
- run:
name: Install CUDA Toolkit
command: |
wget.exe "https://developer.download.nvidia.com/compute/cuda/11.8.0/network_installers/cuda_11.8.0_windows_network.exe"
.\cuda_11.8.0_windows_network.exe -s cudart_11.8 nvcc_11.8 cublas_11.8 cublas_dev_11.8
- run:
name: Install Python dependencies
command: pip install setuptools wheel cmake
- run:
name: Build C library
no_output_timeout: 30m
command: |
$vsInstallPath = & "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -property installationpath
Import-Module "${vsInstallPath}\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
Enter-VsDevShell -VsInstallPath "$vsInstallPath" -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
$Env:PATH += ";C:\VulkanSDK\1.3.261.1\bin"
$Env:VULKAN_SDK = "C:\VulkanSDK\1.3.261.1"
ccache -o "cache_dir=${pwd}\..\.ccache" -o max_size=500M -p -z
cd gpt4all-backend
cmake -B build -G Ninja `
-DCMAKE_BUILD_TYPE=Release `
-DCMAKE_C_COMPILER_LAUNCHER=ccache `
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache `
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache `
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON `
-DCMAKE_CUDA_ARCHITECTURES='50-virtual;52-virtual;61-virtual;70-virtual;75-virtual'
cmake --build build --parallel
ccache -s
- run:
name: Build wheel
command: |
cd gpt4all-bindings/python
python setup.py bdist_wheel --plat-name=win_amd64
- store_artifacts:
path: gpt4all-bindings/python/dist
- save_cache:
key: ccache-gpt4all-win-amd64-{{ epoch }}
when: always
paths:
- ..\.ccache
- persist_to_workspace:
root: gpt4all-bindings/python/dist
paths:
- "*.whl"
deploy-wheels:
docker:
- image: circleci/python:3.8
steps:
- setup_remote_docker
- attach_workspace:
at: /tmp/workspace
- run:
name: Install dependencies
command: |
sudo apt-get update
sudo apt-get install -y build-essential cmake
pip install setuptools wheel twine
- run:
name: Upload Python package
command: |
twine upload /tmp/workspace/*.whl --username __token__ --password $PYPI_CRED
- store_artifacts:
path: /tmp/workspace
build-bindings-backend-linux:
machine:
image: ubuntu-2204:current
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-linux-amd64-
- run:
<<: *job-linux-install-backend-deps
- run:
name: Build Libraries
no_output_timeout: 30m
command: |
export PATH=$PATH:/usr/local/cuda/bin
ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
cd gpt4all-backend
mkdir -p runtimes/build
cd runtimes/build
cmake ../.. -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER=clang-19 \
-DCMAKE_CXX_COMPILER=clang++-19 \
-DCMAKE_CXX_COMPILER_AR=ar \
-DCMAKE_CXX_COMPILER_RANLIB=ranlib \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON
cmake --build . -j$(nproc)
ccache -s
mkdir ../linux-x64
cp -L *.so ../linux-x64 # dereference symlinks, which persist_to_workspace would otherwise mangle
- save_cache:
key: ccache-gpt4all-linux-amd64-{{ epoch }}
when: always
paths:
- ../.ccache
- persist_to_workspace:
root: gpt4all-backend
paths:
- runtimes/linux-x64/*.so
build-bindings-backend-macos:
<<: *job-macos-executor
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-macos-
- run:
<<: *job-macos-install-deps
- run:
name: Build Libraries
no_output_timeout: 30m
command: |
ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
cd gpt4all-backend
mkdir -p runtimes/build
cd runtimes/build
cmake ../.. \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER=/opt/homebrew/opt/llvm/bin/clang \
-DCMAKE_CXX_COMPILER=/opt/homebrew/opt/llvm/bin/clang++ \
-DCMAKE_RANLIB=/usr/bin/ranlib \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
-DGGML_METAL_MACOSX_VERSION_MIN=12.6
cmake --build . --parallel
ccache -s
mkdir ../osx-x64
cp -L *.dylib ../osx-x64
cp ../../llama.cpp-mainline/*.metal ../osx-x64
ls ../osx-x64
- save_cache:
key: ccache-gpt4all-macos-{{ epoch }}
when: always
paths:
- ../.ccache
- persist_to_workspace:
root: gpt4all-backend
paths:
- runtimes/osx-x64/*.dylib
- runtimes/osx-x64/*.metal
build-bindings-backend-windows:
machine:
image: windows-server-2022-gui:2024.04.1
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache:
keys:
- ccache-gpt4all-win-amd64-
- run:
name: Install dependencies
command: |
choco install -y ccache cmake ninja wget --installargs 'ADD_CMAKE_TO_PATH=System'
- run:
name: Install VulkanSDK
command: |
wget.exe "https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe"
.\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
- run:
name: Install CUDA Toolkit
command: |
wget.exe "https://developer.download.nvidia.com/compute/cuda/11.8.0/network_installers/cuda_11.8.0_windows_network.exe"
.\cuda_11.8.0_windows_network.exe -s cudart_11.8 nvcc_11.8 cublas_11.8 cublas_dev_11.8
- run:
name: Build Libraries
no_output_timeout: 30m
command: |
$vsInstallPath = & "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -property installationpath
Import-Module "${vsInstallPath}\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
Enter-VsDevShell -VsInstallPath "$vsInstallPath" -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
$Env:Path += ";C:\VulkanSDK\1.3.261.1\bin"
$Env:VULKAN_SDK = "C:\VulkanSDK\1.3.261.1"
ccache -o "cache_dir=${pwd}\..\.ccache" -o max_size=500M -p -z
cd gpt4all-backend
mkdir runtimes/win-x64_msvc
cd runtimes/win-x64_msvc
cmake -S ../.. -B . -G Ninja `
-DCMAKE_BUILD_TYPE=Release `
-DCMAKE_C_COMPILER_LAUNCHER=ccache `
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache `
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache `
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON
cmake --build . --parallel
ccache -s
cp bin/Release/*.dll .
- save_cache:
key: ccache-gpt4all-win-amd64-{{ epoch }}
when: always
paths:
- ..\.ccache
- persist_to_workspace:
root: gpt4all-backend
paths:
- runtimes/win-x64_msvc/*.dll
build-nodejs-linux:
docker:
- image: cimg/base:stable
steps:
- checkout
- attach_workspace:
at: /tmp/gpt4all-backend
- node/install:
install-yarn: true
node-version: "18.16"
- run: node --version
- run: corepack enable
- node/install-packages:
app-dir: gpt4all-bindings/typescript
pkg-manager: yarn
override-ci-command: yarn install
- run:
command: |
cd gpt4all-bindings/typescript
yarn prebuildify -t 18.16.0 --napi
- run:
command: |
mkdir -p gpt4all-backend/prebuilds/linux-x64
mkdir -p gpt4all-backend/runtimes/linux-x64
cp /tmp/gpt4all-backend/runtimes/linux-x64/*-*.so gpt4all-backend/runtimes/linux-x64
cp gpt4all-bindings/typescript/prebuilds/linux-x64/*.node gpt4all-backend/prebuilds/linux-x64
- persist_to_workspace:
root: gpt4all-backend
paths:
- prebuilds/linux-x64/*.node
- runtimes/linux-x64/*-*.so
build-nodejs-macos:
<<: *job-macos-executor
steps:
- checkout
- attach_workspace:
at: /tmp/gpt4all-backend
- node/install:
install-yarn: true
node-version: "18.16"
- run: node --version
- run: corepack enable
- node/install-packages:
app-dir: gpt4all-bindings/typescript
pkg-manager: yarn
override-ci-command: yarn install
- run:
command: |
cd gpt4all-bindings/typescript
yarn prebuildify -t 18.16.0 --napi
- run:
name: "Persist runtime libraries and prebuilds to workspace"
command: |
mkdir -p gpt4all-backend/prebuilds/darwin-x64
mkdir -p gpt4all-backend/runtimes/darwin
cp /tmp/gpt4all-backend/runtimes/osx-x64/*-*.* gpt4all-backend/runtimes/darwin
cp gpt4all-bindings/typescript/prebuilds/darwin-x64/*.node gpt4all-backend/prebuilds/darwin-x64
- persist_to_workspace:
root: gpt4all-backend
paths:
- prebuilds/darwin-x64/*.node
- runtimes/darwin/*-*.*
build-nodejs-windows:
executor:
name: win/default
size: large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- attach_workspace:
at: /tmp/gpt4all-backend
- run: choco install wget -y
- run:
command: |
wget.exe "https://nodejs.org/dist/v18.16.0/node-v18.16.0-x86.msi" -P C:\Users\circleci\Downloads\
MsiExec.exe /i C:\Users\circleci\Downloads\node-v18.16.0-x86.msi /qn
- run:
command: |
Start-Process powershell -verb runAs -Args "-start GeneralProfile"
nvm install 18.16.0
nvm use 18.16.0
- run: node --version
- run: corepack enable
- run:
command: |
npm install -g yarn
cd gpt4all-bindings/typescript
yarn install
- run:
command: |
cd gpt4all-bindings/typescript
yarn prebuildify -t 18.16.0 --napi
- run:
command: |
mkdir -p gpt4all-backend/prebuilds/win32-x64
mkdir -p gpt4all-backend/runtimes/win32-x64
cp /tmp/gpt4all-backend/runtimes/win-x64_msvc/*-*.dll gpt4all-backend/runtimes/win32-x64
cp gpt4all-bindings/typescript/prebuilds/win32-x64/*.node gpt4all-backend/prebuilds/win32-x64
- persist_to_workspace:
root: gpt4all-backend
paths:
- prebuilds/win32-x64/*.node
- runtimes/win32-x64/*-*.dll
deploy-npm-pkg:
docker:
- image: cimg/base:stable
steps:
- attach_workspace:
at: /tmp/gpt4all-backend
- checkout
- node/install:
install-yarn: true
node-version: "18.16"
- run: node --version
- run: corepack enable
- run:
command: |
cd gpt4all-bindings/typescript
# Exclude llmodel.dll; the Node.js bindings don't need it.
mkdir -p runtimes/win32-x64/native
mkdir -p prebuilds/win32-x64/
cp /tmp/gpt4all-backend/runtimes/win-x64_msvc/*-*.dll runtimes/win32-x64/native/
cp /tmp/gpt4all-backend/prebuilds/win32-x64/*.node prebuilds/win32-x64/
mkdir -p runtimes/linux-x64/native
mkdir -p prebuilds/linux-x64/
cp /tmp/gpt4all-backend/runtimes/linux-x64/*-*.so runtimes/linux-x64/native/
cp /tmp/gpt4all-backend/prebuilds/linux-x64/*.node prebuilds/linux-x64/
# darwin has universal runtime libraries
mkdir -p runtimes/darwin/native
mkdir -p prebuilds/darwin-x64/
cp /tmp/gpt4all-backend/runtimes/darwin/*-*.* runtimes/darwin/native/
cp /tmp/gpt4all-backend/prebuilds/darwin-x64/*.node prebuilds/darwin-x64/
# Fallback build if user is not on above prebuilds
mv -f binding.ci.gyp binding.gyp
mkdir gpt4all-backend
cd ../../gpt4all-backend
mv llmodel.h llmodel.cpp llmodel_c.cpp llmodel_c.h sysinfo.h dlhandle.h ../gpt4all-bindings/typescript/gpt4all-backend/
# Test install
- node/install-packages:
app-dir: gpt4all-bindings/typescript
pkg-manager: yarn
override-ci-command: yarn install
- run:
command: |
cd gpt4all-bindings/typescript
yarn run test
- run:
command: |
cd gpt4all-bindings/typescript
npm set //registry.npmjs.org/:_authToken=$NPM_TOKEN
npm publish
# only run a job on the main branch
job_only_main: &job_only_main
filters:
branches:
only: main
# allow a job to run on tags as well as commits
job_allow_tags: &job_allow_tags
filters:
tags:
only:
- /.*/
# standard chat workflow filter
workflow-when-chat-requested: &workflow-when-chat-requested
when:
and:
- or: [ << pipeline.parameters.run-all-workflows >>, << pipeline.parameters.run-chat-workflow >> ]
- not:
equal: [ << pipeline.trigger_source >>, scheduled_pipeline ]
workflows:
version: 2
noop:
when:
not:
or:
- << pipeline.parameters.run-all-workflows >>
- << pipeline.parameters.run-python-workflow >>
- << pipeline.parameters.run-ts-workflow >>
- << pipeline.parameters.run-chat-workflow >>
- equal: [ << pipeline.trigger_source >>, scheduled_pipeline ]
jobs:
- noop
schedule:
# only run when scheduled by CircleCI
when:
equal: [ << pipeline.trigger_source >>, scheduled_pipeline ]
jobs:
- build-offline-chat-installer-macos:
context: gpt4all
- build-offline-chat-installer-windows:
context: gpt4all
- build-offline-chat-installer-windows-arm:
context: gpt4all
- build-offline-chat-installer-linux:
context: gpt4all
- sign-offline-chat-installer-macos:
context: gpt4all
requires:
- build-offline-chat-installer-macos
- notarize-offline-chat-installer-macos:
context: gpt4all
requires:
- sign-offline-chat-installer-macos
- sign-offline-chat-installer-windows:
context: gpt4all
requires:
- build-offline-chat-installer-windows
- sign-offline-chat-installer-windows-arm:
context: gpt4all
requires:
- build-offline-chat-installer-windows-arm
build-chat-installers-release:
# only run on main branch tags that start with 'v' and a digit
when:
and:
- matches: { pattern: '^v\d.*', value: << pipeline.git.tag >> }
- not:
equal: [ << pipeline.trigger_source >>, scheduled_pipeline ]
jobs:
- validate-commit-on-main:
<<: *job_allow_tags
- build-offline-chat-installer-macos:
<<: *job_allow_tags
context: gpt4all
requires:
- validate-commit-on-main
- build-offline-chat-installer-windows:
<<: *job_allow_tags
context: gpt4all
requires:
- validate-commit-on-main
- build-offline-chat-installer-windows-arm:
<<: *job_allow_tags
context: gpt4all
requires:
- validate-commit-on-main
- build-offline-chat-installer-linux:
<<: *job_allow_tags
context: gpt4all
requires:
- validate-commit-on-main
- sign-offline-chat-installer-macos:
<<: *job_allow_tags
context: gpt4all
requires:
- build-offline-chat-installer-macos
- notarize-offline-chat-installer-macos:
<<: *job_allow_tags
context: gpt4all
requires:
- sign-offline-chat-installer-macos
- sign-offline-chat-installer-windows:
<<: *job_allow_tags
context: gpt4all
requires:
- build-offline-chat-installer-windows
- sign-offline-chat-installer-windows-arm:
<<: *job_allow_tags
context: gpt4all
requires:
- build-offline-chat-installer-windows-arm
- build-online-chat-installer-macos:
<<: *job_allow_tags
context: gpt4all
requires:
- validate-commit-on-main
- build-online-chat-installer-windows:
<<: *job_allow_tags
context: gpt4all
requires:
- validate-commit-on-main
- build-online-chat-installer-windows-arm:
<<: *job_allow_tags
context: gpt4all
requires:
- validate-commit-on-main
- build-online-chat-installer-linux:
<<: *job_allow_tags
context: gpt4all
requires:
- validate-commit-on-main
- sign-online-chat-installer-macos:
<<: *job_allow_tags
context: gpt4all
requires:
- build-online-chat-installer-macos
- notarize-online-chat-installer-macos:
<<: *job_allow_tags
context: gpt4all
requires:
- sign-online-chat-installer-macos
- sign-online-chat-installer-windows:
<<: *job_allow_tags
context: gpt4all
requires:
- build-online-chat-installer-windows
- sign-online-chat-installer-windows-arm:
<<: *job_allow_tags
context: gpt4all
requires:
- build-online-chat-installer-windows-arm
build-chat-offline-installers:
<<: *workflow-when-chat-requested
jobs:
- build-hold:
type: approval
- sign-hold:
type: approval
- build-offline-chat-installer-macos:
context: gpt4all
requires:
- build-hold
- sign-offline-chat-installer-macos:
context: gpt4all
requires:
- sign-hold
- build-offline-chat-installer-macos
- notarize-offline-chat-installer-macos:
context: gpt4all
requires:
- sign-offline-chat-installer-macos
- build-offline-chat-installer-windows:
context: gpt4all
requires:
- build-hold
- sign-offline-chat-installer-windows:
context: gpt4all
requires:
- sign-hold
- build-offline-chat-installer-windows
- build-offline-chat-installer-windows-arm:
context: gpt4all
requires:
- build-hold
- sign-offline-chat-installer-windows-arm:
context: gpt4all
requires:
- sign-hold
- build-offline-chat-installer-windows-arm
- build-offline-chat-installer-linux:
context: gpt4all
requires:
- build-hold
build-chat-online-installers:
<<: *workflow-when-chat-requested
jobs:
- build-hold:
type: approval
- sign-hold:
type: approval
- build-online-chat-installer-macos:
context: gpt4all
requires:
- build-hold
- sign-online-chat-installer-macos:
context: gpt4all
requires:
- sign-hold
- build-online-chat-installer-macos
- notarize-online-chat-installer-macos:
context: gpt4all
requires:
- sign-online-chat-installer-macos
- build-online-chat-installer-windows:
context: gpt4all
requires:
- build-hold
- sign-online-chat-installer-windows:
context: gpt4all
requires:
- sign-hold
- build-online-chat-installer-windows
- build-online-chat-installer-windows-arm:
context: gpt4all
requires:
- build-hold
- sign-online-chat-installer-windows-arm:
context: gpt4all
requires:
- sign-hold
- build-online-chat-installer-windows-arm
- build-online-chat-installer-linux:
context: gpt4all
requires:
- build-hold
build-and-test-gpt4all-chat:
<<: *workflow-when-chat-requested
jobs:
- hold:
type: approval
- build-gpt4all-chat-linux:
context: gpt4all
requires:
- hold
- build-gpt4all-chat-windows:
context: gpt4all
requires:
- hold
- build-gpt4all-chat-macos:
context: gpt4all
requires:
- hold
deploy-docs:
when:
and:
- equal: [ << pipeline.git.branch >>, main ]
- or:
- << pipeline.parameters.run-all-workflows >>
- << pipeline.parameters.run-python-workflow >>
- not:
equal: [ << pipeline.trigger_source >>, scheduled_pipeline ]
jobs:
- deploy-docs:
context: gpt4all
build-python:
when:
and:
- or: [ << pipeline.parameters.run-all-workflows >>, << pipeline.parameters.run-python-workflow >> ]
- not:
equal: [ << pipeline.trigger_source >>, scheduled_pipeline ]
jobs:
- pypi-hold:
<<: *job_only_main
type: approval
- hold:
type: approval
- build-py-linux:
requires:
- hold
- build-py-macos:
requires:
- hold
- build-py-windows:
requires:
- hold
- deploy-wheels:
<<: *job_only_main
context: gpt4all
requires:
- pypi-hold
- build-py-windows
- build-py-linux
- build-py-macos
build-bindings:
when:
and:
- or: [ << pipeline.parameters.run-all-workflows >>, << pipeline.parameters.run-ts-workflow >> ]
- not:
equal: [ << pipeline.trigger_source >>, scheduled_pipeline ]
jobs:
- backend-hold:
type: approval
- nodejs-hold:
type: approval
- npm-hold:
<<: *job_only_main
type: approval
- docs-hold:
type: approval
- build-bindings-backend-linux:
requires:
- backend-hold
- build-bindings-backend-macos:
requires:
- backend-hold
- build-bindings-backend-windows:
requires:
- backend-hold
- build-nodejs-linux:
requires:
- nodejs-hold
- build-bindings-backend-linux
- build-nodejs-windows:
requires:
- nodejs-hold
- build-bindings-backend-windows
- build-nodejs-macos:
requires:
- nodejs-hold
- build-bindings-backend-macos
- build-ts-docs:
requires:
- docs-hold
- deploy-npm-pkg:
<<: *job_only_main
requires:
- npm-hold
- build-nodejs-linux
- build-nodejs-windows
- build-nodejs-macos
================================================
FILE: .circleci/grab_notary_id.py
================================================
import re
import sys

# Matches the submission ID line emitted by the notarization tool.
ID_REG = r"id: (.*)"


def main() -> None:
    """Print the submission ID parsed from the notarization log given as argv[1]."""
    notary_log = sys.argv[1]
    with open(notary_log, "r") as f:
        notary_output = f.read()

    id_m = re.search(ID_REG, notary_output)
    if id_m:
        print(id_m.group(1))
    else:
        raise RuntimeError("Unable to parse ID from notarization logs")


if __name__ == "__main__":
    main()
================================================
FILE: .codespellrc
================================================
[codespell]
ignore-words-list = blong, afterall, assistent, crasher, requestor
skip = ./.git,./gpt4all-chat/translations,*.pdf,*.svg,*.lock
================================================
FILE: .github/ISSUE_TEMPLATE/bindings-bug.md
================================================
---
name: "\U0001F6E0 Bindings Bug Report"
about: A bug report for the GPT4All Bindings
labels: ["bindings", "bug-unconfirmed"]
---
### Bug Report
### Example Code
### Steps to Reproduce
1.
2.
3.
### Expected Behavior
### Your Environment
- Bindings version (e.g. "Version" from `pip show gpt4all`):
- Operating System:
- Chat model used (if applicable):
================================================
FILE: .github/ISSUE_TEMPLATE/chat-bug.md
================================================
---
name: "\U0001F4AC GPT4All Bug Report"
about: A bug report for GPT4All Chat
labels: ["chat", "bug-unconfirmed"]
---
### Bug Report
### Steps to Reproduce
1.
2.
3.
### Expected Behavior
### Your Environment
- GPT4All version:
- Operating System:
- Chat model used (if applicable):
================================================
FILE: .github/ISSUE_TEMPLATE/config.yml
================================================
version: 2.1
================================================
FILE: .github/ISSUE_TEMPLATE/documentation.md
================================================
---
name: "\U0001F4C4 Documentation"
about: An issue related to the GPT4All documentation
labels: ["documentation"]
---
### Documentation
================================================
FILE: .github/ISSUE_TEMPLATE/feature-request.md
================================================
---
name: "\U0001F680 Feature Request"
about: Submit a proposal/request for a new GPT4All feature
title: "[Feature] Feature request title..."
labels: ["enhancement"]
---
### Feature Request
================================================
FILE: .github/ISSUE_TEMPLATE/other-bug.md
================================================
---
name: "\U0001F41B Other Bug Report"
about: A bug in another component of GPT4All
labels: ["bug-unconfirmed"]
---
### Bug Report
### Steps to Reproduce
1.
2.
3.
### Expected Behavior
### Your Environment
- GPT4All version (if applicable):
- Operating System:
- Chat model used (if applicable):
================================================
FILE: .github/pull_request_template.md
================================================
## Describe your changes
## Issue ticket number and link
## Checklist before requesting a review
- [ ] I have performed a self-review of my code.
- [ ] If it is a core feature, I have added thorough tests.
- [ ] I have added thorough documentation for my code.
- [ ] I have tagged the PR with relevant project labels. I acknowledge that a PR without labels may be dismissed.
- [ ] If this PR addresses a bug, I have provided both a screenshot/video of the original bug and the working solution.
## Demo
### Steps to Reproduce
## Notes
================================================
FILE: .github/workflows/close_issues.yml
================================================
# This workflow will close issues that do not have labels or additional comments.
# Trigger manually.
name: "Close Issues"
on:
workflow_dispatch:
jobs:
close_issues:
runs-on: ubuntu-latest
steps:
- name: Close issues without label or comment
uses: actions/github-script@v3
with:
github-token: ${{secrets.GITHUB_TOKEN}}
script: |
const repo = context.repo;
let page = 1;
let issues = [];
while (true) {
const result = await github.issues.listForRepo({...repo, per_page: 100, page: page});
if (result.data.length === 0) break;
issues = issues.concat(result.data);
page += 1;
}
for (let { number } of issues) {
const issueData = await github.issues.get({...repo, issue_number: number});
const comments = await github.issues.listComments({...repo, issue_number: number});
if (issueData.data.labels.length === 0 && comments.data.length < 1) {
await github.issues.update({...repo, issue_number: number, state: 'closed'});
await github.issues.createComment({...repo, issue_number: number, body: 'Issue closed as it does not have any labels or comments.'});
}
}
================================================
FILE: .github/workflows/codespell.yml
================================================
---
name: Codespell
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
codespell:
name: Check for spelling errors
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Codespell
uses: codespell-project/actions-codespell@v2
================================================
FILE: .gitignore
================================================
*.arrow
squad_*
*sbert_embedded*
*.pkl
ckpts*
.deepspeed_env
*.jsonl
*tar.gz
ckpts**
wandb
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# vs code
.vscode
*.bin
.DS_Store
# gpt4all-chat
CMakeLists.txt.user
gpt4all-chat/models/*
build_*
build-*
cmake-build-*
/gpt4all-chat/tests/python/config.py
# IntelliJ
.idea/
# LLM models
*.gguf
================================================
FILE: .gitmodules
================================================
[submodule "llama.cpp-mainline"]
path = gpt4all-backend/deps/llama.cpp-mainline
url = https://github.com/nomic-ai/llama.cpp.git
branch = master
[submodule "gpt4all-chat/usearch"]
path = gpt4all-chat/deps/usearch
url = https://github.com/nomic-ai/usearch.git
[submodule "gpt4all-chat/deps/SingleApplication"]
path = gpt4all-chat/deps/SingleApplication
url = https://github.com/nomic-ai/SingleApplication.git
[submodule "gpt4all-chat/deps/fmt"]
path = gpt4all-chat/deps/fmt
url = https://github.com/fmtlib/fmt.git
[submodule "gpt4all-chat/deps/DuckX"]
path = gpt4all-chat/deps/DuckX
url = https://github.com/nomic-ai/DuckX.git
[submodule "gpt4all-chat/deps/QXlsx"]
path = gpt4all-chat/deps/QXlsx
url = https://github.com/nomic-ai/QXlsx.git
[submodule "gpt4all-chat/deps/minja"]
path = gpt4all-chat/deps/minja
url = https://github.com/nomic-ai/minja.git
[submodule "gpt4all-chat/deps/json"]
path = gpt4all-chat/deps/json
url = https://github.com/nlohmann/json.git
================================================
FILE: CONTRIBUTING.md
================================================
# Contributing
When contributing to this repository, please first discuss the change you wish to make via issue,
email, or any other method with the owners of this repository before making a change.
Please note we have a code of conduct, please follow it in all your interactions with the project.
## Pull Request Process
1. Ensure any install or build dependencies are removed before the end of the layer when doing a build.
2. Make sure the Pull Request is tagged with appropriate project identifiers and has a clear description of the contribution.
3. Any new or updated code must include documentation, and preferably tests, with the Pull Request.
4. Significant feature or code changes should provide a short video or screenshot demo.
5. Fill out the relevant parts of the Pull Request template.
6. Pull requests must have sign-off from one other developer. Reach out to a repository owner once your
code is ready to be merged into `main`.
## Code of Conduct
### Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
### Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
### Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
### Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
### Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at support@nomic.ai. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
### Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/
================================================
FILE: LICENSE.txt
================================================
Copyright (c) 2023 Nomic, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: MAINTAINERS.md
================================================
# MAINTAINERS
## Rules
* All content inside GPT4All shall have a documented maintainer
* If a maintainer decides to retire or resign, a call for volunteers will go
out
* If no further maintainer can be found in a reasonable time frame, then the
content will be marked deprecated and removed in time
## Job
Maintainers will be...
1. Responsible for overseeing content under their stewardship
2. Responsible for triaging new issues, reviewing PRs, assigning priority
to tasks
3. Responsible for keeping content at sufficient quality in a timely fashion
## List
Adam Treat ([@manyoso](https://github.com/manyoso))
E-mail: adam@nomic.ai
Discord: `@gonzochess75`
- Overall project maintainer
- Chat UI
Jared Van Bortel ([@cebtenzzre](https://github.com/cebtenzzre))
E-mail: jared@nomic.ai
Discord: `@cebtenzzre`
- gpt4all-backend
- Python binding
- Python CLI app
Jacob Nguyen ([@jacoobes](https://github.com/jacoobes))
Discord: `@jacoobes`
E-mail: `jacoobes@sern.dev`
- TypeScript binding
Dominik ([@cosmic-snow](https://github.com/cosmic-snow))
E-mail: cosmic-snow@mailfence.com
Discord: `@cosmic__snow`
- Community documentation (GitHub Wiki)
Max Cembalest ([@mcembalest](https://github.com/mcembalest))
E-mail: max@nomic.ai
Discord: `@maxcembalest.`
- Official documentation (gpt4all-bindings/python/docs -> https://docs.gpt4all.io/)
Thiago Ramos ([@thiagojramos](https://github.com/thiagojramos))
E-mail: thiagojramos@outlook.com
- pt\_BR translation
不知火 Shiranui ([@supersonictw](https://github.com/supersonictw))
E-mail: supersonic@livemail.tw
Discord: `@supersonictw`
- zh\_TW translation
Jeremy Tayco ([@jstayco](https://github.com/jstayco))
E-mail: jstayco@protonmail.ch
Discord: `@vertana`
- es\_MX translation
Riccardo Giovanetti ([@Harvester62](https://github.com/Harvester62))
E-mail: riccardo.giovanetti@gmail.com
Discord: `@harvester62`
- it\_IT translation
Tim ([@Tim453](https://github.com/Tim453))
E-mail: tim453@mailbox.org
Discord: `@Tim453`
- Flatpak
Jack ([@wuodoo](https://github.com/wuodoo))
E-mail: 2296103047@qq.com
Discord: `@mikage`
- zh\_CN translation
================================================
FILE: README.md
================================================
Now with support for DeepSeek R1 Distillations
Website • Documentation • Discord • YouTube Tutorial
GPT4All runs large language models (LLMs) privately on everyday desktops & laptops.
No API calls or GPUs required - you can just download the application and get started.
Read about what's new in our blog.
https://github.com/nomic-ai/gpt4all/assets/70534565/513a0f15-4964-4109-89e4-4f9a9011f311

GPT4All is made possible by our compute partner Paperspace.
## Download Links
—
macOS Installer
—
—
Ubuntu Installer
—
The Windows and Linux builds require Intel Core i3 2nd Gen / AMD Bulldozer, or better.
The Windows ARM build supports Qualcomm Snapdragon and Microsoft SQ1/SQ2 processors.
The Linux build is x86-64 only (no ARM).
The macOS build requires Monterey 12.6 or newer. Best results with Apple Silicon M-series processors.
See the full [System Requirements](gpt4all-chat/system_requirements.md) for more details.
Flathub (community maintained)