[
  {
    "path": ".gitignore",
    "content": "*.deb\n"
  },
  {
    "path": "Makefile",
    "content": "export RELEASE_START_SHA ?= $(shell git rev-list -1 HEAD VERSION)\nexport RELEASE ?= $(shell git rev-list $(RELEASE_START_SHA).. --count)\nexport RELEASE_NAME ?= $(shell cat VERSION)-$(RELEASE)\nexport RELEASE_VERSION ?= $(RELEASE_NAME)-g$(shell git rev-parse --short HEAD)\n\nPACKAGE_FILE ?= pve-helpers-$(RELEASE_VERSION)_all.deb\nTARGET_HOST ?= fill-me.home\n\nall: pve-helpers\n\n.PHONY: pve-helpers\npve-helpers: $(PACKAGE_FILE)\n\n$(PACKAGE_FILE):\n\tfpm \\\n\t\t--input-type dir \\\n\t\t--output-type deb \\\n\t\t--name pve-helpers \\\n\t\t--version $(RELEASE_VERSION) \\\n\t\t--package $@ \\\n\t\t--architecture all \\\n\t\t--category admin \\\n\t\t--url https://gitlab.com/ayufan/pve-helpers-build \\\n\t\t--description \"Proxmox VE Helpers\" \\\n\t\t--vendor \"Kamil Trzciński\" \\\n\t\t--maintainer \"Kamil Trzciński <ayufan@ayufan.eu>\" \\\n\t\t--license \"MIT\" \\\n\t\t--deb-priority optional \\\n\t\t--depends inotify-tools \\\n\t\t--depends qemu-server \\\n\t\t--depends expect \\\n\t\t--depends util-linux \\\n\t\t--deb-compression gz \\\n\t\troot/=/\n\ninstall: pve-helpers\n\tdpkg -i $(PACKAGE_FILE)\n\ndeploy: pve-helpers\n\tscp $(PACKAGE_FILE) $(TARGET_HOST):\n\tssh $(TARGET_HOST) dpkg -i $(PACKAGE_FILE)\n\nclean:\n\trm -f $(PACKAGE_FILE)\n"
  },
  {
    "path": "README.md",
    "content": "# Proxmox VE Helpers\n\nThis repository is a set of scripts to better handle some of the Proxmox functions:\n\n- automatically restart VMs on host suspend,\n- allow to use CPU pinning,\n- allow to set fifo scheduler\n- allow to set affinity mask for vfio devices\n\nWhy to do CPU pinning?\n\n- Usually, it is not needed as long as you don't use SMT\n- If you use SMT, each vCPU is not equal, CPU pinning allows to ensure that VMs receive a real threads\n- For having a good and predictable performance it is not needed to pin to exact cores, Linux can balance it very well\n- In general the less we configure the better it works. These settings are hints to define affinity masks for resources.\n\n## Installation\n\nClone and compile the repository:\n\n```bash\n# install dependencies\nsudo apt-get install -f ruby ruby-dev rubygems build-essential\nsudo gem install fpm\n```\n\n```bash\n# compile pve-helpers\ngit clone https://github.com/ayufan/pve-helpers\ncd pve-helpers\nsudo make install\n```\n\n## Usage\n\n### 1. Enable snippet\n\nYou need to configure each machine to enable the hookscript.\n\nThe snippet by default is installed in `/var/lib/vz`\nthat for Proxmox is present as `local`.\n\n```bash\nqm set 204 --hookscript=local:snippets/exec-cmds\n```\n\n### 2. Configure VM\n\nEdit VM description and add a new line if one or both these two commands.\n\n### 2.1. `cpu_taskset`\n\nFor the best performance you want to assign VM to physical cores,\nnot a mix of physical and virtual cores.\n\nFor example for `i7-8700` each core has two threads: 0-6, 1-7, 2-8.\nYou can easily check that with `lscpu -e`, checking which cores are\nassigned twice.\n\n```bash\nCPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ    MINMHZ\n0   0    0      0    0:0:0:0       yes    4600.0000 800.0000\n1   0    0      1    1:1:1:0       yes    4600.0000 800.0000\n2   0    0      2    2:2:2:0       yes    4600.0000 800.0000\n3   0    0      3    3:3:3:0       yes    4600.0000 800.0000\n4   0    0      4    4:4:4:0       yes    4600.0000 800.0000\n5   0    0      5    5:5:5:0       yes    4600.0000 800.0000\n6   0    0      0    0:0:0:0       yes    4600.0000 800.0000\n7   0    0      1    1:1:1:0       yes    4600.0000 800.0000\n8   0    0      2    2:2:2:0       yes    4600.0000 800.0000\n9   0    0      3    3:3:3:0       yes    4600.0000 800.0000\n10  0    0      4    4:4:4:0       yes    4600.0000 800.0000\n11  0    0      5    5:5:5:0       yes    4600.0000 800.0000\n```\n\nFor example it is advised to assign a one CPU less than a number of\nphysical cores. For the `i7-8700` it will be 5 cores.\n\nThen, you can assign the 5 cores (with CPU pinning, but not pinning specific\nthreads) to VM:\n\n```text\ncpu_taskset 7-11\n```\n\nThis does assign to VM second thread of physical cores 1-6. We deliberatly\nchoose to not assign `CORE 0`.\n\nIf you have two VMs concurrently running, you can assign it on one thread,\nsecond on another thread, like this:\n\n```text\nVM 1:\ncpu_taskset 1-5\n\nVM 2:\ncpu_taskset 7-11\n```\n\n### 2.2. use `vendor-reset` for fixing AMD Radeon reset bug\n\nInstead of `pci_unbind` and `pci_rescan` install DKMS module from https://github.com/gnif/vendor-reset:\n\n```bash\napt install dkms\ngit clone https://github.com/gnif/vendor-reset.git /usr/src/vendor-reset-0.1.1\ndkms build vendor-reset/0.1.1\ndkms install vendor-reset/0.1.1\necho vendor-reset >> /etc/modules\nmodprobe vendor-reset\n```\n\n### 2.3. 
### 2.3. `set_halt_poll`\n\nThis setting changes the value of the kvm parameter `halt_poll_ns` in `/sys/module/kvm/parameters/halt_poll_ns`.\nDifferent configurations benefit from different settings. The default value is `20000`. In theory, a larger value would be beneficial for the performance/latency of a VM.\nIn practice, most Ryzen systems work best with `halt_poll_ns` set to `0`.\n\nUsage example:\n```yaml\ncat /etc/pve/qemu-server/110.conf\n\n##Set halt_poll_ns\n#set_halt_poll 0\n...\n```\n\n### 2.4. `assign_interrupts`\n\n`assign_interrupts [--sleep=10s] [cpu cores] [--all] [interrupt name] [interrupt name...]`\n\nThis setting aims to simplify assigning interrupts to the correct CPU cores in order to get the best performance\nwhile doing a GPU/USB controller/audio controller passthrough. The goal is to have the same cores that are assigned to the VM using `cpu_taskset`\nalso be responsible for the interrupts generated by the devices that are fully passed through to the VM.\nThis is very important for achieving the lowest possible latency and eliminating random latency spikes inside the VM.\nIdeally, you would also use something like irqbalance to move all other interrupts away from the VM-assigned CPU cores and onto your other hypervisor-reserved cores. The same CPU mask can be used with irqbalance to ban the VM CPU cores from receiving any other interrupts.\n\nNote: Isolating CPU cores with `isolcpus`, while it has its own small benefits, is not required to get these latency improvements.\n\nAn optional `--sleep=10s` can be passed to modify the\ndefault `30s` wait duration.\n\nThe `--all` flag can be used to automatically assign the interrupts of all configured `hostpci` devices.\n\nUsage example:\n```yaml\ncat /etc/pve/qemu-server/110.conf\n##CPU pinning\n#cpu_taskset 1-5\n#assign_interrupts --sleep=10s 1-5 --all\n...\n```\n\nAs another example, on an 8 core 3700x, all interrupts with `vfio` in their name could be assigned to cores `4,12,5,13,6,14,7,15,2,10,3,11`, which in turn correspond to cores `2-7` and their SMT siblings `10-15`.\nIn other words, cores `2,3,4,5,6,7` of the 3700x are assigned to the VM and receive all of the interrupts from the GPU, the onboard USB controller, and the onboard audio controller.\n\n### 2.5. `qm_conflict` and `qm_depends`\n\nSometimes VMs conflict with each other due to a dependency on the same resources,\nlike disks, or VGA.\n\nThere are helper commands to shut down (`qm_conflict`) or start (`qm_depends`) other VMs\nwhen the main machine is being started.\n\n```yaml\ncat /etc/pve/qemu-server/204.conf\n\n# qm_conflict 204\n# qm_depends 207\n...\n```\n\nThe `qm_conflict` will shut down the VM with VMID 204 before starting the current one,\nand `qm_depends` will also start VMID 207, which might be a sibling VM.\n\nI use `qm_conflict` or `qm_depends` to run a Linux VM sometimes with VGA passthrough,\nsometimes as a sibling VM without a graphics card passed, running in console mode.\n\nBe careful if you use `pci_unbind` and `pci_rebind`: they should come after the `qm_*` commands.\n\n### 2.6. `pci_unbind` and `pci_rebind`\n\nIt might be desirable to bind the VGA to a VM, but unbind it as soon as the VM\nstops, allowing it to be used on the host.\n\nThe `--all` flag can be used to unbind all devices.\n\nThe simplest approach is to ensure that the VGA can render output on the host before\nstarting, then instruct Proxmox VE to unbind and rebind the devices:\n\n```yaml\ncat /etc/pve/qemu-server/204.conf\n\n## Rebind VGA to host\n#pci_unbind 02 00 0\n#pci_unbind 02 00 1\n#pci_unbind --all\n#pci_rebind\n```\n\n
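After the VM stops and `pci_rebind` has triggered the rescan, you can confirm that the card is visible to the host again. A minimal check, assuming the `02:00` device from the example above:\n\n```bash\n# list the device and the kernel driver currently bound to it\nlspci -nnk -s 02:00\n```\n\n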
### 3. Legacy features\n\nThese are features that are no longer really needed to achieve good latency in a VM.\n\n### 3.1. `cpu_chrt` **no longer needed, outdated**\n\nRunning a virtualized environment always results in fairly random latency\ndue to the amount of other work being done. This is also because the Linux\nhypervisor balances all threads, which has bad effects on `DPC`\nand `ISR` execution times. Latency in a Windows VM can be measured with https://www.resplendence.com/latencymon. Ideally, we want a latency of `< 300us`.\n\nTo improve the latency you can switch to the `FIFO` scheduler.\nThis has catastrophic effects on everything else that is not your VM,\nbut that is likely acceptable for gaming / daily use of passthrough VMs.\n\nConfigure the VM description with:\n\n```text\ncpu_chrt fifo 1\n```\n\n> Note:\n> It seems that if Hyper-V enlightenments are enabled (they are enabled for `ostype: win10`) this is no longer needed.\n> I now have amazing performance without using `cpu_chrt`.\n\n### 3.2. `pci_unbind` and `pci_rescan` **no longer needed, outdated**\n\nJust use `vendor-reset`.\n\nThere are multiple approaches to handling Radeon graphics cards. I found that\nto make it stable:\n\n1. the VGA BIOS needs to be exported, put in `/usr/share/kvm` and passed as the `romfile` of `hostpci*`,\n2. a PCIe unbind/rescan needs to happen.\n\nExporting the BIOS should ideally happen when running \"natively\", i.e. with the graphics card available,\nideally on Windows, with `GPU-Z`. Once the BIOS is exported, you should ensure that it\ncontains a UEFI section: https://pve.proxmox.com/wiki/Pci_passthrough#How_to_known_if_card_is_UEFI_.28ovmf.29_compatible.\nSometimes the BIOS can be found on https://www.techpowerup.com/vgabios/.\nEnsure that you find the exact one for the `vid:pid` of your graphics card.\n\nThis is how my config looks once the BIOS is put in the correct place:\n\n```yaml\ncat /etc/pve/qemu-server/204.conf\n\n## Fix VGA\n#pci_rescan\n#pci_unbind 02 00 0\n#pci_unbind 02 00 1\n...\nhookscript: local:snippets/exec-cmds\n...\nhostpci0: 02:00,pcie=1,romfile=215895.rom,x-vga=1\n...\nmachine: q35\n...\n```\n\nThe comments define the commands to execute to unbind and rebind the graphics card for the VM.\n\nIn cases where there are problems getting the VM up, a `suspend/resume` cycle of Proxmox\nhelps: `systemctl suspend`.\n\n### 4. Suspend/resume\n\nThere's a set of scripts that restart the machines\nwhen the Proxmox VE host goes to sleep.\n\nFirst, you might be interested in triggering `suspend` on the power button.\nEdit `/etc/systemd/logind.conf` to set:\n\n```text\nHandlePowerKey=suspend\n```\n\nThen run `systemctl restart systemd-logind.service` or reboot Proxmox VE.\n\nAfter that, each of your machines should restart alongside a Proxmox VE\nsuspend, which makes a restart work with PCI passthrough devices,\nlike a GPU.\n\n**Ensure that each of your machines supports the Qemu Guest Agent**.\nThis function will not work if you don't have the Qemu Guest Agent installed\nand running.\n\n
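A quick way to check that the agent responds, assuming VMID 204 from the examples above (the command fails when the agent is missing or not running):\n\n```bash\n# ping the guest agent inside the VM; exits non-zero when the agent is unreachable\nqm guest cmd 204 ping && echo \"guest agent OK\"\n```\n\n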
### 5. My setup\n\nHere's a quick rundown of my environment that I currently use\nwith the above quirks.\n\n#### 5.1. Hardware\n\n- i7-8700\n- 48GB DDR4\n- Intel iGPU used by Proxmox VE\n- AMD RX560 2GB used by Linux VM\n- GeForce RTX 2080 Super used by Windows VM\n- Audio is output by both VMs to shared speakers connected to the motherboard audio card\n- Each VM has its own dedicated USB controller\n- Each VM has a dedicated amount of memory using 1G hugepages\n- Each VM does not use SMT; instead it is assigned to thread 0 (Linux) or thread 1 (Windows) of each core, having only 5 vCPUs available per VM\n\n#### 5.2. Kernel config\n\n```text\nGRUB_CMDLINE_LINUX=\"\"\nGRUB_CMDLINE_LINUX=\"$GRUB_CMDLINE_LINUX pci_stub.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9,10de:13c2,10de:0fbb,1002:67ef,1002:aae0\"\nGRUB_CMDLINE_LINUX=\"$GRUB_CMDLINE_LINUX intel_iommu=on kvm_intel.ept=Y kvm_intel.nested=Y i915.enable_hd_vgaarb=1 pcie_acs_override=downstream vfio-pci.disable_idle_d3=1\"\nGRUB_CMDLINE_LINUX=\"$GRUB_CMDLINE_LINUX cgroup_enable=memory swapaccount=1\"\nGRUB_CMDLINE_LINUX=\"$GRUB_CMDLINE_LINUX intel_pstate=disable\"\nGRUB_CMDLINE_LINUX=\"$GRUB_CMDLINE_LINUX hugepagesz=1G hugepages=42\"\n```\n\n#### 5.3. Linux VM\n\nI use Linux for regular daily development work.\n\nMy Proxmox VE config looks like this:\n\n```text\n## CPU PIN\n#cpu_taskset 0-5\n#assign_interrupts 0-5 --all\n#\n## Conflict (207 shares disks, 208 shares VGA)\n#qm_conflict 207\n#qm_conflict 208\nagent: 1\nargs: -audiodev id=alsa,driver=alsa,out.period-length=100000,out.frequency=48000,out.channels=2,out.try-poll=off,out.dev=swapped -soundhw hda\nballoon: 0\nbios: ovmf\nboot: dcn\nbootdisk: scsi0\ncores: 5\ncpu: host\nhookscript: local:snippets/exec-cmds\nhostpci0: 02:00,romfile=215895.rom,x-vga=1\nhostpci1: 04:00\nhugepages: 1024\nide2: none,media=cdrom\nmemory: 32768\nname: ubuntu19-vga\nnet0: virtio=32:13:40:C7:31:4C,bridge=vmbr0\nnuma: 1\nonboot: 1\nostype: l26\nscsi0: nvme-thin:vm-206-disk-1,discard=on,iothread=1,size=200G,ssd=1\nscsi1: ssd:vm-206-disk-0,discard=on,iothread=1,size=100G,ssd=1\nscsi10: ssd:vm-206-disk-1,iothread=1,replicate=0,size=32G,ssd=1\nscsihw: virtio-scsi-pci\nserial0: socket\nsockets: 1\nusb0: host=1050:0406\nvga: none\n```\n\n#### 5.4. Windows VM\n\nI use Windows for gaming. It has a dedicated RTX 2080 Super.\n\n```text\n## CPU PIN\n#cpu_taskset 6-11\n#assign_interrupts 6-11 --all\nagent: 1\nargs: -audiodev id=alsa,driver=alsa,out.period-length=100000,out.frequency=48000,out.channels=2,out.try-poll=off,out.dev=swapped -soundhw hda\nballoon: 0\nbios: ovmf\nboot: dc\nbootdisk: scsi0\ncores: 5\ncpu: host\ncpuunits: 10000\nefidisk0: nvme-thin:vm-204-disk-1,size=4M\nhookscript: local:snippets/exec-cmds\nhostpci0: 01:00,pcie=1,x-vga=1,romfile=Gigabyte.RTX2080Super.8192.190820.rom\nhugepages: 1024\nide2: none,media=cdrom\nmachine: pc-q35-3.1\nmemory: 10240\nname: win10-vga\nnet0: e1000=3E:41:0E:4D:3D:14,bridge=vmbr0\nnuma: 1\nonboot: 1\nostype: win10\nrunningmachine: pc-q35-3.1\nscsi0: ssd:vm-204-disk-2,discard=on,iothread=1,size=64G,ssd=1\nscsi1: ssd:vm-204-disk-0,backup=0,discard=on,iothread=1,replicate=0,size=921604M\nscsi3: nvme-thin:vm-204-disk-0,backup=0,discard=on,iothread=1,replicate=0,size=100G\nscsihw: virtio-scsi-pci\nsockets: 1\nvga: none\n```\n\n#### 5.5. Switching between VMs\n\nTo switch between VMs:\n\n1. Both VMs always run concurrently.\n1. I change the monitor input.\n1. Audio is output by both VMs by default, so there is no need to switch it.\n1. I use Barrier (previously Synergy) most of the time.\n
1. In other cases I use a Logitech multi-device keyboard and mouse,\n   so I switch them on the keyboard.\n1. I also have a physical switch that I use\n   to change lighting and monitor inputs.\n1. I have a monitor with PBP and PIP, so I can watch how Windows\n   is updating while doing development work on Linux.\n\n## Author, License\n\nKamil Trzciński, 2019-2021, MIT\n"
  },
  {
    "path": "VERSION",
    "content": "0.6.0\n"
  },
  {
    "path": "old-helpers/Makefile",
    "content": "export RELEASE_START_SHA ?= $(shell git rev-list -1 HEAD VERSION)\nexport RELEASE ?= $(shell git rev-list $(RELEASE_START_SHA).. --count)\nexport RELEASE_NAME ?= $(shell cat VERSION)-$(RELEASE)\nexport RELEASE_VERSION ?= $(RELEASE_NAME)-g$(shell git rev-parse --short HEAD)\n\nPACKAGE_FILE ?= pve-helpers-$(RELEASE_VERSION)_all.deb\nTARGET_HOST ?= fill-me.home\n\nall: pve-helpers\n\n.PHONY: pve-helpers\npve-helpers: $(PACKAGE_FILE)\n\n$(PACKAGE_FILE):\n\tfpm \\\n\t\t--input-type dir \\\n\t\t--output-type deb \\\n\t\t--name pve-helpers \\\n\t\t--version $(RELEASE_VERSION) \\\n\t\t--package $@ \\\n\t\t--architecture all \\\n\t\t--category admin \\\n\t\t--url https://gitlab.com/ayufan/pve-helpers-build \\\n\t\t--description \"Proxmox VE Helpers\" \\\n\t\t--vendor \"Kamil Trzciński\" \\\n\t\t--maintainer \"Kamil Trzciński <ayufan@ayufan.eu>\" \\\n\t\t--license \"MIT\" \\\n\t\t--deb-priority optional \\\n\t\t--depends inotify-tools \\\n\t\t--depends qemu-server \\\n\t\t--depends expect \\\n\t\t--depends util-linux \\\n\t\t--deb-compression bzip2 \\\n\t\t--deb-systemd scripts/pve-qemu-hooks.service \\\n\t\troot/=/\n\ninstall: pve-helpers\n\tdpkg -i $(PACKAGE_FILE)\n\ndeploy: pve-helpers\n\tscp $(PACKAGE_FILE) $(TARGET_HOST):\n\tssh $(TARGET_HOST) dpkg -i $(PACKAGE_FILE)\n\nclean:\n\trm -f $(PACKAGE_FILE)\n"
  },
  {
    "path": "old-helpers/README.md",
    "content": "# Proxmox VE Qemu Helpers\n\nThis repository is a set of scripts to better handle some of the Proxmox functions:\n\n- automatically suspend/resume on host suspend,\n- allow to use CPU pinning,\n- allow to run actions on VM bootup\n\n## Installation\n\nClone and compile the repository:\n\n```bash\ngit clone https://github.com/ayufan/pve-helpers\ncd pve-helpers\nsudo make install\n```\n\n## Usage\n\n### 1. Enable CPU pinning (`/usr/sbin/pin-vcpus.sh`)\n\nThe CPU pinning is enabled only when you add in notes the `CPUPIN` keyword.\nIt will pin each CPU thread to one physical thread.\nThe pinning will omit the CORE0 as it assumes that you use it\nfor the purpose of the host machine.\n\nFor the best performance you should configure cores specification\nexactly the way as they are on your host machine: matching number of threads per-core.\n\nCurrently, Proxmox VE does not allow you to configure `threads`, so you have to do it manually:\n\n```bash\nqm set VMID -args -smp 10,cores=5,threads=2\n```\n\nThe above assume that you use CPU with SMT, which has two threads per-each core.\nThe CPU pinning method will properly assign each virtual thread to physical thread taking\ninto account CPUs affinity mask as produced by `lscpu -e`.\n\nTo ensure that CPU pinning does work,\nyou can try it from command line as `root` user:\n\n```bash\npin-vcpus.sh VMID\n```\n\n#### 1.1. Using `isolcpus`\n\nThe above option should be used with conjuction to `isolcpus` of kernel.\nThis is a way to disable CPU cores from being used by hypervisor,\nmaking it possible to assign cores exclusively to the VMs only.\n\nFor doing that edit `/etc/default/grub` and add:\n\n```bash\nGRUB_CMDLINE_LINUX=\"$GRUB_CMDLINE_LINUX isolcpus=1-5,7-11\"\nGRUB_CMDLINE_LINUX=\"$GRUB_CMDLINE_LINUX nohz_full=1-5,7-11\"\nGRUB_CMDLINE_LINUX=\"$GRUB_CMDLINE_LINUX rcu_nocbs=1-5,7-11\"\n```\n\nWhere `1-5,7-11` matches a cores that Proxmox VE should not use.\nYou really want to omit everything that is on CORE0.\nThe above specification is valid for latest `i7-8700` CPUs:\n\n```bash\nCPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ    MINMHZ\n0   0    0      0    0:0:0:0       yes    4600.0000 800.0000\n1   0    0      1    1:1:1:0       yes    4600.0000 800.0000\n2   0    0      2    2:2:2:0       yes    4600.0000 800.0000\n3   0    0      3    3:3:3:0       yes    4600.0000 800.0000\n4   0    0      4    4:4:4:0       yes    4600.0000 800.0000\n5   0    0      5    5:5:5:0       yes    4600.0000 800.0000\n6   0    0      0    0:0:0:0       yes    4600.0000 800.0000\n7   0    0      1    1:1:1:0       yes    4600.0000 800.0000\n8   0    0      2    2:2:2:0       yes    4600.0000 800.0000\n9   0    0      3    3:3:3:0       yes    4600.0000 800.0000\n10  0    0      4    4:4:4:0       yes    4600.0000 800.0000\n11  0    0      5    5:5:5:0       yes    4600.0000 800.0000\n```\n\nFor Ryzen CPUs you will rather see CORE0 to be assigned\nto CPU0 and CPU1, thus your specification will look `2-11`.\n\nAfter editing configuration `update-grub` and reboot Proxmox VE.\n\n### 2. 
### 2. Suspend/resume\n\nThere's a set of scripts that suspend the machines\nwhen the Proxmox VE host goes to sleep.\n\nFirst, you might be interested in triggering `suspend` on the power button.\nEdit `/etc/systemd/logind.conf` to set:\n\n```\nHandlePowerKey=suspend\n```\n\nThen run `systemctl restart systemd-logind.service` or reboot Proxmox VE.\n\nAfter that, each of your machines should suspend alongside a Proxmox VE\nsuspend, which makes suspend/resume work with PCI passthrough devices,\nlike a GPU.\n\n**Ensure that each of your machines supports the Qemu Guest Agent**.\nThis function will not work if you don't have the Qemu Guest Agent installed\nand running.\n\n### 3. Run hooks on machine start and stop\n\nYou can add a script `/etc/qemu-server-hooks/VMID.up` that\nwill be executed when the machine starts.\n\nYou can also add a script `/etc/qemu-server-hooks/VMID.down` that\nwill be executed when the machine stops.\n\n## Author, License\n\nKamil Trzciński, 2019, MIT\n"
  },
  {
    "path": "old-helpers/VERSION",
    "content": "0.2.0\n"
  },
  {
    "path": "old-helpers/root/lib/systemd/system-sleep/suspend-resume-all-vms",
    "content": "#!/bin/bash\n\nif [[ \"$1\" == \"pre\" ]]; then\n  /usr/lib/pve-helpers/suspend-all-vms.sh\nelif [[ \"$1\" == \"post\" ]]; then\n  /usr/lib/pve-helpers/resume-all-vms.sh\nelse\n  echo \"invalid: $@\"\n  exit 1\nfi\n"
  },
  {
    "path": "old-helpers/root/usr/lib/pve-helpers/qemu-server-hooks.sh",
    "content": "#!/bin/bash\n\nhooks=/etc/qemu-server-hooks\nwatch=/var/run/qemu-server\n\nmkdir -p \"$hooks\" \"$watch\"\n\npin_vcpus() {\n  /usr/sbin/pin-vcpus.sh \"$@\"\n}\n\nwhile read file; do\n  VMID=$(basename \"$file\" .pid)\n\n  # ignore non-pid matches\n  if [[ \"$file\" == \"$VMID\" ]]; then\n    continue\n  fi\n\n  if [[ -e \"$watch/$file\" ]]; then\n    echo \"$VMID: Did start.\"\n    [[ -f \"$hooks/$VMID.up\" ]] && \"$hooks/$VMID.up\"\n    pin_vcpus \"$VMID\" &\n  else\n    echo \"$VMID: Did stop.\"\n    [[ -f \"$hooks/$VMID.down\" ]] && \"$hooks/$VMID.down\"\n  fi\ndone < <(/usr/bin/inotifywait -mq -e create,delete --format \"%f\" \"$watch\")\n"
  },
  {
    "path": "old-helpers/root/usr/lib/pve-helpers/resume-all-vms.sh",
    "content": "#!/bin/bash\n\nresume_vm() {\n\tlocal VMID=\"$1\"\n\n\tlocal VMSTATUS=$(qm status \"$VMID\")\n\tlocal VMCONFIG=$(qm config \"$VMID\")\n\n\t# We need to reset only when hostpci.*:\n\tif grep -q ^hostpci <(echo \"$VMCONFIG\"); then\n\t\tif [[ \"$VMSTATUS\" == \"status: running\" ]]; then\n\t\t\techo \"$VMID: Resetting as it has 'hostpci*:' devices...\"\n\t\t\tqm reset \"$VMID\"\n\t\t\treturn 1\n\t\tfi\n\tfi\n\n\tif [[ ! -e \"/var/run/qemu-server/$VMID.suspended\" ]]; then\n\t\techo \"$VMID: Nothing to due, due to missing: $VMID.suspended.\"\n\t\treturn 0\n\tfi\n\n\trm -f \"/var/run/qemu-server/$VMID.suspended\"\n\n\tif [[ \"$VMSTATUS\" == \"status: stopped\" ]]; then\n\t\techo \"$VMID: Starting (stopped)...\"\n\t\tqm start \"$VMID\"\n\tfi\n\n\techo \"$VMID: Resuming...\"\n\tqm resume \"$VMID\"\n\n\tfor i in $(seq 1 30); do\n\t\tVMSTATUS=$(qm status \"$VMID\")\n\t\tif [[ \"$VMSTATUS\" == \"status: running\" ]]; then\n\t\t\techo \"$VMID: Resumed.\"\n\t\t\treturn 0\n\t\tfi\n\n\t\techo \"$VMID: Waiting for resume: $VMSTATUS...\"\n\t\tsleep 1s\n\tdone\n\n\techo \"$VMID: Failed to resume: $VMSTATUS.\"\n\tqm reset \"$VMID\"\n\treturn 1\n}\n\nfor i in /etc/pve/nodes/$(hostname)/qemu-server/*.conf; do\n\tVMID=$(basename \"$i\" .conf)\n\tresume_vm \"$VMID\" &\ndone\n\nwait\n"
  },
  {
    "path": "old-helpers/root/usr/lib/pve-helpers/suspend-all-vms.sh",
    "content": "#!/bin/bash\n\nsuspend_vm_action() {\n\tlocal VMID=\"$1\"\n\tlocal ACTION=\"$2\"\n\n\tif ! qm guest cmd \"$VMID\" ping; then\n\t\treturn 1\n\tfi\n\n\techo \"$VMID: Suspending ($ACTION)...\"\n\tqm guest cmd \"$VMID\" \"$ACTION\"\n\n\tfor i in $(seq 1 30); do\n\t\tlocal VMSTATUS=$(qm status \"$VMID\")\n\t\tif [[ \"$VMSTATUS\" == \"status: suspended\" ]] || [[ \"$VMSTATUS\" == \"status: stopped\" ]]; then\n\t\t\techo \"$VMID: Suspended.\"\n\t\t\ttouch \"/var/run/qemu-server/$VMID.suspended\"\n\t\t\treturn 0\n\t\tfi\n\n\t\techo \"$VMID: Waiting for suspend: $VMSTATUS...\"\n\t\tsleep 1s\n\tdone\n\n\techo \"$VMID: Failed to suspend: $VMSTATUS.\"\n\treturn 1\n}\n\nsuspend_vm() {\n\tlocal VMID=\"$1\"\n\n\tlocal VMSTATUS=$(qm status \"$VMID\")\n\tlocal VMCONFIG=$(qm config \"$VMID\")\n\n\tif [[ \"$VMSTATUS\" != \"status: running\" ]]; then\n\t\techo \"$VMID: Nothing to due, due to: $VMSTATUS.\"\n\t\treturn 0\n\tfi\n\n\tif ! grep -q ^hostpci <(echo \"$VMCONFIG\"); then\n\t\techo \"$VMID: VM does not use PCI-passthrough\"\n\t\treturn 0\n\tfi\n\n\t# if suspend_vm_action \"$VMID\" suspend-disk; then\n\t# \treturn 0\n\t# fi\n\n\t# echo \"$VMID: VM does not support suspend-disk via Guest Agent, using shutdown.\"\n\n\tif qm shutdown \"$VMID\"; then\n\t\ttouch \"/var/run/qemu-server/$VMID.suspended\"\n\t\treturn 0\n\tfi\n\n\techo \"$VMID: Failed to suspend or shutdown.\"\n\treturn 1\n}\n\nfor i in /etc/pve/nodes/$(hostname)/qemu-server/*.conf; do\n\tVMID=$(basename \"$i\" .conf)\n\tsuspend_vm \"$VMID\" &\ndone\n\nwait\n"
  },
  {
    "path": "old-helpers/root/usr/sbin/pin-vcpus.sh",
    "content": "#!/bin/bash\n\nset -eo pipefail\n\nif [[ $# -ne 1 ]]; then\n\techo \"Usage: $0 <VMID>\"\n\texit 1\nfi\n\nVMID=\"$1\"\n\nif ! VMCONFIG=$(qm config \"$VMID\"); then\n\techo \"$VMID: Does not exist.\"\n\texit 1\nfi\n\nif ! grep -q CPUPIN <(echo \"$VMCONFIG\"); then\n\techo \"$VMID: Does not have CPUPIN defined.\"\n\texit 1\nfi\n\nvm_cpu_tasks() {\n\texpect <<EOF | sed -n 's/^.* CPU .*thread_id=\\(.*\\)$/\\1/p' | tr -d '\\r' || true\nspawn qm monitor $VMID\nexpect \">\"\nsend \"info cpus\\r\"\nexpect \">\"\nEOF\n}\n\n# this functions returns a list of CPU cores\n# in order as they have HT threads\n# mapping Intel cpus to Qemu emulated cpus\ncores() {\n\t# tail -n+2: ignore header\n\t# sort -n -k4: sort by core-index vs threads\n\t# ignore core-0: assuming that it is assigned to host with isolcpus\n\twhile read CPU NODE SOCKET CORE REST; do\n\t\tif [[ \"$CORE\" == \"0\" ]]; then\n\t\t\t# We assume that $CORE is assigned to host (always)\n\t\t\tcontinue\n\t\tfi\n\n\t\techo \"$CPU\"\n\tdone < <(lscpu -e | tail -n+2 | sort -n -k4)\n}\n\necho \"$VMID: Checking...\"\n\nfor i in $(seq 1 10); do\n\tVMSTATUS=$(qm status $VMID)\n\tif [[ \"$VMSTATUS\" != \"status: running\" ]]; then\n\t\techo \"$VMID: VM is not running: $VMSTATUS\"\n\t\texit 1\n\tfi\n\n\tVCPUS=($(vm_cpu_tasks))\n\tVCPU_COUNT=\"${#VCPUS[@]}\"\n\n\tif [[ $VCPU_COUNT -gt 0 ]]; then\n\t\tbreak\n\tfi\n\n\techo \"* No VCPUS for $VMID\"\n\tsleep 3s\ndone\n\nif [[ $VCPU_COUNT -eq 0 ]]; then\n\texit 1\nfi\n\necho \"$VMID: Detected VCPU ${#VCPUS[@]} threads...\"\n\nfor CPU_INDEX in \"${!VCPUS[@]}\"; do\n\tCPU_TASK=\"${VCPUS[$CPU_INDEX]}\"\n\tif read CPU_INDEX; then\n\t\techo \"$VMID: Assigning $CPU_INDEX to $CPU_TASK...\"\n\t\ttaskset -pc \"$CPU_INDEX\" \"$CPU_TASK\"\n\telse\n\t\techo \"$VMID: No CPU to assign to $CPU_TASK\"\n\tfi\ndone < <(cores)\n"
  },
  {
    "path": "old-helpers/scripts/pve-qemu-hooks.service",
    "content": "[Unit]\nDescription = PVE Qemu Server Hooks\n\n[Service]\nType = simple\nExecStart = /usr/lib/pve-helpers/qemu-server-hooks.sh\n\n[Install]\nWantedBy = multi-user.target\n"
  },
  {
    "path": "root/etc/systemd/system/pve-guests.service.d/manual-start.conf",
    "content": "[Unit]\nRefuseManualStart=false\nRefuseManualStop=false\n\n"
  },
  {
    "path": "root/lib/systemd/system-sleep/restart-vms",
    "content": "#!/bin/bash\n\nif [[ \"$1\" == \"pre\" ]]; then\n  /bin/systemctl stop pve-guests.service\nelif [[ \"$1\" == \"post\" ]]; then\n  /bin/systemctl start pve-guests.service\nelse\n  echo \"invalid: $@\"\n  exit 1\nfi\n"
  },
  {
    "path": "root/var/lib/vz/snippets/exec-cmds",
    "content": "#!/bin/bash\n\nVMID=\"$1\"\nACTION=\"$2\"\nSLEPT=\"\"\n\nvmpid() {\n  cat \"/var/run/qemu-server/$VMID.pid\"\n}\n\nif_action() {\n  if [[ \"$ACTION\" == \"$1\" ]]; then\n    shift\n    eval \"$@\"\n  fi\n}\n\nsleep_once() {\n  if [[ -z \"$SLEPT\" ]]; then\n    sleep 1s\n    SLEPT=1\n  fi\n}\n\nhostpci_ids() {\n  grep '^hostpci[0-9]:.*0000' \"/etc/pve/qemu-server/$VMID.conf\" | awk '{print $2}' | awk -F, '{print $1}'\n}\n\nexec_pci_rescan() {\n  echo \"Running PCI rescan for $VMID...\"\n  echo 1 > /sys/bus/pci/rescan\n}\n\nexec_set_haltpoll() {\n  echo \"Setting haltpoll for $VMID...\"\n  echo $1 > /sys/module/kvm/parameters/halt_poll_ns\n}\n\nexec_assign_interrupts() {\n  local SLEEP=\"30s\"\n  if [[ $1 == --sleep=* ]]; then\n    SLEEP=\"${1#--sleep=}\"\n    shift\n  fi\n\n  echo \"Wating $SLEEP seconds for all vfio-gpu interrupts to show up...\"\n  sleep \"$SLEEP\"\n\n  MASK=\"$1\"\n  shift\n\n  if [[ \"$1\" == \"--all\" ]]; then\n    set -- $(hostpci_ids)\n  fi\n\n  for interrupt; do\n    interrupt=$(printf '%b' \"${interrupt//%/\\\\x}\")\n    echo \"Moving $interrupt interrupts to $MASK cpu cores $VMID...\"\n    grep \"$interrupt\" /proc/interrupts | cut -d \":\" -f 1 | while read -r i; do\n      echo \"- IRQ: $(grep \"^\\s*$i:\" /proc/interrupts)\"\n      echo \"$MASK\" > /proc/irq/$i/smp_affinity_list\n    done\n  done\n}\n\nexec_pci_unbind() {\n  if [[ \"$1\" == \"--all\" ]]; then\n    set -- $(hostpci_ids)\n  else\n    set -- \"0000:$1:$2.$3\"\n  fi\n\n  for devid; do\n    if [[ -e \"/sys/bus/pci/devices/$devid\" ]]; then\n      echo \"Running PCI unbind of '$devid' for $VMID...\"\n      echo 1 > \"/sys/bus/pci/devices/$devid/remove\"\n    elif [[ -e \"/sys/bus/pci/devices/$devid.0\" ]]; then\n      echo \"Running PCI unbind of '$devid.0' for $VMID...\"\n      echo 1 > \"/sys/bus/pci/devices/$devid.0/remove\"\n    else\n      echo \"The '$devid' not found in '/sys/bus/pci/devices'\"\n    fi\n  done\n}\n\nexec_cpu_taskset() {\n  sleep_once\n\n  echo \"Running taskset with $1 for $(vmpid)...\"\n  taskset -a -p -c \"$1\" \"$(vmpid)\"\n  echo \"\"\n}\n\nexec_cpu_chrt() {\n  sleep_once\n\n  echo \"Running chrt with $1:$2 for $(vmpid)...\"\n  chrt -v \"--$1\" -a -p \"$2\" \"$(vmpid)\"\n  echo \"\"\n}\n\nexec_qm_conflict() {\n  echo \"Conflicting with other VM$1, shutdown just in case...\"\n  qm shutdown \"$1\"\n}\n\nexec_qm_depends() {\n  echo \"VM$1 is required, ensure that it is started...\"\n  qm start \"$1\"\n}\n\nexec_cmds() {\n  while read CMD ARG1 ARG2 ARG3 REST; do\n    case \"$CMD\" in\n      \"#pci_rescan\")\n        if_action pre-start exec_pci_rescan\n        ;;\n\n      \"#cpu_taskset\")\n        if_action post-start exec_cpu_taskset \"$ARG1\"\n        ;;\n\n      \"#set_halt_poll\")\n        if_action post-start exec_set_haltpoll \"$ARG1\"\n        ;;\n\n      \"#assign_interrupts\")\n        if_action post-start exec_assign_interrupts \"$ARG1\" \"$ARG2\" \"$ARG3\" $REST\n        ;;\n\n      \"#cpu_chrt\")\n        if_action post-start exec_cpu_chrt \"${ARG1:-fifo}\" \"${ARG2:-1}\"\n        ;;\n\n      \"#qm_depends\")\n        if_action post-start exec_qm_depends \"$ARG1\"\n        ;;\n\n      \"#pci_unbind\")\n        if_action post-stop exec_pci_unbind \"$ARG1\" \"$ARG2\" \"$ARG3\"\n        ;;\n\n      \"#pci_unbind_all\")\n        if_action post-stop exec_pci_unbind_all\n        ;;\n\n      \"#pci_rebind\")\n        if_action post-stop exec_pci_rescan\n        ;;\n\n      \"#qm_conflict\")\n        if_action pre-start exec_qm_conflict \"$ARG1\"\n        
\n        ;;\n\n      \"#qm_\"*|\"#cpu_\"*|\"#pci_\"*|\"#set_\"*|\"#assign_\"*)\n        echo \"exec-cmds: command is unknown '$CMD'\"\n        ;;\n    esac\n  done\n}\n\necho \"Running exec-cmds for $VMID on $ACTION...\"\n\nexec_cmds < \"/etc/pve/qemu-server/$VMID.conf\"\n\nexit 0\n"
  }
]