This tutorial shows how to create a KVM virtual machine for gaming, with performance close to native.
Table Of Contents
- Table Of Contents
- Thanks to
- Enable & Verify IOMMU
- Configuring Libvirt
- Setup Guest OS
- Install Windows
- Attaching devices
- Libvirt Hook Helper
- Config Libvirt Hooks
- Start/Stop Libvirt Hooks
- Video card driver virtualisation detection
- vBIOS Patching
- CPU Pinning
- Hyper-V Enlightenments
- Disk Tuning
- Hugepages
- CPU Governor
- Update drivers on Windows
- Enable Hyper-V on Windows
Thanks to
Arch wiki
The best resource to learn how GPU passthrough works.
bryansteiner
The best tutorial on GPU passthrough!
QaidVoid
The best tutorial to use VFIO!
joeknock90
Really good tutorial on the NVIDIA GPU patch.
SomeOrdinaryGamers
Brought me into the VFIO community.
Zeptic
How to get good performance in nested virtualization.
Quentin Franchi
The scripts for AMD GPUs.
Enable & Verify IOMMU
Ensure that AMD-Vi or Intel VT-d is supported by the CPU and enabled in the BIOS settings.
Enable IOMMU support by setting the kernel parameter depending on your CPU.
`/etc/default/grub`:

```
# AMD CPU
GRUB_CMDLINE_LINUX_DEFAULT="... amd_iommu=on iommu=pt ..."
# OR, Intel CPU
GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on iommu=pt ..."
```
Generate grub.cfg:

```sh
grub-mkconfig -o /boot/grub/grub.cfg
```
After rebooting, check that the groups are valid.
```sh
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
```
Example output:

```
IOMMU Group 2:
	00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
	00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
	09:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2070 SUPER] [10de:1e84] (rev a1)
	09:00.1 Audio device [0403]: NVIDIA Corporation TU104 HD Audio Controller [10de:10f8] (rev a1)
	09:00.2 USB controller [0c03]: NVIDIA Corporation TU104 USB 3.1 Host Controller [10de:1ad8] (rev a1)
	09:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller [10de:1ad9] (rev a1)
```
If your card is not in an isolated group, you need to apply the ACS override patch.
Configuring Libvirt
Install the packages:

```sh
pacman -S --needed qemu libvirt edk2-ovmf virt-manager dnsmasq ebtables
```

For Windows 11, also install the swtpm package:

```sh
pacman -S --needed swtpm
```
Update the libvirt permissions so the virtual machine can run as a non-root user.

`/etc/libvirt/libvirtd.conf`:

```
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
```
Add your user to the libvirt group and enable the service:

```sh
sudo usermod -a -G libvirt $(whoami)
sudo systemctl enable --now libvirtd
```

Autostart the virsh internal network:

```sh
sudo virsh net-autostart default
```

If you prefer, you can start the virsh internal network manually instead:

```sh
sudo virsh net-start default
```
Setup Guest OS
Download the virtio driver ISO.
Create your storage volume with the raw format, and select "Customize configuration before install" on the final step.
In Overview:
- set Chipset to Q35
- for Windows 10, set Firmware to UEFI
- for Windows 11, set Firmware to UEFI secboot

In CPUs:
- set CPU model to host-passthrough
- set CPU Topology to match your CPU topology, minus one core

In SATA:
- set Disk Bus to virtio

In NIC:
- set Device Model to virtio

In Add Hardware:
- select CDROM and point it to /path/to/virtio-driver.iso
- for Windows 11, add a TPM and set Version to 2.0
Install Windows
Windows can't detect the virtio disk, so you need to select Load Driver and choose virtio-iso/amd64/win10 when prompted.
Windows won't be able to connect to the internet yet; we will enable networking later in this tutorial.
Attaching devices
Attach the devices you want to pass through.

In Add PCI Host Device:
- the PCI host devices of your GPU
- the PCI host device of your soundcard

In Add USB Host Device:
- your keyboard & mouse

Remove:
- Display spice
- Channel spice
- Video QXL
- Sound ich*
Libvirt Hook Helper
Libvirt hooks automate the process of running specific tasks during VM state change.
More documentation on The Passthrough Post website.
Create Libvirt Hook Helper
```sh
sudo mkdir /etc/libvirt/hooks
sudo vim /etc/libvirt/hooks/qemu
sudo chmod +x /etc/libvirt/hooks/qemu
```
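The `/etc/libvirt/hooks/qemu` helper is a small dispatcher. A minimal sketch, assuming the `qemu.d/<vm>/<event>/<sub-event>/` layout used by the scripts below (the full version is on The Passthrough Post):

```shell
#!/bin/bash
# /etc/libvirt/hooks/qemu (sketch)
# libvirt calls this hook as: qemu <vm_name> <event> <sub_event> ...
# Run every executable found under qemu.d/<vm_name>/<event>/<sub_event>/.

run_hooks() {
    # $1 = directory to scan; the remaining args are forwarded to each hook script
    local hookpath="$1"
    shift
    [ -d "$hookpath" ] || return 0
    local file
    for file in "$hookpath"/*; do
        [ -f "$file" ] && [ -x "$file" ] && "$file" "$@"
    done
}

basedir="$(dirname -- "$0")"
run_hooks "$basedir/qemu.d/$1/$2/$3" "$@"
```

Hook scripts then live under `/etc/libvirt/hooks/qemu.d/VM_NAME/prepare/begin/` and `/etc/libvirt/hooks/qemu.d/VM_NAME/release/end/`, as created below.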
Config Libvirt Hooks
This configuration file allows you to create variables that can be read by the scripts below.
Make sure to substitute the correct bus addresses for the devices you'd like to pass through to your VM. Just in case it's still unclear, you get the virsh PCI device IDs from the Enable & Verify IOMMU script. Translate the address of each device as follows: IOMMU Group 1 01:00.0 ... --> VIRSH_...=pci_0000_01_00_0.
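As an illustration, a `/etc/libvirt/hooks/kvm.conf` sketch using the bus addresses from the example IOMMU output above (the variable names are illustrative; your addresses will differ):

```sh
## Virsh devices (addresses taken from the example IOMMU group above)
VIRSH_GPU_VIDEO=pci_0000_09_00_0
VIRSH_GPU_AUDIO=pci_0000_09_00_1
VIRSH_GPU_USB=pci_0000_09_00_2
VIRSH_GPU_SERIAL=pci_0000_09_00_3
```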
Start/Stop Libvirt Hooks
Set the KVM_NAME variable so you can execute the rest of the commands without substituting your VM's name each time:

```sh
KVM_NAME="YOUR_VM_NAME"
```
If the scripts don't work for you, use them as templates and write your own.
Choose the Start/Stop scripts that most closely match your hardware.
My hardware:
- AMD Ryzen 7 3700X
- NVIDIA GeForce RTX 2070 SUPER
Create Start Script
```sh
mkdir -p /etc/libvirt/hooks/qemu.d/${KVM_NAME}/prepare/begin
vim /etc/libvirt/hooks/qemu.d/${KVM_NAME}/prepare/begin/start.sh
chmod +x /etc/libvirt/hooks/qemu.d/${KVM_NAME}/prepare/begin/start.sh
```
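A sketch of a start script for an NVIDIA setup like this one. The VIRSH_GPU_* variables come from kvm.conf; the display-manager unit name, the framebuffer unbind, and the sleep duration are assumptions to adapt to your system:

```sh
#!/bin/bash
# start.sh (sketch) -- prepare the host and hand the GPU to the VM
set -x
source "/etc/libvirt/hooks/kvm.conf"

systemctl stop display-manager.service         # stop the display manager
echo 0 > /sys/class/vtconsole/vtcon0/bind      # unbind the virtual consoles
echo 0 > /sys/class/vtconsole/vtcon1/bind
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
sleep 5                                        # give the desktop time to release the GPU
modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia   # unload the NVIDIA modules
virsh nodedev-detach "$VIRSH_GPU_VIDEO"        # detach the GPU from the host
virsh nodedev-detach "$VIRSH_GPU_AUDIO"
virsh nodedev-detach "$VIRSH_GPU_USB"
virsh nodedev-detach "$VIRSH_GPU_SERIAL"
modprobe vfio_pci                              # load the VFIO driver
```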
Create Stop Script
```sh
mkdir -p /etc/libvirt/hooks/qemu.d/${KVM_NAME}/release/end
vim /etc/libvirt/hooks/qemu.d/${KVM_NAME}/release/end/stop.sh
chmod +x /etc/libvirt/hooks/qemu.d/${KVM_NAME}/release/end/stop.sh
```
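A matching stop-script sketch that undoes the start script, under the same assumptions:

```sh
#!/bin/bash
# stop.sh (sketch) -- give the GPU back to the host
set -x
source "/etc/libvirt/hooks/kvm.conf"

virsh nodedev-reattach "$VIRSH_GPU_VIDEO"      # reattach the GPU to the host
virsh nodedev-reattach "$VIRSH_GPU_AUDIO"
virsh nodedev-reattach "$VIRSH_GPU_USB"
virsh nodedev-reattach "$VIRSH_GPU_SERIAL"
modprobe -r vfio_pci                           # unload the VFIO driver
modprobe nvidia nvidia_modeset nvidia_uvm nvidia_drm   # reload the NVIDIA modules
echo 1 > /sys/class/vtconsole/vtcon0/bind      # rebind the virtual consoles
echo 1 > /sys/class/vtconsole/vtcon1/bind
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/bind
systemctl start display-manager.service        # restart the display manager
```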
Quentin's hardware:
- AMD Ryzen 5 2600
- Radeon RX 590 Series
Create Start Script
```sh
mkdir -p /etc/libvirt/hooks/qemu.d/${KVM_NAME}/prepare/begin
vim /etc/libvirt/hooks/qemu.d/${KVM_NAME}/prepare/begin/start.sh
chmod +x /etc/libvirt/hooks/qemu.d/${KVM_NAME}/prepare/begin/start.sh
```
Create Stop Script
```sh
mkdir -p /etc/libvirt/hooks/qemu.d/${KVM_NAME}/release/end
vim /etc/libvirt/hooks/qemu.d/${KVM_NAME}/release/end/stop.sh
chmod +x /etc/libvirt/hooks/qemu.d/${KVM_NAME}/release/end/stop.sh
```
Video card driver virtualisation detection
Video card drivers refuse to run inside a virtual machine, so you need to spoof the Hyper-V vendor ID.
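A sketch of the corresponding fragment in the domain XML (the vendor_id value is arbitrary, up to 12 characters):

```xml
<features>
  <hyperv>
    <vendor_id state="on" value="randomid"/>
  </hyperv>
</features>
```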
NVIDIA guest drivers also require hiding the KVM CPU leaf:
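A sketch of the corresponding fragment:

```xml
<features>
  <kvm>
    <hidden state="on"/>
  </kvm>
</features>
```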
vBIOS Patching
How to patch an NVIDIA vBIOS (only NVIDIA GPUs need to be patched):
To get a ROM for your GPU, either download one from here or use nvflash to dump the vBIOS currently on your GPU.
Open the dumped/downloaded vBIOS in a hex editor and search for the string "VIDEO". Then find the first "U" in front of "VIDEO", delete everything above that "U", and save your patched vBIOS.
To add the patched ROM, add a rom element inside the hostdev of the VGA PCI device only:
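A sketch, using the VGA address from the example IOMMU output above and an illustrative ROM path:

```xml
<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <!-- 09:00.0 is the VGA function from the example IOMMU output above -->
    <address domain="0x0000" bus="0x09" slot="0x00" function="0x0"/>
  </source>
  <rom file="/path/to/patched.rom"/>
</hostdev>
```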
CPU Pinning
My setup is an AMD Ryzen 7 3700X which has 8 physical cores and 16 threads (2 threads per core).
How to bind the threads to the cores:
It's very important that when we pass through a core, we include its sibling. To get a sense of your CPU topology, use the command lscpu -e. A matching core id (the "CORE" column) means the associated threads (the "CPU" column) run on the same physical core.
```
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE    MAXMHZ    MINMHZ
  0    0      0    0 0:0:0:0          yes 4823.4370 2200.0000
  1    0      0    1 1:1:1:0          yes 4559.7651 2200.0000
  2    0      0    2 2:2:2:0          yes 4689.8428 2200.0000
  3    0      0    3 3:3:3:0          yes 4426.1709 2200.0000
  4    0      0    4 4:4:4:1          yes 5224.2178 2200.0000
  5    0      0    5 5:5:5:1          yes 5090.6250 2200.0000
  6    0      0    6 6:6:6:1          yes 5224.2178 2200.0000
  7    0      0    7 7:7:7:1          yes 4957.0308 2200.0000
  8    0      0    0 0:0:0:0          yes 4823.4370 2200.0000
  9    0      0    1 1:1:1:0          yes 4559.7651 2200.0000
 10    0      0    2 2:2:2:0          yes 4689.8428 2200.0000
 11    0      0    3 3:3:3:0          yes 4426.1709 2200.0000
 12    0      0    4 4:4:4:1          yes 5224.2178 2200.0000
 13    0      0    5 5:5:5:1          yes 5090.6250 2200.0000
 14    0      0    6 6:6:6:1          yes 5224.2178 2200.0000
 15    0      0    7 7:7:7:1          yes 4957.0308 2200.0000
```
Following the logic above, here are my cores and their thread bindings:
Core 1: 0, 8
Core 2: 1, 9
Core 3: 2, 10
Core 4: 3, 11
Core 5: 4, 12
Core 6: 5, 13
Core 7: 6, 14
Core 8: 7, 15
In this example, I want 1 core for the host and 7 cores for the guest. I leave core 1 to the host, so logical threads 0 and 8 stay on it.
Here is the final result; everything is explained below.
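A sketch of the resulting pinning for this example (the host keeps threads 0 and 8; the guest gets the remaining 14 threads, pinned pairwise to physical cores):

```xml
<vcpu placement="static">14</vcpu>
<iothreads>1</iothreads>
<cputune>
  <vcpupin vcpu="0" cpuset="1"/>
  <vcpupin vcpu="1" cpuset="9"/>
  <vcpupin vcpu="2" cpuset="2"/>
  <vcpupin vcpu="3" cpuset="10"/>
  <vcpupin vcpu="4" cpuset="3"/>
  <vcpupin vcpu="5" cpuset="11"/>
  <vcpupin vcpu="6" cpuset="4"/>
  <vcpupin vcpu="7" cpuset="12"/>
  <vcpupin vcpu="8" cpuset="5"/>
  <vcpupin vcpu="9" cpuset="13"/>
  <vcpupin vcpu="10" cpuset="6"/>
  <vcpupin vcpu="11" cpuset="14"/>
  <vcpupin vcpu="12" cpuset="7"/>
  <vcpupin vcpu="13" cpuset="15"/>
  <emulatorpin cpuset="0,8"/>
  <iothreadpin iothread="1" cpuset="0,8"/>
</cputune>
```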
Explanations of CPU pinning:
- vcpu: the number of threads to pass through.
- iothreads: the same number as the iothreadpin below.
- emulatorpin and iothreadpin: cpuset corresponds to the bindings of your host core.
- vcpupin: vcpu corresponds to the guest cores, incrementing by 1 starting at 0; cpuset corresponds to the host threads you want to pass through. Each core and its sibling thread must follow each other.

You also need to match your CPU passthrough topology (update cores and threads if needed).
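A sketch of a matching topology for the example above (1 socket, 7 cores, 2 threads per core):

```xml
<cpu mode="host-passthrough" check="none">
  <topology sockets="1" dies="1" cores="7" threads="2"/>
</cpu>
```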
Hyper-V Enlightenments
Hyper-V enlightenments help the guest VM handle virtualization tasks.
More documentation on fossies.org for qemu enlightenments.
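As a sketch, a commonly enabled set of enlightenments in the domain XML (the exact set varies by setup):

```xml
<features>
  <hyperv>
    <relaxed state="on"/>
    <vapic state="on"/>
    <spinlocks state="on" retries="8191"/>
    <vpindex state="on"/>
    <synic state="on"/>
    <stimer state="on"/>
    <frequencies state="on"/>
  </hyperv>
</features>
```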
There is an alternative enlightenments configuration, but I do not use it because I experienced mouse latency in games.
Disk Tuning
For more explanation of virtio-scsi, check bryansteiner's tutorial.
Make sure you have iothreads in your XML.
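A sketch of the relevant XML, assuming a raw image attached to a virtio-scsi controller served by one iothread (the image path and queue count are illustrative):

```xml
<iothreads>1</iothreads>
<controller type="scsi" index="0" model="virtio-scsi">
  <driver iothread="1" queues="8"/>
</controller>
<disk type="file" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native" discard="unmap"/>
  <source file="/var/lib/libvirt/images/win10.img"/>
  <target dev="sda" bus="scsi"/>
</disk>
```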
Hugepages
Memory (RAM) is divided up into basic segments called pages. By default, the x86 architecture has a page size of 4KB. CPUs utilize pages within the built in memory management unit (MMU). Although the standard page size is suitable for many tasks, hugepages are a mechanism that allow the Linux kernel to take advantage of large amounts of memory with reduced overhead. Hugepages can vary in size anywhere from 2MB to 1GB.
Many tutorials will have you reserve hugepages for your guest VM at host boot-time. There's a significant downside to this approach: a portion of RAM will be unavailable to your host even when the VM is inactive. In bryansteiner's setup, hugepages are allocated right before the VM starts and deallocated on VM shutdown.
Update your KVM config: add a VM_MEMORY variable to `/etc/libvirt/hooks/kvm.conf`. VM_MEMORY, in MiB, is the memory allocated to the guest.
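For example, for a hypothetical 16 GiB guest:

```sh
## /etc/libvirt/hooks/kvm.conf
VM_MEMORY=16384   # 16 GiB = 16384 MiB
```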
Create Alloc Hugepages Script
```sh
vim /etc/libvirt/hooks/qemu.d/${KVM_NAME}/prepare/begin/alloc_hugepages.sh
chmod +x /etc/libvirt/hooks/qemu.d/${KVM_NAME}/prepare/begin/alloc_hugepages.sh
```
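A sketch of an allocation script, assuming VM_MEMORY (in MiB) from kvm.conf and the default 2 MiB hugepage size:

```sh
#!/bin/bash
# alloc_hugepages.sh (sketch) -- reserve 2 MiB hugepages for the guest
source "/etc/libvirt/hooks/kvm.conf"

HUGEPAGES=$((VM_MEMORY / 2))               # VM_MEMORY in MiB / 2 MiB per hugepage
echo "Allocating $HUGEPAGES hugepages..."
sysctl -w vm.nr_hugepages="$HUGEPAGES"
ALLOC=$(grep HugePages_Total /proc/meminfo | awk '{print $2}')

# Host memory may be fragmented; compact and retry until the request is satisfied.
TRIES=0
while [ "$ALLOC" -ne "$HUGEPAGES" ] && [ "$TRIES" -lt 1000 ]; do
    echo 1 > /proc/sys/vm/compact_memory
    sysctl -w vm.nr_hugepages="$HUGEPAGES"
    ALLOC=$(grep HugePages_Total /proc/meminfo | awk '{print $2}')
    TRIES=$((TRIES + 1))
done

if [ "$ALLOC" -ne "$HUGEPAGES" ]; then
    echo "Not enough free memory to allocate $HUGEPAGES hugepages" >&2
    exit 1
fi
```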
Create Dealloc Hugepages Script
```sh
vim /etc/libvirt/hooks/qemu.d/${KVM_NAME}/release/end/dealloc_hugepages.sh
chmod +x /etc/libvirt/hooks/qemu.d/${KVM_NAME}/release/end/dealloc_hugepages.sh
```
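The deallocation side is a one-liner sketch that returns the reserved pages to the host:

```sh
#!/bin/bash
# dealloc_hugepages.sh (sketch) -- give the hugepages back to the host
sysctl -w vm.nr_hugepages=0
```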
The memory in your VM XML needs to match VM_MEMORY from your config (to convert KiB to MiB, divide by 1024).
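In the VM XML, enable hugepages under memoryBacking. A sketch for a hypothetical 16 GiB guest (16384 MiB x 1024 = 16777216 KiB):

```xml
<memory unit="KiB">16777216</memory>
<currentMemory unit="KiB">16777216</currentMemory>
<memoryBacking>
  <hugepages/>
</memoryBacking>
```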
CPU Governor
This performance tweak takes advantage of the CPU frequency scaling governor in Linux.
Create CPU Performance Script
```sh
vim /etc/libvirt/hooks/qemu.d/${KVM_NAME}/prepare/begin/cpu_mode_performance.sh
chmod +x /etc/libvirt/hooks/qemu.d/${KVM_NAME}/prepare/begin/cpu_mode_performance.sh
```
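A minimal sketch of the performance script; the stop-side cpu_mode_ondemand.sh is the same with ondemand written in place of performance:

```sh
#!/bin/bash
# cpu_mode_performance.sh (sketch) -- switch every core to the performance governor
for governor in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$governor"
done
```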
Create CPU Ondemand Script
```sh
vim /etc/libvirt/hooks/qemu.d/${KVM_NAME}/release/end/cpu_mode_ondemand.sh
chmod +x /etc/libvirt/hooks/qemu.d/${KVM_NAME}/release/end/cpu_mode_ondemand.sh
```
Update drivers on Windows
To get the network working properly, you need to install the drivers.
In Device Manager, update the drivers using the local virtio ISO at /path/to/virtio-driver.
Enable Hyper-V on Windows
Enable Hyper-V using PowerShell:

```powershell
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```

Or enable Hyper-V through Settings: search for "Turn Windows features on or off", select Hyper-V, and click OK.