Virtual machines with PCI passthrough on Ubuntu 20.04

Preamble

The direct way to PCI passthrough virtual machines on Ubuntu 20.04 LTS. I try to limit changes of the host operating system to a minimum, but provide enough details that even Linux rookies are able to participate.

The final system will run Xubuntu 20.04 as host operating system (OS), and Windows 10 2004 as guest OS. Gaming is the primary use-case of the guest system.

Unfortunately, the setup process can be pretty complex. It consists of fixed base settings, some variable settings and several optional (mostly performance) settings. In order to sustain readability of this post, and because I aim to use the virtual machine for gaming only, I minimized the variable parts for latency optimization. The variable topics themselves are linked in separate articles – I hope this makes sense. 🙂

About this guide

This guide targets Ubuntu 20.04 and is based on my former guides for Ubuntu 18.04 and 16.04 host systems.

However, this guide should also be applicable to Pop!_OS 19.04 and newer. If you wish to continue with Pop!_OS as host system you can do so, just look out for my colorful Pop!_OS labels.

Breaking changes for older passthrough setups in Ubuntu 20.04

Kernel version

Starting with kernel version 5.4, the "vfio-pci" driver is no longer a kernel module, but built into the kernel. We have to find a different way instead of using the initramfs-tools/modules config files, as I recommended in e.g. the 18.04 guide.

QEMU version

Ubuntu 20.04 ships with QEMU version 4.2. It introduces audio fixes. This needs attention if you want to use a virtual machine definition from an older version.

If you want to use a newer version of QEMU, you can build it on your own.

Attention!

QEMU versions 5.0.0 – 5.0.0-6 should not be used for a passthrough setup due to stability issues.

Boot manager

This is not a particular "problem" with Ubuntu 20.04, but at least for one popular distro which is based on Ubuntu – Pop!_OS 20.04.

I think, starting with version 19.04, Pop!_OS uses systemd as boot manager, instead of grub. This means triggering kernel commands works differently in Pop!_OS.

Introduction to VFIO, PCI passthrough and IOMMU

Virtual Function I/O (or VFIO) allows a virtual machine (VM) direct access to a PCI hardware resource, such as a graphics processing unit (GPU). Virtual machines with set up GPU passthrough can gain close to bare metal performance, which makes running games in a Windows virtual machine possible.

Let me make the following simplifications, in order to fulfill my claim of beginner friendliness for this guide:

PCI devices are organized in so called IOMMU groups. In order to pass a device over to the virtual machine, we have to pass all the devices of the same IOMMU group as well. In a perfect world each device has its own IOMMU group — unfortunately that's not the case.

Passed-through devices are isolated, and thus no longer available to the host system. Furthermore, it is only possible to isolate all devices of one IOMMU group at the same time.

This means, even when not used in the VM, a device can no longer be used on the host, when it is an "IOMMU group sibling" of a passed-through device.

Is the content helpful? Consider donating

If you appreciate the content I create, this is your chance to give something back and earn some good old karma points!

Although fun to play with, and very important for content creators, I felt a strong hypocrisy in putting google-ads on my website. Even though I always tried to minimize the user-spying, data-collecting part to a minimum.

The next logical step was to abandon ads altogether.

Thus, consider donating 😘

Requirements

Hardware

In order to successfully follow this guide, it is mandatory that the used hardware supports virtualization and has properly separated IOMMU groups.

The used hardware

  • AMD Ryzen 7 1800X (CPU)
  • Asus Prime X370-Pro (Mainboard)
  • 2x 16 GB DDR4-3200 running at 2400MHz (RAM)
  • Nvidia GeForce 1050 GTX (Host GPU; PCIe slot 1)
  • Nvidia GeForce 1060 GTX (Guest GPU; PCIe slot 2)
  • 500 GB NVMe SSD (Guest OS; M.2 slot)
  • 500 GB SSD (Host OS; SATA)
  • 750W PSU

When composing the system's hardware, I was eager to avoid the necessity of kernel patching. Thus, the ACS override patch is not required for said combination of mainboard and CPU.

If your mainboard has no proper IOMMU separation you can try to solve this by using a patched kernel, or patch the kernel on your own.

BIOS settings

Enable the following flags in your BIOS:

  • Advanced \ CPU config - SVM Module -> enable
  • Advanced \ AMD CBS - IOMMU -> enable

Attention!

The ASUS Prime x370/x470/x570 pro BIOS versions for AMD RYZEN 3000-series support (v. 4602 – 5220) will break a PCI passthrough setup.

Error: "Unknown PCI header type '127'".

BIOS versions up to (and including) 4406, 2019/03/11 are working.

BIOS versions from (and including) 5406, 2019/11/25 are working.

I used Version: 4207 (8th Dec 2018)

Host operating system settings

I have installed Xubuntu 20.04 x64 (UEFI) from here.

Ubuntu 20.04 LTS ships with kernel version 5.4 which works well for VFIO purposes – check via: uname -r

Attention!

Any kernel, starting from version 4.15, works for a Ryzen passthrough setup.

Except kernel versions 5.1, 5.2 and 5.3 including all of their subversions.

Before continuing make sure that your kernel plays nice in a VFIO environment.
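A quick way to check both the running kernel version and the built-in vfio-pci driver mentioned above is to look at the kernel's build config shipped under /boot (on Ubuntu's 5.4 kernel the option should read =y, i.e. built-in, rather than =m):

uname -r
grep "CONFIG_VFIO_PCI=" "/boot/config-$(uname -r)"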

Install the required software

Install QEMU, Libvirt, the virtualization manager and related software via:

sudo apt install qemu-kvm qemu-utils libvirt-daemon-system libvirt-clients bridge-utils virt-manager ovmf

Setting up the PCI passthrough

We are going to pass through the following devices to the VM:

  • 1x GPU: Nvidia GeForce 1060 GTX
  • 1x USB host controller
  • 1x SSD: 500 GB NVMe M.2

Enabling the IOMMU feature

Enable the IOMMU feature via your grub config. On a system with AMD Ryzen CPU run:

sudo nano /etc/default/grub

Edit the line which starts with GRUB_CMDLINE_LINUX_DEFAULT to match:

GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt"

In case you are using an Intel CPU the line should read:

GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

Once you're done editing, save the changes and exit the editor (CTRL+x CTRL+y).

Afterwards run:

sudo update-grub

→ Reboot the system when the command has finished.

Verify if IOMMU is enabled by running after a reboot:

dmesg |grep AMD-Vi

dmesg output

[ 0.792691] AMD-Vi: IOMMU performance counters supported
[ 0.794428] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40
[ 0.794429] AMD-Vi: Extended features (0xf77ef22294ada):
[ 0.794434] AMD-Vi: Interrupt remapping enabled
[ 0.794436] AMD-Vi: virtual APIC enabled
[ 0.794688] AMD-Vi: Lazy IO/TLB flushing enabled


For the systemd boot manager as used in Pop!_OS

One can use the kernelstub module on systemd-booting operating systems in order to provide boot parameters. Use it like so:

sudo kernelstub -o "amd_iommu=on iommu=pt"

Identification of the guest GPU

Attention!

After the upcoming steps, the guest GPU will be ignored by the host OS. You have to have a second GPU for the host OS now!

In this chapter we want to identify and isolate the devices before we pass them over to the virtual machine. We are looking for a GPU and a USB controller in suitable IOMMU groups. This means, either both devices have their own group, or they share one group.

The game plan is to apply the vfio-pci driver to the to-be-passed-through GPU, before the regular graphics card driver can take control of it.

This is the most crucial step in the process. In case your mainboard does not support proper IOMMU grouping, you can still try patching your kernel with the ACS override patch.

One can use a bash script like this in order to determine devices and their group:

#!/bin/bash
# change the 999 if needed
shopt -s nullglob
for d in /sys/kernel/iommu_groups/{0..999}/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done;

source: wiki.archlinux.org + added sorting for the first 999 IOMMU groups

script output

IOMMU Group 0 00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 1 00:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 2 00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 3 00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 4 00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 5 00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 6 00:03.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 7 00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 8 00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 9 00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 10 00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 11 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 12 00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 59)
IOMMU Group 12 00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU Group 13 00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1460]
IOMMU Group 13 00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1461]
IOMMU Group 13 00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1462]
IOMMU Group 13 00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1463]
IOMMU Group 13 00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1464]
IOMMU Group 13 00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1465]
IOMMU Group 13 00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6 [1022:1466]
IOMMU Group 13 00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 [1022:1467]
IOMMU Group 14 01:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263] (rev 03)
IOMMU Group 15 02:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] X370 Series Chipset USB 3.1 xHCI Controller [1022:43b9] (rev 02)
IOMMU Group 15 02:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] X370 Series Chipset SATA Controller [1022:43b5] (rev 02)
IOMMU Group 15 02:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] X370 Series Chipset PCIe Upstream Port [1022:43b0] (rev 02)
IOMMU Group 15 03:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 03:02.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 03:03.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 03:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 03:06.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 03:07.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 07:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1143 USB 3.1 Host Controller [1b21:1343]
IOMMU Group 15 08:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 15 09:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 04)
IOMMU Group 15 0a:04.0 Multimedia audio controller [0401]: C-Media Electronics Inc CMI8788 [Oxygen HD Audio] [13f6:8788]
IOMMU Group 16 0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1)
IOMMU Group 16 0b:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)
IOMMU Group 17 0c:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1060 6GB] [10de:1b83] (rev a1)
IOMMU Group 17 0c:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
IOMMU Group 18 0d:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function [1022:145a]
IOMMU Group 19 0d:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]
IOMMU Group 20 0d:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
IOMMU Group 21 0e:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function [1022:1455]
IOMMU Group 22 0e:00.2 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 23 0e:00.3 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller [1022:1457]


The syntax of the resulting output reads like this:

Figure 1: IOMMU script output description

The interesting bits are the PCI bus id (marked dark red in figure 1) and the device identification (marked orange in figure 1).

These are the devices of interest:

selected devices for isolation

NVMe M.2
========
IOMMU group 14
        01:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263] (rev 03)
                Driver: nvme

Guest GPU - GTX 1060
====================
IOMMU group 17
        0c:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1060 6GB] [10de:1b83] (rev a1)
                Driver: nvidia
        0c:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
                Driver: snd_hda_intel

USB host
========
IOMMU group 20
        0d:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
                Driver: xhci_hcd


Figure 2: IOMMU groups for virtual machine passthrough, on ASUS Prime X370-Pro

We will isolate the GTX 1060 (PCI-bus 0c:00.0 and 0c:00.1; device id 10de:1b83, 10de:10f0).

The USB controller (PCI-bus 0d:00.3; device id 1022:145c) is used later.

The NVMe SSD can be passed through without identification numbers. It is crucial though that it has its own group.
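If you want to double check a single device, lspci can query one PCI bus address directly and also shows the currently bound driver. For example, for the NVMe drive from the listing above:

lspci -nnk -s 01:00.0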

Isolation of the guest GPU

In order to isolate the GPU we have two options. Select the devices by PCI bus address or by device ID. Both options have pros and cons.

Apply the vfio-pci driver by device id (via boot manager)

This option should only be used in case the graphics cards in the system are not exactly the same model.

Update the grub command again, and add the PCI device ids with the vfio-pci.ids parameter.

Run sudo nano /etc/default/grub and update the GRUB_CMDLINE_LINUX_DEFAULT line again:

          GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt kvm.ignore_msrs=1 vfio-pci.ids=10de:1b83,10de:10f0"        

Remark! The option "ignore_msrs" is only necessary for Windows 10 versions above 1803 (otherwise BSOD).

Save and close the file. Afterwards run:

          sudo update-grub        

Attention!

After the following reboot the isolated GPU will be ignored by the host OS. You have to use the other GPU for the host OS NOW!

→ Reboot the system.

Respectively, for a systemd boot manager system like Pop!_OS 19.04 and newer, you can use:

sudo kernelstub --add-options "vfio-pci.ids=10de:1b80,10de:10f0,8086:1533"

Apply the vfio-pci driver by PCI bus id (via script)

This method works even if you want to isolate one of two identical cards. Attention though, in case PCI hardware is added or removed from the system the PCI bus ids will change (also sometimes after BIOS updates).

Create another file via sudo nano /etc/initramfs-tools/scripts/init-top/vfio.sh and add the following lines:

#!/bin/sh

PREREQ=""

prereqs() {
   echo "$PREREQ"
}

case $1 in
prereqs)
   prereqs
   exit 0
   ;;
esac

for dev in 0000:0c:00.0 0000:0c:00.1
do
  echo "vfio-pci" > /sys/bus/pci/devices/$dev/driver_override
  echo "$dev" > /sys/bus/pci/drivers/vfio-pci/bind
done

exit 0

Thanks to /u/nazar-pc on reddit. Make sure the line

                      for dev in 0000:0c:00.0 0000:0c:00.1                  

has the correct PCI bus ids for the GPU you want to isolate. Now save and close the file.

Make the script executable via:

sudo chmod +x /etc/initramfs-tools/scripts/init-top/vfio.sh

Create another file via sudo nano /etc/initramfs-tools/modules

And add the following lines:

options kvm ignore_msrs=1

Save and close the file.

When all is done run:

sudo update-initramfs -u -k all

Attention!

After the following reboot the isolated GPU will be ignored by the host OS. You have to use the other GPU for the host OS NOW!

→ Reboot the system.

Verify the isolation

In order to verify a proper isolation of the device, run:

lspci -nnv

Find the line "Kernel driver in use" for the GPU and its audio part. It should state vfio-pci.

output

0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1060 6GB] [10de:1b83] (rev a1) (prog-if 00 [VGA controller])
	Subsystem: ASUSTeK Computer Inc. GP104 [1043:8655]
	Flags: fast devsel, IRQ 44
	Memory at f4000000 (32-bit, non-prefetchable) [size=16M]
	Memory at c0000000 (64-bit, prefetchable) [size=256M]
	Memory at d0000000 (64-bit, prefetchable) [size=32M]
	I/O ports at d000 [size=128]
	Expansion ROM at f5000000 [disabled] [size=512K]
	Capabilities: <access denied>
	Kernel driver in use: vfio-pci
	Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia

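As a small shortcut, one can also filter the lspci output by the isolated GPU's vendor:device id instead of scrolling through the full listing:

lspci -nnk -d 10de:1b83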

Congratulations, the hardest part is done! 🙂

Creating the Windows virtual machine

The virtualization is done via an open source machine emulator and virtualizer called QEMU. One can either run QEMU directly, or use a GUI called virt-manager in order to set up and run a virtual machine.

I prefer the GUI. Unfortunately not every setting is supported in virt-manager. Thus, I define the basic settings in the UI, do a quick VM start and "force stop" it right after I see the GPU is passed over correctly. Afterwards one can edit the missing bits into the VM config via virsh edit.

Make sure you have your Windows ISO file, as well as the virtio Windows drivers, downloaded and ready for the installation.

Pre-configuration steps

As I said, lots of variable parts can add complexity to a passthrough guide. Before we can proceed we have to make a decision about the storage type of the virtual machine.

Creating image container if needed.

In this guide I pass my NVMe M.2 SSD to the virtual machine. Another feasible solution is to use a raw image container, see the storage post for further information.
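For reference, a minimal sketch of creating such a raw image container with qemu-img – the path and size are just placeholders, the storage post has the details:

qemu-img create -f raw /var/lib/libvirt/images/win10.img 100G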

Creating an Ethernet Bridge

We will use a bridged connection for the virtual machine. This requires a wired connection to the computer.

I just followed the great guide from Heiko here.

See the ethernet setup post for further information.
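For orientation only, a rough sketch of what creating a bridge named bridge0 can look like with NetworkManager's nmcli – the interface name enp8s0 is an assumption, and Heiko's guide respectively the ethernet setup post describe the full setup:

# assumption: the wired NIC is called enp8s0 and is managed by NetworkManager
nmcli connection add type bridge ifname bridge0 con-name bridge0
nmcli connection add type bridge-slave ifname enp8s0 master bridge0
nmcli connection up bridge0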

Create a new virtual machine

As said before, we use the virtual machine manager GUI to create the virtual machine with basic settings.

In order to do so, start up the manager and click the "Create a new virtual machine" button.

Step 1

Select "Local install media" and proceed forward (see figure 3).

Figure 3: Create a virtual machine step 1 – select the installation type for the operating system

Step 2

Now we have to select the Windows ISO file we want to use for the installation (see figure 4). Also check the automatic system detection. Hint: Use the button "browse local" (one of the buttons on the right side) to browse to the ISO location.

Figure 4: Create a virtual machine step 2 – select the Windows ISO file

Step 3

Put in the amount of RAM and CPU cores you want to pass through and continue with the wizard. I want to use 8 cores (16 is maximum, the screenshot shows 12 by mistake!) and 16384 MiB of RAM in my VM.

Figure 5: Create a virtual machine step 3 – Memory and CPU settings

Step 4

In case you use a storage file, select your previously created storage file and continue. I uncheck the "Enable storage for this virtual machine" check-box and add my device later.

Figure 6: Create a virtual machine step 4 – Select the previously created storage

Step 5

On the final step slightly more clicks are required.

Put in a meaningful name for the virtual machine. This becomes the name of the XML config file, so I would not use anything with spaces in it. It might work without a problem, but I wasn't brave enough to do so in the past.

Furthermore, make sure you check "Customize configuration before install".

For the "network selection" pick "Specify shared device name" and type in the name of the network bridge we created previously. You can use ifconfig in a terminal to show your Ethernet devices. In my case that is "bridge0".

Figure 7: Create a virtual machine step 5 – Before installation

Remark! The CPU count in figure 7 is wrong. It should say 8 if you followed the guide.

First configuration

Once you have pressed "Finish" the virtual machine configuration window opens. The left column displays all hardware devices which this VM uses. By left clicking on them, you see the options for the device on the right side. You can remove hardware via right click. You can add more hardware via the button below. Make sure to hit apply after every change.

The following screenshots may vary slightly from your GUI (as I have added and removed some hardware devices).

Overview

On the Overview entry in the list make sure that for "Firmware" UEFI x86_64 [...] OVMF [...] is selected. "Chipset" should be Q35, see figure 8.

Figure 8: Virtual machine configuration – Overview configuration

CPUs

For the "Model:" click into the drop-down, as if it were a text field, and type in

host-passthrough

This will pass all CPU information to the guest. You can read the CPU model information chapter in the performance guide for further information.

For "Topology" check "Manually set CPU topology" with the following values:

  • Sockets: 1
  • Cores: 4
  • Threads: 2
Figure 9: Virtual machine configuration – CPU configuration *outdated screenshot
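For reference, these UI choices roughly correspond to the following block in the VM's XML (visible later via virsh edit; compare the final configuration at the end of this guide):

<cpu mode="host-passthrough" check="none">
  <topology sockets="1" cores="4" threads="2"/>
</cpu>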

Disk 1, the guest's hard drive

Depending on your storage pre-configuration step you have to choose the adequate storage type for your disk (raw image container or passed-through real hardware). See the storage article for further options.

Setup with raw image container

When you first enter this section it will say "IDE Disk 1". We have to change the "Disk bus:" value to VirtIO.

Figure 10: Virtual machine configuration – Disk configuration
Setup with passed-through drive

Find out the correct device id of the drive via lsblk to get the disk name.
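For example, listing name, size and model usually makes the right drive obvious:

lsblk -o NAME,SIZE,MODEL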

When editing the VM configuration for the first time, add the following block in the <devices> section (one line after </emulator> should be fitting).

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
  <source dev='/dev/nvme0n1'/>
  <target dev='sdb' bus='sata'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>

Edit the line

          <source dev='/dev/nvme0n1'/>        

to match your drive name.

VirtIO Driver

Next we have to add the virtIO driver ISO, so it is used during the Windows installation. Otherwise, the installer can not recognize the storage volume we have just changed from IDE to virtIO.

In order to add the driver press "Add Hardware", select "Storage" and select the downloaded image file.

For "Device type:" select CD-ROM device. For "Bus type:" select IDE, otherwise Windows will not find the CD-ROM either 😛 (see figure 11).

Figure 11: Virtual machine configuration – Adding virtIO driver CD-ROM

The GPU passthrough

Finally! In order to fulfill the GPU passthrough, we have to add our guest GPU and the USB controller to the virtual machine. Click "Add Hardware", select "PCI Host Device" and find the device by its ID. Do this three times:

  • 0000:0c:00.0 for GeForce GTX 1060
  • 0000:0c:00.1 for GeForce GTX 1060 Audio
  • 0000:0d:00.3 for the USB controller
Figure 12: Virtual machine configuration – Adding PCI devices (screenshot is still with old hardware)

Remark: In case you later add further hardware (e.g. another PCIe device), these IDs might/will change – just keep in mind, if you change the hardware, to redo this step with updated IDs.

This should be it. Plug a second mouse and keyboard into the USB ports of the passed-through controller (see figure 2).

Hit "Begin installation", a Tiano core logo should appear on the monitor connected to the GTX 1060. If a funny white and yellow shell pops up you can use exit in order to leave it.

When nothing happens, make sure you have both CD-ROM devices (one for each ISO, Windows 10 and virtIO driver) in your list. Also check the "boot options" entry.

Once you see the Windows installation, use "force off" from virt-manager to stop the VM.


Final configuration and optional steps

In order to edit the virtual machine's configuration use: virsh edit your-windows-vm-name

Once you're done editing, you can use CTRL+x CTRL+y to exit the editor and save the changes.

I have added the following changes to my configuration:

AMD Ryzen CPU optimizations

I moved this section into a separate article – see the CPU pinning part of the performance optimization article.

Hugepages for better RAM performance

This step is optional and requires previous setup: See the hugepages post for details.

Find the line which ends with </currentMemory> and add the following block behind it:

<memoryBacking>
  <hugepages/>
</memoryBacking>

Remark: Make sure <memoryBacking> and <currentMemory> have the same indent.
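As a rough sketch only – the hugepages post covers a proper, persistent setup – the VM's 16 GiB of RAM correspond to 8192 hugepages of 2 MiB each, which can be reserved temporarily like this:

echo 8192 | sudo tee /proc/sys/vm/nr_hugepages
grep HugePages /proc/meminfo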

Performance tuning

This article describes performance optimizations for gaming on a virtual machine (VM) with GPU passthrough.

Troubleshooting

Removing Error 43 for Nvidia cards

This guide uses an Nvidia card as guest GPU. Unfortunately, the Nvidia driver throws Error 43 if it recognizes the GPU is being passed through to a virtual machine.

Update: With Nvidia driver v465 (or later) Nvidia officially supports the use of consumer GPUs in virtual environments. Thus, the edits recommended in this section are no longer required.

I rewrote this section and moved it into a separate article.

Getting audio to work

After some sleepless nights I wrote a separate article about that – see the chapter Pulse Audio with QEMU 4.2 (and above).

Removing stutter on the guest

There are quite a few software and hardware version combinations which might result in weak guest performance. I have created a separate article on known problems and common errors.

My final virtual machine libvirt XML configuration

<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm">
  <name>win10-q35</name>
  <uuid>b89553e7-78d3-4713-8b34-26f2267fef2c</uuid>
  <title>Windows 10 20.04</title>
  <description>Windows 10 18.03 updated to 20.04 running on /dev/nvme0n1 (500 GB)</description>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://microsoft.com/win/10"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit="KiB">16777216</memory>
  <currentMemory unit="KiB">16777216</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <vcpu placement="static">8</vcpu>
  <iothreads>2</iothreads>
  <cputune>
    <vcpupin vcpu="0" cpuset="8"/>
    <vcpupin vcpu="1" cpuset="9"/>
    <vcpupin vcpu="2" cpuset="10"/>
    <vcpupin vcpu="3" cpuset="11"/>
    <vcpupin vcpu="4" cpuset="12"/>
    <vcpupin vcpu="5" cpuset="13"/>
    <vcpupin vcpu="6" cpuset="14"/>
    <vcpupin vcpu="7" cpuset="15"/>
    <emulatorpin cpuset="0-1"/>
    <iothreadpin iothread="1" cpuset="0-1"/>
    <iothreadpin iothread="2" cpuset="2-3"/>
  </cputune>
  <os>
    <type arch="x86_64" machine="pc-q35-4.2">hvm</type>
    <loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE.ms.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/win10-q35_VARS.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state="on"/>
      <vapic state="on"/>
      <spinlocks state="on" retries="8191"/>
      <vpindex state="on"/>
      <synic state="on"/>
      <stimer state="on"/>
      <reset state="on"/>
      <vendor_id state="on" value="1234567890ab"/>
      <frequencies state="on"/>
    </hyperv>
    <kvm>
      <hidden state="on"/>
    </kvm>
    <vmport state="off"/>
    <ioapic driver="kvm"/>
  </features>
  <cpu mode="host-passthrough" check="none">
    <topology sockets="1" cores="4" threads="2"/>
    <cache mode="passthrough"/>
    <feature policy="require" name="topoext"/>
  </cpu>
  <clock offset="localtime">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
    <timer name="hpet" present="no"/>
    <timer name="hypervclock" present="yes"/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled="no"/>
    <suspend-to-disk enabled="no"/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type="block" device="disk">
      <driver name="qemu" type="raw" cache="none" io="native" discard="unmap"/>
      <source dev="/dev/nvme0n1"/>
      <target dev="sdb" bus="sata"/>
      <boot order="1"/>
      <address type="drive" controller="0" bus="0" target="0" unit="1"/>
    </disk>
    <disk type="file" device="cdrom">
      <driver name="qemu" type="raw"/>
      <source file="/home/mrn/Downloads/virtio-win-0.1.185.iso"/>
      <target dev="sdc" bus="sata"/>
      <readonly/>
      <address type="drive" controller="0" bus="0" target="0" unit="2"/>
    </disk>
    <controller type="usb" index="0" model="qemu-xhci" ports="15">
      <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
    </controller>
    <controller type="sata" index="0">
      <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
    </controller>
    <controller type="pci" index="0" model="pcie-root"/>
    <controller type="pci" index="1" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="1" port="0x10"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
    </controller>
    <controller type="pci" index="2" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="2" port="0x11"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
    </controller>
    <controller type="pci" index="3" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="3" port="0x12"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
    </controller>
    <controller type="pci" index="4" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="4" port="0x13"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
    </controller>
    <controller type="pci" index="5" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="5" port="0x14"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
    </controller>
    <controller type="pci" index="6" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="6" port="0x15"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
    </controller>
    <controller type="pci" index="7" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="7" port="0x16"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
    </controller>
    <controller type="pci" index="8" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="8" port="0x17"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
    </controller>
    <controller type="pci" index="9" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="9" port="0x8"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
    </controller>
    <controller type="pci" index="10" model="pcie-to-pci-bridge">
      <model name="pcie-pci-bridge"/>
      <address type="pci" domain="0x0000" bus="0x08" slot="0x00" function="0x0"/>
    </controller>
    <controller type="virtio-serial" index="0">
      <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
    </controller>
    <interface type="bridge">
      <mac address="52:54:00:fe:9f:a5"/>
      <source bridge="bridge0"/>
      <model type="virtio-net-pci"/>
      <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
    </interface>
    <input type="mouse" bus="ps2"/>
    <input type="keyboard" bus="ps2"/>
    <hostdev mode="subsystem" type="pci" managed="yes">
      <source>
        <address domain="0x0000" bus="0x0c" slot="0x00" function="0x0"/>
      </source>
      <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
    </hostdev>
    <hostdev mode="subsystem" type="pci" managed="yes">
      <source>
        <address domain="0x0000" bus="0x0c" slot="0x00" function="0x1"/>
      </source>
      <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
    </hostdev>
    <hostdev mode="subsystem" type="pci" managed="yes">
      <source>
        <address domain="0x0000" bus="0x0d" slot="0x00" function="0x3"/>
      </source>
      <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
    </hostdev>
    <memballoon model="virtio">
      <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value="-device"/>
    <qemu:arg value="ich9-intel-hda,bus=pcie.0,addr=0x1b"/>
    <qemu:arg value="-device"/>
    <qemu:arg value="hda-micro,audiodev=hda"/>
    <qemu:arg value="-audiodev"/>
    <qemu:arg value="pa,id=hda,server=unix:/run/user/1000/pulse/native"/>
  </qemu:commandline>
</domain>

to be continued…

Sources

The glorious Arch wiki

heiko-sieger.info: Really comprehensive guide

Great post by user "MichealS" on the level1techs.com forum

Wendell's draft post on level1techs.com

Updates

  • 2021-07-06 …. removed code 43 error handling
  • 2020-06-18 …. initial creation of the 20.04 guide
  • 2020-09-09 …. added further Pop!_OS remarks and clarified storage usage



Source: https://mathiashueber.com/pci-passthrough-ubuntu-2004-virtual-machine/
