[[PageOutline]]

= kvm-manager =

We use [https://0xacab.org/dkg/kvm-manager kvm-manager] to create, start, and stop virtual guests. Starting with `stretch`, we are using the new systemd tools in kvm-manager, which are documented below.

== Basic Usage ==

The puppet steps are exactly the same as before - you will need to add a stanza in the physical server's pp file for each guest you want to create. Similarly, you will need to create a pp file for the guest itself. After you push to the physical server, you will need to create a symlink for the cd.iso file (if you are installing a new guest).

Then, to start the guest:

 * Add the guest user to the kvm group. I'm not sure why this is now required so have not automated it yet...
{{{
adduser $guest kvm
}}}
 * Start a guest: `systemctl start kvm@$guest.service`
 * Stop a guest: `systemctl stop kvm@$guest.service` (be patient! The command will not return until the guest is fully shut down, or until 1 minute 30 seconds passes and it gets killed)
 * Enable a guest to start at boot time: `systemctl enable kvm@$guest.service`
 * Prevent a guest from starting at boot time: `systemctl disable kvm@$guest.service`

== When things go wrong ==

=== How it works ===

Before diving into the details of what steps to take, here's a description of what happens when a guest is started. From the service file:

{{{
PermissionsStartOnly=true
User=%i
ExecStartPre=/usr/local/sbin/kvm-setup %i
ExecStart=/usr/local/sbin/kvm-start %i
ExecStop=/usr/local/sbin/kvm-stop %i
ExecStopPost=/usr/local/sbin/kvm-teardown %i
}}}

The PermissionsStartOnly setting means that only the start and stop scripts are executed by the user specified (which will always be the name of the guest) ''but'' the ExecStartPre and ExecStopPost scripts are run as root.

So, every time you start the guest:

 1. `/usr/local/sbin/kvm-setup $guest` is run as root
 1. `/usr/local/sbin/kvm-start $guest` is run as the guest user

Every time you stop it:

 1. `/usr/local/sbin/kvm-stop $guest` is run as the guest user
 1. `/usr/local/sbin/kvm-teardown $guest` is run as root
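The start/stop sequence above can be sketched as a plain shell dry run. This is a hypothetical illustration, not part of kvm-manager: `echo` stands in for the real commands (which must actually run as root or as the guest user), and `stokely` is just an example guest name.

```shell
#!/bin/sh
# Dry-run sketch of what systemd does for kvm@$guest.service.
# echo stands in for the real commands; the "as ..." prefix shows
# which user systemd would run each step as.
guest="${1:-stokely}"   # example guest name

start_guest() {
  echo "as root: /usr/local/sbin/kvm-setup $guest"     # ExecStartPre
  echo "as $guest: /usr/local/sbin/kvm-start $guest"   # ExecStart
}

stop_guest() {
  echo "as $guest: /usr/local/sbin/kvm-stop $guest"    # ExecStop
  echo "as root: /usr/local/sbin/kvm-teardown $guest"  # ExecStopPost
}

start_guest
stop_guest
```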
The kvm-setup and kvm-teardown scripts, which run as root, set up and tear down the networking for the guest. kvm-start launches the kvm process, and kvm-stop sends it the qemu signal to shut down.

Additionally, we have:

{{{
Wants=kvm-screen@%i.service
Before=kvm-screen@%i.service
}}}

This part indicates that every time kvm@$guest.service starts, we should also start kvm-screen@$guest.service - which launches the screen session that goes with the guest. The kvm-screen service simply runs `/usr/local/sbin/kvm-screen $guest` as the $guest user.

As you can see, the screen service is entirely independent of the service that starts the guest. So, if the guest is up and running properly but there is no screen service, you can fiddle all you want with the screen service without interfering with the guest service itself.

=== Environment ===

Environment variables are now added to a single file: `/etc/kvm-manager/${guestname}/env`

=== Troubleshooting ===

There are a number of steps you can take to debug what is going wrong. The first steps are `journalctl -u kvm@$guest.service` and `systemctl status kvm@$guest.service`.

In addition, you can try running each step manually. This is perfectly safe and acceptable to do - but you must ensure you run the right commands as the right user.

As root:

{{{
kvm-setup $guest
}}}

As the guest user:

{{{
kvm-start $guest
kvm-stop $guest
}}}

As root:

{{{
kvm-teardown $guest
}}}

Here's a real-world example:

{{{
0 medgar:~# systemctl status kvm@stokely.service
● kvm@stokely.service - KVM Manager virtual guest management script for stokely
   Loaded: loaded (/usr/local/share/kvm-manager/kvm@.service; disabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2018-06-26 04:57:42 UTC; 9h ago
  Process: 4408 ExecStartPre=/usr/local/sbin/kvm-setup stokely (code=exited, status=1/FAILURE)

Jun 26 04:57:42 medgar systemd[1]: Failed to start KVM Manager virtual guest management script for stokely.
Jun 26 04:57:42 medgar systemd[1]: kvm@stokely.service: Unit entered failed state.
Jun 26 04:57:42 medgar systemd[1]: kvm@stokely.service: Failed with result 'exit-code'.
Jun 26 04:57:42 medgar systemd[1]: kvm@stokely.service: Service hold-off time over, scheduling restart.
Jun 26 04:57:42 medgar systemd[1]: Stopped KVM Manager virtual guest management script for stokely.
Jun 26 04:57:42 medgar systemd[1]: kvm@stokely.service: Start request repeated too quickly.
Jun 26 04:57:42 medgar systemd[1]: Failed to start KVM Manager virtual guest management script for stokely.
Jun 26 04:57:42 medgar systemd[1]: kvm@stokely.service: Unit entered failed state.
Jun 26 04:57:42 medgar systemd[1]: kvm@stokely.service: Failed with result 'exit-code'.
3 medgar:~#
}}}

This output indicates that the failure happened in the ExecStartPre command, which exited with a non-zero exit code. Now, run it manually:

{{{
3 medgar:~# kvm-setup stokely
Running kvm-prepare
Configuring tap (stokely0) on bridge (br0).
ioctl(TUNSETIFF): Device or resource busy
1 medgar:~#
}}}

Yes, definitely not exiting with 0. This error suggests that the networking may not have been torn down properly on a previous run. So, let's tear it down and then see if we can re-run the setup command and get a different result:

{{{
1 medgar:~# kvm-teardown stokely
De-configuring the network.
0 medgar:~# kvm-setup stokely
Running kvm-prepare
Configuring tap (stokely0) on bridge (br0).
/dev/mapper/vg_medgar0-stokely
kvm-prepare completed successfully
0 medgar:~#
}}}

Now it works! But, before trying to restart stokely, let's tear it down again so we are back in the right state for it to start on its own:

{{{
0 medgar:~# kvm-teardown stokely
De-configuring the network.
0 medgar:~#
}}}
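The teardown-then-retry sequence above can be captured in a small helper. This is a hypothetical sketch, not part of kvm-manager: it assumes `kvm-setup` and `kvm-teardown` are on root's PATH as in the transcript, and the `KVM_SETUP`/`KVM_TEARDOWN` override variables are our own addition so the logic can be exercised without the real scripts.

```shell
#!/bin/sh
# Hypothetical recovery sketch for a guest whose kvm-setup fails because
# of stale networking (e.g. a leftover tap device). Must run as root.
# KVM_SETUP / KVM_TEARDOWN default to the real scripts but can be
# overridden for a dry run.
: "${KVM_SETUP:=kvm-setup}"
: "${KVM_TEARDOWN:=kvm-teardown}"

recover_guest() {
  guest="$1"
  if ! "$KVM_SETUP" "$guest"; then
    "$KVM_TEARDOWN" "$guest"            # clean up the stale state
    "$KVM_SETUP" "$guest" || return 1   # still broken: give up
  fi
  "$KVM_TEARDOWN" "$guest"              # back to a clean pre-start state
}

# Usage (as root):
#   recover_guest stokely && systemctl start kvm@stokely.service
```

The final teardown mirrors the transcript: setup is re-run only to confirm the networking is healthy, then torn down again so the kvm@ service can start from a clean state.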