Revision 27 as of 2014-10-25 18:20:44

Proxmox VE is used by UCC as a virtual machine management infrastructure, and is similar to VMware. It is based on KVM and OpenVZ, and is built on Debian, which UCC likes because it's FREE.

Installation

Proxmox can be installed using either a baremetal installer ISO or an existing Debian installation (check kernel versions, as Proxmox replaces the existing kernel). The problem with the baremetal installer is that it does not allow you to set up your own logical volumes and doesn't give you the option of software RAID. IT WILL ALSO EAT ANY OTHER DISKS ATTACHED TO THE MACHINE, so disconnect any disks you don't want lost before using the baremetal installer. For this reason, machines such as Medico had Proxmox installed on top of a pre-installed Debian Squeeze.

Installation is incredibly easy by following the instructions on the Proxmox VE Installation Page. Ensure that the Debian install follows the UCC almost-standard layout, with separate root, usr, var, boot, and home logical volumes. In addition, ensure there is a vmstore logical volume where virtual machines will be saved.

Things missed by the manual installer

  • The notable instruction missing from the wiki page is to enable Kernel Samepage Merging (KSM) on the host, a memory de-duplication feature. Google how to enable it, then enable it with a line in /etc/rc.local (check Motsugo's /etc/rc.local for an example)

  • The proxmox installer fails to change the network configuration file to be suitable for virtual machines; check out the default configuration in Proxmox Network Model and modify /etc/network/interfaces to suit.

  • All the other items on the SOE page, with the exception of LDAP, NFS, dispense and most of the other user programs.

  • IPv6 configuration. Look at Motsugo's or Medico's config for an example.
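The KSM step above can be sketched as a couple of lines in /etc/rc.local. The values below are an assumption for illustration, not what Motsugo actually uses -- check its /etc/rc.local:

```shell
# Enable Kernel Samepage Merging (KSM) at boot
# (assumed values -- check Motsugo's /etc/rc.local for UCC's actual settings)
echo 1 > /sys/kernel/mm/ksm/run               # turn KSM on
echo 1000 > /sys/kernel/mm/ksm/pages_to_scan  # optional tuning: pages scanned per wake-up cycle
```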

Security

Security is paramount on a VM host because of the high potential for damage if the machine is compromised. Central fail2ban is set up to monitor the web interface and the SSH interface (see http://forum.proxmox.com/threads/3583-How-To-implement-Fail2Ban-on-Host and http://blog.extremeshok.com/archives/810); however, it is imperative that central logging is configured and TESTED for this to work. The web interface must not be reachable from outside the UCC network under any circumstances; keep it firewalled.
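For reference, the forum thread above sets up a jail along these lines. Treat this as a sketch to adapt and verify against the actual host, not the exact UCC configuration:

```ini
# Hypothetical /etc/fail2ban/jail.local fragment for the Proxmox web interface
# (adapted from the linked forum thread; verify logpath and filter on the host)
[proxmox-web]
enabled  = true
port     = https,http,8006
filter   = proxmox
logpath  = /var/log/daemon.log
maxretry = 3
bantime  = 3600
```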

Post-install Configuration

The main thing that needs to be done post-install is to configure the storage locations. Go to Datacenter->Storage in the web interface and create a storage area that will hold VM images. Then disable image storage in the "local" storage area in /var/lib/vz. You may also wish to add an NFS location for ISOs (yet to be created/decided at time of writing), as well as any other SAN/NAS VM storage space.
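The same setup can be done from a root shell with pvesm. The storage names, server and export paths below are assumptions for illustration -- match them to the actual storage layout:

```shell
# Hypothetical pvesm equivalents of the web-UI storage steps (names/paths are assumptions)
pvesm add dir vmstore-local -path /vmstore-local -content images           # directory store for VM images
pvesm add nfs vmstore-nas -server molmol -export /vmstore -content images  # NFS store on Molmol
pvesm set local -content iso,vztmpl    # stop "local" (/var/lib/vz) from holding VM images
```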

Authentication

Out of the box, the web interface uses the username root and the root password of the host. The LDAP implementation in Proxmox isn't "true" LDAP in that Proxmox only looks at LDAP for authentication and cannot consult LDAP for a list of users or group permissions. Other users can be added by creating their username in the web interface and setting the authentication realm to UCC's LDAP. The username must correspond to a UCC LDAP username.

To add yourself to the Administrator group, SSH to Medico and run something like:

pveum useradd accmurphy@uccldap -group Administrator -firstname ACC -lastname Murphy -email accmurphy@ucc.gu.uwa.edu.au

Alternatively, get another administrator to create your user through the web interface. If you fail to log in twice in a row, contact wheel before trying again or you will be locked out.

Storage

Virtual machines should be stored as .raw images in the appropriate vmstore area.

On Medico there are two vmstores:

  • /vmstore-local is a RAID1 volume on Medico's internal SSDs. It is fast, but it is also quite small and should only be used for UCC core servers.

  • /vmstore-nas is a RAID1/0 volume on Molmol mounted over NFS. It can be used for member VMs and non-core servers as it has plenty of space.

Permissions

TODO

Adding VMs

These instructions are for creating a VM for a general UCC member - do not blindly follow them for UCC servers.

To create the actual machine:

  1. Log in as an administrator
  2. Click on "Create VM" in the top right corner
  3. Set Name as the hostname of the VM, set the resource pool to Member-VMs and click next
  4. Select the desired OS for the VM and click next
  5. Select "Do not use any media" (unless media has already been decided and uploaded to the correct location) and click next
  6. Set the following options for the hard disk:
    • Bus device: VIRTIO 0
    • Storage: nas-vmstore
    • Disk size: 50
    • Format: raw
    • Cache: default (no cache)
    • Then click next
  7. Set the number of cores to 2 (leave everything else at default) and click next
  8. Set memory to 2GB and click next
  9. Set the network model to VirtIO (paravirtualised) and set the VLAN tag to 4
  10. Click next and then finish

The UI is a bit buggy. [SJY] wasted hours trying to work out why the Storage selector (in the Hard Disk tab) was grayed out, preventing the creation of new VMs. If you experience this issue (and storage is online, i.e. other VMs are working correctly), then your best bet is probably just to create the VM on the command line with something like:

qm create NEW_VM_ID --name NEW_VM_NAME --virtio0 nas-vmstore:0,format=raw --bootdisk virtio0 --ostype l26 --memory 2048 --onboot yes --sockets 1 --cores 2

This command will not configure a network interface, so tweak the config in the web UI afterwards.

Under the summary tab of the newly created VM, go to the notes section and add a comment with the following information:

  • the username of the owner
  • the name of the owner
  • date of creation
  • the VM's purpose
  • the hostname of the VM if it's different to what it was named in proxmox
  • the IP address of the VM
  • any other pertinent information that may be helpful to management, such as a contact email address

Under the options tab:

  • Change "Start at boot" to yes

Under the permissions tab:

  • Add a user permission for the owner of the VM and give them the role "PVEVMUser". This allows the user to do anything you could do to a physical machine without taking the cover off (with the exception of changing the OS Type). If the user doesn't exist yet, see the Permissions section on how to create it.
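The same grant can be made from a root shell on the host with pveum. The username and VM ID below are examples only, not real entries:

```shell
# Hypothetical CLI equivalent of the Permissions tab (user and VM ID are examples)
pveum useradd accmurphy@uccldap -firstname ACC -lastname Murphy   # only if the user doesn't exist yet
pveum aclmod /vms/101 -user accmurphy@uccldap -role PVEVMUser     # grant PVEVMUser on VM 101
```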

The VM should now boot, however it is essentially a blank machine and will netboot.

Installing an OS

You can mount an ISO from /services/iso via your VM's Hardware tab (or during the creation process, if you created the VM yourself). All users in the gumby group have read and upload access to /services/iso, so you can add a new ISO from another machine.

Direct the user to browse to https://medico.ucc.gu.uwa.edu.au:8006 from any machine inside the UCC network. Tip: Windows is much less buggy with the web interface than Linux.

If you're not inside the UCC network, you can use a VPN or ssh-forward to medico -- ssh -fN -L 8006:medico:8006 motsugo.ucc.asn.au, for example. If you want to use the web-based VNC console, you'll also need to forward port 5900 (or possibly another port; if the console doesn't work, the error message includes the correct port number).

The VM will get an IP address from DHCP; however, this should be set to a static entry in Murasoi as soon as the MAC address of the VM is known, in order to avoid conflicts. Also add a DNS entry on Mooneye as you would for a physical UCC machine.

If a user has more than one VM, it is worth creating a dedicated resource pool for that user. A resource pool is just a way of grouping several VMs together and allows permissions to be applied to the pool, which then propagates to all VMs in that pool. Create a pool by going to Datacenter->Pools->Create. After the pool appears in the menu tree, click on the pool and add any existing VMs for the user to that pool. Don't forget to then add PVEVMUser permissions to the pool.

NOTE: Don't set VLAN 2 on VMs ever. It'll just break things. If you want things on VLAN 2, set "No VLAN".

Resizing VMs

Currently, the only way to resize a VM image is to use the command line on the host (just Medico for now). There is no way to shrink a volume once it has been grown, aside from copying the data to a new image, so be careful. Proxmox's command line tool for managing VMs is qm (try man qm); the command to resize an image file is qm resize. This tool has only been tested with the .raw format (which the VM should be stored in anyway), though it will probably work with any image stored as a plain file; otherwise you're on your own. See the man page for full syntax; here is an example of resizing the image for the main virtio disk on VM 101 to 80GiB (which the guest will read as 85.9GB):

qm resize 101 virtio0 80G

The guest machine then needs to be rebooted to recognise the change in size, and the filesystem resized to use the extra space.
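Inside a typical Debian guest, claiming the extra space might look like the following. The device name and single-partition ext4 layout are assumptions -- adjust for the actual guest:

```shell
# Inside the guest, after the reboot (assumes ext3/ext4 on /dev/vda1, virtio disk)
# 1. Grow the partition to fill the disk (with fdisk/parted: delete and recreate
#    it with the same start sector, or use parted's resizepart if available)
# 2. Grow the filesystem online:
resize2fs /dev/vda1
df -h /    # confirm the new size
```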

Adding Containers

An OpenVZ Container, or CT, is a paravirtualised environment. It is more like a chroot on steroids than a full virtual machine, and it uses the host kernel but a separate userland environment. Container technology allows you to set a quota on disk, memory and CPU usage, but unused resources can be shared. If you just need a clean environment to run a few daemons or test something out, a container makes better use of our resources.

  1. Log in as an administrator.
  2. Click on "Create CT" in the top right corner
  3. Set the following general options:
    • Name: the hostname of the VM
    • Resource pool: Member-VMs (or as appropriate)
    • Storage: nas-vmstore (or as appropriate - see Storage above)
    • Password/confirm password: the root password for your new container
  4. Click Next.
  5. Select a template (base image for the operating system). debian-7.0-standard or similar is probably the way to go.

  6. Click Next.
  7. Set appropriate resource limits. Remember that these are maximums, not guaranteed minimums, so you can set them quite high.
    • Memory: 2048 MB
    • Swap: 512 MB
    • Disk size: 50 GB
    • CPUs: 2
  8. Click Next.
  9. Network: unfortunately, the UI for setting network options on containers is not as full-featured as for VMs; in particular, there is no way to set a VLAN tag through the web UI.

    • If a machine room IP is appropriate (probably not), you can add that straight in as a 'Routed mode' IP, or do static configuration with 'Bridged mode' to vmbr0.

    • To use the more appropriate clubroom or VM networks, we will have to come back later. Choose 'Bridged mode', and continue once the container is created to edit the configuration on the command line.
  10. Click Next.
  11. Leave the DNS settings alone; click Next.
  12. Click Finish once you are happy with the settings. The container will be created, the template unpacked and the appropriate settings applied.

Under the summary tab of the newly created CT, go to the notes section and add a comment with the following information:

  • the username of the owner
  • the name of the owner
  • date of creation
  • the CT's purpose
  • the hostname of the CT if it's different to what it was named in proxmox
  • the IP address of the CT
  • any other pertinent information that may be helpful to management, such as a contact email address

Under the options tab, set 'Start at boot'.

To manually manage the network configuration, keep following these directions:

  1. Take note of the container ID - the number next to the hostname in the description at the top of the screen or in the left-hand server list. The number 999 is used below; replace this with the appropriate ID.
  2. Log on to medico as root via SSH.
  3. Run the following command (with the correct container ID) to wipe out the existing network configuration:

vzctl set 999 --netif_del all --ipdel all --save
  4. Choose the correct bridge interface. For VLAN 3 (clubroom), use vmbr0v3; for the VM network (VLAN 4), use vmbr0v4.

  5. Run the following command to add a new interface attached to the appropriate bridge, with the correct container ID:

vzctl set 999 --netif_add eth0,,,,vmbr0v4 --save

You can now start the container and log in using the console.

You will probably need to set up the interfaces as you normally would; add something like this to /etc/network/interfaces on Debian:

auto eth0
iface eth0 inet dhcp
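If the container should use a static address instead, something like this works on Debian. The addresses below are placeholders, not real UCC allocations:

```
auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
```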

Command Line Management

Virtual machines are managed using the qm tool. Containers are managed using the pvectl tool (though you can use vzctl as well).
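As a quick sketch of day-to-day use (the VM/CT IDs below are placeholders):

```shell
# Hypothetical everyday commands on the host (IDs are placeholders)
qm list            # list all VMs and their status
qm start 101       # start VM 101
qm config 101      # show VM 101's configuration
pvectl list        # list all containers
vzctl enter 999    # get a root shell inside container 999
```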

There is more information on command-line tools on the Proxmox wiki.

See Also

The SOE says how to do some of the things this page tells you to do.