Proxmox VE is used by UCC as a virtual machine management infrastructure, and is a lot like VMware. It is based on KVM and OpenVZ, and is built on Debian, which UCC likes because it's FREE.

Installation

Proxmox can be installed using either a bare-metal installer ISO or an existing Debian installation (check kernel versions, as Proxmox replaces the existing kernel). The problem with the bare-metal installer is that it does not allow you to set up your own logical volumes and doesn't give you the option of software RAID. IT WILL ALSO EAT ANY OTHER DISKS ATTACHED TO THE MACHINE, so disconnect any disks you don't want lost if using the bare-metal installer. For this reason, machines such as Medico had Proxmox installed on top of a pre-installed Debian Squeeze.

Installation is incredibly easy by following the instructions in the Proxmox VE Installation Page. Ensure that the Debian install follows the UCC almost-standard layout, with separate root, usr, var, boot, and home logical volumes. In addition, ensure there is a vmstore logical volume where virtual machines will be saved.

Things missed by the manual installer

  • The notable instruction that is missing from the wiki page is to enable Kernel Samepage Merging (KSM) on the host, which is a memory de-duplication feature; google how to enable it, then enable it with a line in /etc/rc.local (check Motsugo's for an example)
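
    A minimal sketch of what that rc.local line looks like, assuming the standard Linux sysfs KSM interface (the tuning values here are illustrative placeholders; check Motsugo's actual rc.local for UCC's settings):

    ```
    # /etc/rc.local fragment (sketch): enable KSM so identical guest
    # memory pages are merged and shared between VMs
    echo 1 > /sys/kernel/mm/ksm/run
    # Optional tuning (placeholder values): pages scanned per ksmd
    # wake-up, and how long ksmd sleeps between scans
    echo 1000 > /sys/kernel/mm/ksm/pages_to_scan
    echo 200 > /sys/kernel/mm/ksm/sleep_millisecs
    ```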

  • The Proxmox installer fails to change the network configuration file to be suitable for virtual machines; check out the default configuration in Proxmox Network Model and modify /etc/network/interfaces to suit.
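
    For reference, the usual Proxmox bridged model puts the host's address on a bridge (conventionally vmbr0) and enslaves the physical NIC to it, so guests can attach to the bridge. A sketch of /etc/network/interfaces in that style (the address, netmask, and gateway below are placeholders, not UCC values; see Medico's live config):

    ```
    # /etc/network/interfaces (sketch of the default Proxmox bridged model)
    auto lo
    iface lo inet loopback

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10      # placeholder - host's real IP
        netmask 255.255.255.0
        gateway 192.0.2.1       # placeholder
        bridge_ports eth0       # physical NIC enslaved to the bridge
        bridge_stp off
        bridge_fd 0
    ```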

  • All the other items on the SOE page, with the exclusion of LDAP, NFS, dispense and most of the other user programs.

  • IPv6 configuration. Look at Motsugo's or Medico's config for an example.

Security

Security is paramount on a VM host because of the high potential for damage if the machine is compromised. Central fail2ban is set up to monitor the webpage and the ssh interface (see http://forum.proxmox.com/threads/3583-How-To-implement-Fail2Ban-on-Host and http://blog.extremeshok.com/archives/810), however it is imperative that central logging is configured and TESTED for this to work. The web interface must not be opened through the firewall to outside the UCC network under any circumstances.

Post-install Configuration

The main thing that needs to be done post-install is to configure the storage locations. Go to Datacenter->Storage in the web interface and create a storage area that will hold VM images. Then disable image storage in the "local" storage area in /var/lib/vz. You may also wish to add an NFS location for ISOs (yet to be created/decided at the time of writing), as well as any other SAN/NAS VM storage space.
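
    The web interface writes these definitions to /etc/pve/storage.cfg. An illustrative sketch of what the result might look like (storage names here follow the vmstores described below, but verify against the live file on Medico):

    ```
    dir: local
            path /var/lib/vz
            content iso,vztmpl

    dir: vmstore-local
            path /vmstore-local
            content images

    dir: san-vmstore
            path /vmstore-san
            content images
    ```

    Note that "local" has no "images" in its content list, which is what disabling image storage there amounts to.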

Authentication

Out of the box, the web interface uses the username root and the root password of the host. The LDAP implementation in Proxmox isn't "true" LDAP in that Proxmox only looks at LDAP for authentication and cannot consult LDAP for a list of users or group permissions. Other users can be added by creating their username in the web interface and setting the authentication realm to UCC's LDAP. The username must correspond to a UCC LDAP username.

To add yourself to the administrator's group, SSH to medico, and run something like:

pveum useradd accmurphy@uccldap -group Administrator -firstname ACC -lastname Murphy -email [email protected]

Alternatively, get another administrator to create your user through the web interface. Contact wheel if you fail to log in twice in a row, or you will be locked out.

Storage

Virtual machines should be stored as .raw images in the appropriate vmstore area.

On Medico there are two vmstores:

  • /vmstore-local is a RAID1 volume on Medico's internal SSDs. It is fast, but it is also quite small and should only be used for UCC core servers

  • /vmstore-san is a RAID1/0 volume on the SAN attached via fibre channel. It can be used for member VMs and non-core servers as it has plenty of space

Permissions

TODO

Adding VM's

These instructions are for creating a VM for a general UCC member - do not blindly follow them for UCC servers

To create the actual machine:

  1. Log in as an administrator
  2. Click on "Create VM" in the top right corner
  3. Set Name as the hostname of the VM, set the resource pool to Member-VMs and click next
  4. Select the desired OS for the VM and click next
  5. Select "Do not use any media" (unless media has already been decided and uploaded to the correct location) and click next
  6. Set the following options for the hard disk:
    • Bus device: VIRTIO 0
    • Storage: san-vmstore
    • Disk size (GB): 50
    • Format: raw
    • Cache: default (no cache)
    • Then click next
  7. Set the number of cores to 2 (leave everything else at default) and click next
  8. Set memory to 2GB and click next
  9. Set the network model to VirtIO (paravirtualised) and set the VLAN tag to 4
  10. Click next and then finish
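
    The wizard steps above can also be done from the host shell with Proxmox's qm tool. This is an illustrative sketch only (the VM ID 105, the name "membervm", and the bridge vmbr0 are assumptions; verify option names against man qm before relying on it):

    ```
    # Rough CLI equivalent of the Create VM wizard (sketch - check man qm).
    # 105 is an arbitrary free VM ID; vmbr0 is assumed to be the host bridge.
    qm create 105 --name membervm --pool Member-VMs --ostype l26 \
        --virtio0 san-vmstore:50,format=raw \
        --cores 2 --memory 2048 \
        --net0 virtio,bridge=vmbr0,tag=4
    ```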

Under the summary tab of the newly created VM, go to the notes section and add a comment with the following information:

  • the username of the owner
  • the name of the owner
  • date of creation
  • the VM's purpose
  • the hostname of the VM if it's different to what it was named in Proxmox
  • the IP address of the VM
  • any other pertinent information that may be helpful to management, such as a contact email address

Under the options tab:

  • Change "Start at boot" to yes

Under the permissions tab:

  • Add a user permission for the owner of the VM and give them the role "PVEVMUser". This allows the user to do anything you could do to a physical machine without taking the cover off (with the exception of changing the OS Type). If the user doesn't exist yet, see the Permissions section on how to create it.

The VM should now boot, however it is essentially a blank machine and will netboot. It is now up to the user to insert an ISO into the drive and install their operating system (all users in the gumby group have read and upload access to the ISO storage area). Direct the user to browse to https://medico.ucc.gu.uwa.edu.au:8006 from any machine inside the UCC network (VPN works too) to log in and get console access. Tip: Windows is much less buggy with the web interface than Linux.

The VM will get an IP address from DHCP, however this should be set to a static entry in Murasoi as soon as the MAC address of the VM is known, in order to avoid conflicts. Also add a DNS entry on Mooneye as you would for a physical UCC machine.

If a user has more than one VM, it is worth creating a dedicated resource pool for that user. A resource pool is just a way of grouping several VM's together and allows permissions to be applied to the pool, which then propagates to all VM's in that pool. Create a pool by going to Datacenter->Pools->Create. After the pool appears in the menu tree, click on the pool and add any existing VM's for the user to that pool. Don't forget to then add PVEVMUser permissions to the pool.

Resizing VM's

Currently, the only way to resize a VM image is to use the command line on the host (just Medico for now). There is no way to shrink a volume once it has been grown, aside from copying the data to a new image, so be careful. Proxmox's command line tool for managing VMs is "qm" (try man qm), and the command to resize an image file is qm resize. This tool has only been tested with the .raw format which the VM should be stored as; otherwise you're on your own, though it will probably work with any image stored as a plain file. Look at the man page for command syntax; here is an example of resizing the image for the main virtio disk on VM 101 to 80GiB (which the guest will read as 85.9GB):

qm resize 101 virtio0 80G
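
    The GiB/GB discrepancy mentioned above is just unit arithmetic: "80G" in qm resize means 80 GiB (binary, 2^30 bytes), while many guest tools report decimal GB (10^9 bytes):

    ```shell
    # 80 GiB expressed in bytes, then in decimal GB (integer division)
    bytes=$((80 * 1024 * 1024 * 1024))
    echo "$bytes"                            # 85899345920 bytes
    echo "$((bytes / 1000000000)) GB"        # ~85 GB decimal (85.9 to one place)
    ```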

The guest machine then needs to be rebooted to recognise the change in size, and the filesystem resized to use the extra space.
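
    Concretely, for a typical Linux guest with an ext3/ext4 root sitting directly on the virtio disk's first partition, the guest-side steps look something like this (a sketch only; device names and partition layout vary, and guests using LVM additionally need pvresize/lvextend before the filesystem step):

    ```
    # Inside the guest, after qm resize on the host and a guest reboot:
    # 1. Grow the partition to cover the new space, e.g. with fdisk/parted
    #    (delete and recreate the partition at the same start sector).
    # 2. Grow the filesystem to fill the enlarged partition:
    resize2fs /dev/vda1
    ```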