Proxmox VE is used by UCC as a virtual machine management infrastructure, and is a lot like VMware. It is based on KVM and OpenVZ, and is built on Debian, which UCC likes because it's FREE.
Info for Users
Getting a VM
This will require wrangling a wheel member to help you create a machine through the Proxmox interface. Beware: the wheel member may say no to giving you a VM for any reason. Setting up a VM does take some time, so don't expect them to drop what they're doing and create it for you on the spot. The best way to ask is to jump on IRC or email wheel@ucc. Assuming the wheel member gives you a full VM and not just a container, what you should end up with is essentially an empty computer - you will need to install an OS on it and SECURE IT yourself.
Setting Up Your VM
Logging Into The Interface
First, you must be on the UCC network - the web interface is fully firewalled off from the outside internet for security reasons. To get on the UCC network, use a clubroom machine, connect to the UCC wireless, or from anywhere else, connect to the UCC VPN.
For those who know ssh or don't want to connect using the other methods, you can also ssh-forward to medico, for example: ssh -fN -L 8006:medico:8006 motsugo.ucc.asn.au. If you want to use the web-based VNC interface, you'll also need to forward port 5900 (or possibly another port; if the console doesn't work, the error message includes the correct port number). This should no longer be necessary with Proxmox 3.0+.
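If you need both at once, a single tunnel can carry the web UI and the console; a minimal sketch (the console port may differ, as noted above):

ssh -fN -L 8006:medico:8006 -L 5900:medico:5900 motsugo.ucc.asn.au

Then browse to https://localhost:8006.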
Once you are on the UCC network, browse to https://medico.ucc.gu.uwa.edu.au:8006 from any modern browser and log in using your UCC username and password.
Installing an OS
To install the OS on your machine, you can either boot an installer from an ISO, or use UCC's netboot setup to install your OS. By default, all machines on the VM network without an OS will netboot, however installing from an ISO tends to be more reliable.
All gumby users (that's you if you're not on wheel) have upload privileges to a store of installer ISOs via the Proxmox web interface. You can mount an ISO from the ISOs storage location via your VM's Hardware tab by selecting the CD drive and clicking edit. If you have problems uploading to the ISOs storage area through the web interface, contact your friendly neighbourhood wheel member and they can put it directly into /services/iso.
Once You Have Installed Your OS
There are a couple of things you need to tell the person who set up your machine: its hostname, its MAC address and its IP address. They will then be able to set up DNS and a static DHCP entry. If you wish to run any externally accessible services on your machine, you will also need to let them know which firewall ports you need unblocked and which services you are running - we generally don't let non-wheel members do their own firewalling. Gumby users must also nominate a wheel member who can have root ssh or console access on your machine for auditing purposes - for ssh, their key needs to be copied into /root/.ssh/authorized_keys; for console access, they must have a fully privileged local account.
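If you're not sure where to find these, something like the following works from inside a Linux guest (assumes the iproute2 tools and an interface called eth0):

ip link show eth0   # the MAC address is on the link/ether line
ip addr show eth0   # the current (DHCP-assigned) IP address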
Securing Your Machine
TODO - ask a Wheel member what to do for now.
Info for Administrators
Authentication
Out of the box, the web interface uses the username root and the root password of the host. The LDAP implementation in Proxmox isn't "true" LDAP in that Proxmox only looks at LDAP for authentication and cannot consult LDAP for a list of users or group permissions. Other users can be added by creating their username in the web interface and setting the authentication realm to UCC's LDAP. The username must correspond to a UCC LDAP username.
To add yourself to the Administrator group, SSH to medico and run something like:
pveum useradd accmurphy@uccldap -group Administrator -firstname ACC -lastname Murphy -email [email protected]
Alternatively, get another administrator to create your user through the web interface. Contact wheel if you are unable to login twice in a row or you will be locked out.
Storage
Virtual machines should be stored as .raw images in the appropriate vmstore area. Storage is managed at the cluster level, so every storage device is available to (or created on) every node, unless it is explicitly limited to a particular node or group of nodes. This is necessary in order to migrate machines to different nodes without having to backup and restore the VM's disk to a different path (including local storage on each node).
On the Atlantic cluster, there are two vmstores per node;
- /vmstore-local is local to each node (on Medico, a RAID1 volume on its internal SSDs). It is fast, however it is also quite small and should only be used for UCC core servers.
- /vmstore-nas is a RAID1/0 volume on Molmol mounted over NFS. It can be used for member VMs and non-core servers as it has plenty of space.
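For reference, cluster storage definitions live in /etc/pve/storage.cfg. A rough sketch of what the entries above might look like (paths and export names here are illustrative - check the real file before copying anything):

dir: vmstore-local
        path /vmstore-local
        content images

nfs: vmstore-nas
        server molmol
        export /vmstore
        content images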
Adding VMs
These instructions are for creating a VM for a general UCC member - do not blindly follow them for UCC servers.
To create the actual machine:
1. Log in as an administrator.
2. Click on "Create VM" in the top right corner.
3. Set Name as the hostname of the VM, set the resource pool to Member-VMs and click next.
4. Select the desired OS for the VM and click next.
5. Select "do not use any media" (unless media has already been decided and uploaded to the correct location) and click next.
6. Set the following options for the hard disk, then click next:
   - Bus device: VIRTIO 0
   - Storage: nas-vmstore
   - Disk size: 50 GB
   - Format: raw
   - Cache: default (no cache)
7. Set the number of cores to 2 (leave everything else at default) and click next.
8. Set memory to 2GB and click next.
9. Set the network model to VirtIO (paravirtualised) and set the VLAN tag to 4.
10. Click next and then finish.
The UI is a bit buggy. [SJY] wasted hours trying to work out why the Storage selector (in the Hard Disk tab) was greyed out, preventing the creation of new VMs. If you experience this issue (and storage is online, i.e. other VMs are working correctly), then your best bet is probably to create the VM on the command line with something like qm create NEW_VM_ID --name NEW_VM_NAME --virtio0 nas-vmstore:0,format=raw --bootdisk virtio0 --ostype l26 --memory 2048 --onboot yes --sockets 1 --cores 2 (this command will not configure a network interface) and then tweak the config in the web UI.
Under the summary tab of the newly created VM, go to the notes section and add a comment with the following information:
- the username of the owner
- the name of the owner
- date of creation
- the VM's purpose
- the hostname of the VM if it's different to what it was named in proxmox
- the IP address of the VM
- any other pertinent information that may be helpful to management, such as a contact email address
Under the options tab:
- Change "Start at boot" to yes
Under the permissions tab:
- Add a user permission for the owner of the VM and give them the role "PVEVMUser". This allows the user to do anything you could do to a physical machine without taking the cover off (with the exception of changing the OS Type). If the user doesn't exist yet, see the Permissions section on how to create it.
The VM should now boot, however it is essentially a blank machine and will netboot.
The VM will get an IP address from DHCP, however this should be set to a static entry in Murasoi as soon as the MAC address of the VM is known, in order to avoid conflicts. Also add a DNS entry on Mooneye as you would for a physical UCC machine.
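On the Murasoi side, a static entry is just a standard ISC dhcpd host declaration, something like this (the hostname, MAC and IP below are placeholders):

host newvm {
        hardware ethernet 52:54:00:12:34:56;
        fixed-address 10.0.4.50;
}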
NOTE: Don't set VLAN 2 on VMs ever. It'll just break things. If you want things on VLAN 2, set "No VLAN".
Adding Containers
An OpenVZ Container, or CT, is a paravirtualised environment. It is more like a chroot on steroids than a full virtual machine, and it uses the host kernel but a separate userland environment. Container technology allows you to set a quota on disk, memory and CPU usage, but unused resources can be shared. If you just need a clean environment to run a few daemons or test something out, a container makes better use of our resources.
1. Log in as an administrator.
2. Click on "Create CT" in the top right corner.
3. Set the following general options, then click Next:
   - Name: the hostname of the CT
   - Resource pool: Member-VMs (or as appropriate)
   - Storage: nas-vmstore (or as appropriate - see Storage above)
   - Password/confirm password: the root password for your new container
4. Select a template (base image for the operating system) and click Next. debian-7.0-standard or similar is probably the way to go.
5. Set appropriate resource limits and click Next. Remember that these are maximums, not guaranteed minimums, so you can set them quite high.
   - Memory: 2048 MB
   - Swap: 512 MB
   - Disk size: 50 GB
   - CPUs: 2
6. Network: unfortunately the UI for setting network options in containers is not as full-featured as for VMs; in particular, there is no way to set a VLAN tag through the web UI.
   - If a machine room IP is appropriate (probably not), you can add that straight in as a 'Routed mode' IP, or do static configuration with 'Bridged mode' to vmbr0.
   - To use the more appropriate clubroom or VM networks, choose 'Bridged mode' and come back after the container is created to edit the configuration on the command line.
7. Click Next.
8. Leave the DNS settings alone; click Next.
9. Click Finish once you are happy with the settings. The container will be created, the template unpacked and the appropriate settings applied.
Under the summary tab of the newly created CT, go to the notes section and add a comment with the following information:
- the username of the owner
- the name of the owner
- date of creation
- the CT's purpose
- the hostname of the CT if it's different to what it was named in proxmox
- the IP address of the CT
- any other pertinent information that may be helpful to management, such as a contact email address
Under the options tab, set 'Start at boot'.
To manually manage the network configuration, keep following these directions:
1. Take note of the container ID - the number next to the hostname in the description at the top of the screen or in the left-hand server list. The number 999 is used below; replace this with the appropriate ID.
2. Log on to medico as root via SSH.
3. Run the following command (with the correct container ID) to wipe out the existing network configuration:
   vzctl set 999 --netif_del all --ipdel all --save
4. Choose the correct bridge interface. For VLAN 3 (clubroom), use vmbr0v3 and for the VM network use vmbr0v4.
5. Run the following command to add a new bridge, with the appropriate bridge device and container ID:
   vzctl set 999 --netif_add eth0,,,,vmbr0v4 --save
You can now start the container and log in using the console.
You will probably need to set up the interfaces as you normally would; add something like this to /etc/network/interfaces on Debian:
auto eth0
iface eth0 inet dhcp
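If the container should have a static address instead, the usual Debian static stanza applies (all values below are placeholders - use the address you were actually allocated):

auto eth0
iface eth0 inet static
        address 10.0.4.50
        netmask 255.255.255.0
        gateway 10.0.4.1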
Permissions
If a user has more than one VM, it is worth creating a dedicated resource pool for that user. A resource pool is just a way of grouping several VMs together and allows permissions to be applied to the pool, which then propagates to all VMs in that pool. Create a pool by going to Datacenter->Pools->Create. After the pool appears in the menu tree, click on the pool and add any existing VMs for the user to that pool. Don't forget to then add PVEVMUser permissions to the pool.
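The same grant can be made from the command line with pveum; a sketch, with made-up pool and user names:

pveum aclmod /pool/examplepool -user accmurphy@uccldap -role PVEVMUser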
Resizing VMs
Currently, the only way to resize a VM image is to use the command line on the host (just Medico for now). There is no way to shrink a volume once it has been grown, aside from copying the data to a new image, so be careful. Proxmox's command line tool for managing VMs is qm (try man qm), and the command to resize an image file is qm resize. This tool has only been tested with the .raw format, which the VM should be stored as; otherwise you're on your own, though it will probably work with any image stored as a plain file. Look at the man page for command syntax; here is an example of resizing the image for the main virtio disk on VM 101 to 80GiB (which the guest will read as 85.9GB):
qm resize 101 virtio0 80G
The guest machine then needs to be rebooted to recognise the change in size, and the filesystem resized to use the extra space.
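As a rough guide, the guest-side follow-up looks something like this (assumes an MBR-partitioned virtio disk with an ext4 filesystem on the first partition - adjust device names and tools to suit your layout):

growpart /dev/vda 1    # grow the partition to fill the disk (growpart is in cloud-guest-utils)
resize2fs /dev/vda1    # grow the ext4 filesystem to fill the partition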
Command Line Management
Virtual machines are managed using the qm tool. Containers are managed using the pvectl tool (though you can use vzctl as well).
There is more information on command-line tools on the Proxmox wiki.
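A few everyday examples (the IDs below are placeholders):

qm list           # list all VMs on this node
qm start 101      # start VM 101
qm shutdown 101   # clean ACPI shutdown of VM 101
pvectl list       # list containers
vzctl enter 999   # root shell inside container 999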
Info for Installers
Installation
Proxmox can be installed using either a baremetal installer ISO or an existing Debian installation (check kernel versions, as Proxmox replaces the existing kernel). The problem with the baremetal installer is that it does not allow you to set up your own logical volumes and doesn't give you the option of software RAID. IT WILL ALSO EAT ANY OTHER DISKS ATTACHED TO THE MACHINE; disconnect any disks you don't want lost if using the baremetal installer. So machines such as Medico had Proxmox installed on top of pre-installed Debian Squeeze.
Installation is incredibly easy by following the instructions in the Proxmox VE Installation Page. Ensure that the Debian install follows the UCC almost-standard layout, with separate root, usr, var, boot, and home logical volumes. In addition, ensure there is a vmstore logical volume where virtual machines will be saved.
Things missed by the manual installer
- The notable instruction missing from the wiki page is to enable Kernel Samepage Merging (KSM) on the host, which is a memory de-duplication feature - enable it with a line in /etc/rc.local (check Motsugo's for an example, and see the sketch after this list).
- The Proxmox installer fails to change the network configuration file to be suitable for virtual machines; check out the default configuration in Proxmox Network Model and modify /etc/network/interfaces to suit.
- All the other items on the SOE page, with the exclusion of LDAP, NFS, dispense and most of the other user programs.
- IPv6 configuration. Look at Motsugo's or Medico's config for an example.
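For reference, enabling KSM is just a sysfs write or two, which is why it can live in /etc/rc.local - something like the following (check a real host's rc.local rather than trusting these values):

echo 1 > /sys/kernel/mm/ksm/run                # turn KSM on
echo 1000 > /sys/kernel/mm/ksm/pages_to_scan   # optionally tune the scan rate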
Security
Security is paramount on a VM host because of the high potential for damage if the machine is compromised. Central fail2ban is set up to monitor the web interface and the ssh interface (see http://forum.proxmox.com/threads/3583-How-To-implement-Fail2Ban-on-Host and http://blog.extremeshok.com/archives/810), however it is imperative that central logging is configured and TESTED for this to work. The web interface must not be unfirewalled to outside the UCC network under any circumstances.
Post-install Configuration
The main thing that needs to be done post-install is to configure the storage locations. Go to Datacenter->Storage in the web interface and create a storage area that will hold VM images. Then disable image storage in the "local" storage area in /var/lib/vz. You may also wish to add an NFS location for ISOs (yet to be created/decided at time of writing), as well as any other SAN/NAS VM storage space.
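Storage can also be added from the command line with pvesm; for example, an NFS ISO store might look like this (the server and export path are assumptions - substitute the real ones):

pvesm add nfs isos --server molmol --export /services/iso --content iso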
See Also
The SOE says how to do some of the things this page tells you to do.