= About =

SAN is dying. NetApp is destroying its disks. Got to get another storage solution.

Frames started to put together some form of baseline design. GoogleSpreadSheet

= Requirements =

== Usage ==
 * /vmstore, with multiple hosts accessing VMs to allow for hosting failover
  * Varied uses! Some people using it for storage, others using it as build servers
  * ``Ideally we would have some sort of snapshotting on this because our VM backups are a bit "eh" if I recall correctly`` -- BobAdamson (see the snapshot sketch after this list)
  * We don't have a great deal of control over how VM users set up their swap space, which could do damage to SSDs if we're not careful. It may be worth making sure that VM storage only gets spinning disks and RAM cache to avoid this problem.
 * /home directory storage - do we want to take /home off motsugo?
  * What are the implications of this on things like dovecot which complain loudly if home dirs are missing?
  * The storage and speed needs of /home can be measured already...but we want to be a little faster of course
  * /home is also shared with samba
 * /services - already on an NFS setup. Works well, but needs cleaning up. Not a heavy load, as databases are stored locally on their servers. Most writes will be from webcams, and most reads are probably for web.
 * /away - we want this to be pretty fast to reduce login times and make the clubroom user experience good
  * very good candidate for SSD caching
  * /away needs to have ACL support and Samba (3 or 4) for Windows. This means either exporting the volume to another server and sharing it from there, or running Samba on the storage server itself. One is easier to upgrade but has a performance penalty; the other is harder to upgrade but more direct (see the smb.conf sketch after this list). ``Being UCC I think we should put it on a separate server so we can break one thing without breaking another`` -- BobAdamson
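
The snapshotting wish above is easy to satisfy if the new box ends up on a filesystem with native snapshots. A minimal sketch, assuming ZFS (nothing on this page commits us to ZFS, and the pool/dataset name is hypothetical):

{{{
# nightly cron job: snapshot /vmstore, keep only the newest 14 snapshots
zfs snapshot tank/vmstore@$(date +%Y-%m-%d)
zfs list -t snapshot -o name -s creation | grep '^tank/vmstore@' \
  | head -n -14 | xargs -r -n1 zfs destroy
}}}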
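
And the "separate Samba server" option for /away would look roughly like this. A sketch, not a decision; the hostname is an assumption:

{{{
# On the storage server - export /away to the samba box
# /etc/exports
/away   samba-host.ucc.asn.au(rw,no_subtree_check)

# On the separate samba server - share the NFS mount to Windows clients
# /etc/samba/smb.conf
[away]
   path = /away
   read only = no
   # map Windows ACL inheritance onto the POSIX ACLs on the filesystem
   inherit acls = yes
}}}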
If you stand back and look at this list, our storage needs aren't exactly huge or complex except for the VM storage. VM usage in UCC is growing pretty fast and we need to account for that, but we should otherwise be looking at future-proofing our setup. This means 2.5" disks (because that's what industry is moving towards) and standard parts wherever possible (particularly the case).
Our current setup has the NAS on multiple 1Gb links and the SAN on multipathed 2Gb FC. Ideally the new system would not be slower than that.

== Case ==
2RU cases are nice because they give us better storage density; however, space is not a huge concern in the UCC machine room. If there's a 3RU case that would be more suitable and allow us to have full-height cards, all the better. Beware of 2RU cases - unless you get the correct rear panel you can only have half-height PCI cards.

Redundant power supplies are absolutely essential. We should try and get the same one a medico has so that we have spares.

Need front, hot-swappable hard drive bays.

== PCI SSD ==
Could use a PCI-E SSD for magic caching. Upside: crazy fast. Downside: can't be RAIDed (at least not trivially).
Because of the no-RAID problem, we might be better off with two normal SSDs in RAID for the OS (see the sketch below).
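
A minimal sketch of that OS array with mdadm (RAID 1 and the device names are assumptions):

{{{
# mirror two ordinary SATA SSDs for the OS
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
}}}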

No advantage to using a PCI-E SSD:
[[http://ssd-comparison.whoratesit.com/Compare/OCZ-Vertex-4-256GB-vs-OCZ-RevoDrive-3-X2-240GB/1315vs1403]]
More expensive, less good.

== RAM ==
DDR3-1600 ECC RAM is required.

Most motherboards worth buying will have 16 DIMM slots, so we can get a lot of RAM. With enough RAM for caching, we may never need an SSD cache at all (see the sysctl note at the end of this section).


 * For about $600 we could have 16x2GB = 32GB
 * For about $1,000 we could have 16x4GB = 64GB
 * For about $1,600 we could have 16x8GB = 128GB
 * For about $3,600 we could have 16x16GB = 256GB
 * For about $7,400 we could have 16x32GB = 512GB
 * For about $14,400 we could have 16x64GB = 1024GB

Battery-backed RAM is interesting but lacks OS support.
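
On the "RAM instead of SSD cache" point: Linux already uses all otherwise-free RAM as page cache, so a big-RAM box caches reads for free. The tunable part is how much dirty data may pile up before writeback to disk. A sketch (the values are assumptions to benchmark, not recommendations):

{{{
# /etc/sysctl.d/storage.conf
vm.dirty_background_ratio = 5    # start background writeback at 5% of RAM
vm.dirty_ratio = 20              # throttle writers once 20% of RAM is dirty
}}}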


== Network / Interfacing ==
=== 10Gb/s Ethernet ===
Looking at about $350 per port for PCIe cards, and $900 for an 8-port switch. A 10Gig card will work with a 1Gig switch, but it's kinda silly to do that. It's doable, and probably worth getting the switch and cards at the same time - 10Gig Ethernet has to start somewhere! Some motherboards have 10GBase-T ports. Consider 10GBase-T vs SFP+ Twinax cabling.

=== Multiple 1Gb/s Ethernet ===
Many (most?) server motherboards come with dual Ethernet ports, some with quad. Use multipath routing magic for extra speed (bonding sketch below).
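
"Multipath routing magic" in practice usually means Linux bonding. A sketch for a Debian-style /etc/network/interfaces (assumes the ifenslave package; the address and interface names are hypothetical):

{{{
auto bond0
iface bond0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
}}}

Note that 802.3ad (LACP) needs switch support and balances per flow, so a single client still tops out at 1Gb/s; it's the aggregate across clients that improves.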

=== iSCSI ===
Could install an iSCSI card, letting some/all of the disks be mounted as block devices over the network, like the SAN does over fibre. Prob not worth the effort/cost.
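
If we did go that way, the client side needs nothing exotic, just open-iscsi. A sketch (the target address and IQN are made up):

{{{
# discover and log in to a target; the LUN then appears as /dev/sdX
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2013-11.au.asn.ucc:vmstore -p 192.0.2.10 --login
}}}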

== Possible Build #43 ==
Case:
 * This one, $107: http://www.wacomputers.com.au/rack-mountable-server-chassis-case-2u-650mm-depth-with-atx-psu-window-p324760.html

PSU:
 * This one, $550: http://www.techbuy.com.au/p/157973/POWERSUPPLIES_REDUNDANT_PSUS/Zippy/R2W-6460P.asp

Rails:
 * This one, $45.97, to suit the above generic case: http://www.wacomputers.com.au/rack-mountable-server-case-metal-slide-rails-600mm-p324756.html

Hotswap Bay:
 * $39 each (same as the one in heathred), two of: http://www.pccasegear.com/index.php?main_page=product_info&cPath=408&products_id=20990

CPU:
 * E3 option, $236: http://www.wacomputers.com.au/intel-bx80637e31220v2-quad-core-xeon-cpu-e3-1220v2-lga1155-32ghz-8mb-cache-4-threads-p327001.html
 * E5 option, $347: http://www.wacomputers.com.au/intel-bx80621e52609-4-core-xeon-e5-2609-24ghz-10mb-cache-64gtsec-4-threads-socket-lga-2011-p320552.html

Motherboard:
 * Supermicro MBD-X9DRH-7TF-O: http://www.newegg.com/Product/Product.aspx?Item=N82E16813182351
 * http://www.ebay.com.au/itm/like/390687544029?lpid=87

SSDs:
 *

HDD:
 *
