About

The SAN is dying and the NetApp is destroying its disks; we need another storage solution.

Frames started to put together some form of baseline design. GoogleSpreadSheet

Requirements

Usage

  • /vmstore, with multiple hosts accessing VMs to allow for hosting failover
    • Varied uses! Some people use them for storage, others as build servers
    • Ideally we would have some sort of snapshotting on this because our VM backups are a bit "eh" if I recall correctly -- BobAdamson

    • We don't have a great deal of control over how VM users set up their swap space, which could damage SSDs if we're not careful. It may be worth making sure that VM storage only gets spinning disks and RAM cache to avoid this problem
  • /home directory storage - do we want to take /home off motsugo?
    • What are the implications of this on things like dovecot which complain loudly if home dirs are missing?
    • The storage and speed needs of /home can be measured already...but we want to be a little faster of course
    • /home is also shared with samba
  • /services - already on an NFS setup. Works well, but needs cleaning up. Not a heavy load, as databases are stored locally on their servers. Most writes will be from webcams, and most reads are probably for web.
  • /away - we want this to be pretty fast to reduce login times and make the clubroom user experience good
    • very good candidate for SSD caching
    • /away needs ACL support and Samba (3 or 4) for Windows. This means either exporting the volume to another server and sharing it from there, or running Samba on the storage server itself. One is easier to upgrade but has a performance penalty; the other is harder to upgrade but more direct. Being UCC, I think we should put it on a separate server so we can break one thing without breaking another -- BobAdamson
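If Samba does end up on its own server, the /away share would need ACL mapping enabled. A minimal smb.conf sketch (the export path is an assumption, and the backing filesystem is assumed to be mounted with xattr/ACL support):

```
[away]
    ; hypothetical path - wherever the /away volume is mounted
    path = /export/away
    read only = no
    ; map Windows ACLs onto POSIX ACLs / xattrs on the backing filesystem
    nt acl support = yes
    inherit acls = yes
    vfs objects = acl_xattr
```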

If you stand back and look at these needs, our storage requirements aren't exactly huge or complex except for the VM storage. VM usage in UCC is growing pretty fast and we need to account for that, but we should otherwise be looking at future-proofing our setup. This means 2.5" disks (because that's what industry is moving towards) and standard parts wherever possible (particularly the case). Our current setup has the NAS on multiple 1Gb links and the SAN on multipathed 2Gb FC. Ideally the new system would not be slower than that.

Motherboard

It is good if the motherboard has onboard SAS2 and 10GbE, as those are expensive if we need to buy PCIe cards.

This looks good: http://www.supermicro.com/products/motherboard/Xeon/C600/X9SRH-7TF.cfm http://www.ebay.com/ctg/Super-Micro-Computer-X9SRH-7TF-LGA-2011-Socket-R-MBD-X9SRH-7TF-O-Motherboard-/139851059 This motherboard could be substituted into the Bob and the Frames?Gozz builds (it is the single-socket version of the board they currently have).

Case

2RU cases are nice because they give us better storage density, however space is not a huge concern in the UCC machine room. If there's a 3RU case that would be more suitable and allow us to have full-height cards, all the better. Beware of 2RU cases - unless you get the correct rear panel you can only have half-height PCI cards.

Redundant power supplies are absolutely essential. We should try to get the same one Medico has so that we have spares.

Need front, hot-swappable hard drive bays.

PCI SSD

Could use a PCIe SSD for magic caching. The upside is that it's crazy fast; the downside is that it can't be RAIDed (at least not trivially).

Because it can't be RAIDed, we might be better off with two normal SSDs in RAID for the OS.

No advantage to using a PCIe SSD: http://ssd-comparison.whoratesit.com/Compare/OCZ-Vertex-4-256GB-vs-OCZ-RevoDrive-3-X2-240GB/1315vs1403 More expensive, and no better.

RAM

DDR3-1600 ECC RAM is required.

Most motherboards worth buying will have 16 DIMM slots, so we can get a lot of RAM; with enough of it we might never need an SSD for caching.

  • For about $ 600 we could have 16x2GB = 32GB
  • For about $ 1,000 we could have 16x4GB = 64GB
  • For about $ 1,600 we could have 16x8GB = 128GB
  • For about $ 3,600 we could have 16x16GB = 256GB
  • For about $ 7,400 we could have 16x32GB = 512GB
  • For about $14,400 we could have 16x64GB = 1024GB
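The dollars-per-GB of the options above can be compared with a quick script (prices are the rough figures from the list, in AUD):

```python
# Approximate cost for each 16-DIMM configuration listed above
options = {
    "16x2GB = 32GB": 600,
    "16x4GB = 64GB": 1000,
    "16x8GB = 128GB": 1600,
    "16x16GB = 256GB": 3600,
    "16x32GB = 512GB": 7400,
    "16x64GB = 1024GB": 14400,
}

for config, price in options.items():
    # pull the total capacity out of the label, e.g. "32GB" -> 32
    total_gb = int(config.split("= ")[1].rstrip("GB"))
    print(f"{config}: ${price / total_gb:.2f}/GB")
```

As expected, the sweet spot is in the middle of the range: the big DIMMs carry a hefty per-GB premium.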

Battery-backed RAM is interesting but lacks OS support.

Network / Interfacing

10Gb/s Ethernet

Looking at about $350 per port for PCIe cards, and $900 for an 8 port switch. A 10Gig card will work with a 1Gig switch, but it's kinda silly to do that. It's doable and probably worth looking at to get the switch and card at the same time - 10Gig Ethernet has to start somewhere! Some motherboards have 10GBase-T ports. Consider 10GBase-T vs SFP+ Twinax cabling.

Multiple 1Gb/s Ethernet

Many (most?) server motherboards come with dual Ethernet ports, some with quad. Use multipath routing magic for extra speed.
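For example, two onboard GigE ports could be bonded with LACP. A minimal Debian-style /etc/network/interfaces sketch (interface names, the address, and an 802.3ad-capable switch are all assumptions):

```
# assumes the ifenslave package is installed and the switch
# ports are configured as an LACP (802.3ad) aggregation group
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    # hash on L3+L4 so separate TCP flows can use different links
    bond-xmit-hash-policy layer3+4
```

Note that bonding helps aggregate throughput across multiple clients; a single NFS stream still tops out at one link's speed.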

iSCSI

Could install an iSCSI card, letting some/all of the disks be mounted as block devices over fibre, like the SAN. Probably not worth the effort/cost.

Possible Build #43

Case:

PSU:

Rails:

Hotswap Bay:

CPU:

Motherboard:

(With Management, 8x SAS, 2x Sata 3, 4x sata 6, 2xGE)

RAM:

SSD's:

HDD:

Frames, Gozz 11 Nov

Total $4163.80 (not including shipping for anything except the motherboard, which is posted from the US)

Case:

PSU:

Rails:

  • ???

CPU:

Motherboard:

(With Management, 8x SAS, 2x Sata 3, 4x sata 6, 2xGE)

RAM:

HDD

SSD:

2.5"-3.5" mounting bracket/adaptor: