About

The SAN is dying and the NetApp is destroying its disks. We've got to get another storage solution.

Frames has started putting together some form of baseline design: GoogleSpreadSheet

Requirements

Usage

  • /vmstore, with multiple hosts accessing VMs to allow for hosting failover
    • Varied uses! Some people use it for storage, others use it as build servers
    • Ideally we would have some sort of snapshotting on this because our VM backups are a bit "eh" if I recall correctly -- BobAdamson

    • We don't have a great deal of control over how VM users set up their swap space, which could do damage to SSDs if we're not careful. It may be worth making sure that VM storage only gets spinning disks and RAM cache to avoid this problem
  • /home directory storage - do we want to take /home off motsugo?
    • What are the implications of this for things like dovecot, which complains loudly if home dirs are missing?
      • Dovecot used to be on an NFS mount on mooneye and worked fine, but I moved it to motsugo because it's faster running locally. Just move dovecot to wherever /home moves! [msh]
    • The storage and speed needs of /home can already be measured, but of course we want to be a little faster
    • /home is also shared with samba
  • /services - already on an NFS setup. Works well, but needs cleaning up. Not a heavy load, as databases are stored locally on their servers. Most writes will be from webcams, and most reads are probably for web.
  • /away - we want this to be pretty fast to reduce login times and make the clubroom user experience good
    • very good candidate for SSD caching
    • /away needs ACL support and samba (3 or 4) for Windows. This means either exporting the volume to another server and sharing it from there, or running samba on the storage server itself. One is easier to upgrade but has a performance penalty; the other is harder to upgrade but more direct. Being UCC, I think we should put it on a separate server so we can break one thing without breaking another -- BobAdamson

If you stand back and look at it, our storage needs aren't exactly huge or complex except for the VM storage. VM usage in UCC is growing pretty fast and we need to account for that, but otherwise we should be looking at future-proofing our setup. This means 2.5" disks (because that's what industry is moving towards) and standard parts wherever possible (particularly the case). Our current setup has the NAS on multiple 1Gb links and the SAN on multipathed 2Gb FC; ideally the new system would not be slower than that.
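As a sanity check on the "not slower than that" requirement, here's a rough line-rate comparison (a Python sketch; the NAS link count isn't pinned down above, so nas_links is a placeholder, and it assumes "multipathed 2Gb FC" means two 2Gb/s paths):

    # Back-of-envelope line rates, ignoring protocol overhead.
    def gbit_to_mbyte(gbit):
        """Convert a Gb/s line rate to MB/s."""
        return gbit * 1000 / 8

    nas_links = 2     # placeholder for "multiple 1Gb links"
    san_gbps = 2 * 2  # assumption: multipathed 2Gb FC = 2 paths x 2Gb/s

    print("NAS now: ~%d MB/s" % gbit_to_mbyte(nas_links))  # ~250 MB/s
    print("SAN now: ~%d MB/s" % gbit_to_mbyte(san_gbps))   # ~500 MB/s
    print("10GbE:   ~%d MB/s" % gbit_to_mbyte(10))         # ~1250 MB/s

So a single 10GbE link beats both current paths on raw bandwidth, with headroom.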

Motherboard

Onboard SAS2 and 10GbE are good to have, as those are expensive if we need to buy PCI-E cards instead.

This one looks good: http://www.supermicro.com/products/motherboard/Xeon/C600/X9SRH-7TF.cfm http://www.ebay.com/ctg/Super-Micro-Computer-X9SRH-7TF-LGA-2011-Socket-R-MBD-X9SRH-7TF-O-Motherboard-/139851059 This motherboard should be substituted into the Bob and Frames?Gozz builds (it is the single-socket version of the board they currently have).

Case

2RU cases are nice because they give us better storage density; however, space is not a huge concern in the UCC machine room. If there's a 3RU case that would be more suitable and allow us to have full-height cards, all the better. Beware of 2RU cases: unless you get the correct rear panel you can only fit half-height PCI cards.

Redundant power supplies are absolutely essential. We should try to get the same one as medico so that we have spares.

We need front-mounted, hot-swappable hard drive bays.

The case we tried to order is no longer available. I've found this case, and it looks like there is a supplier in Perth who might be able to get it: http://www.supermicro.com/products/chassis/2u/?chs=216 -- bobgeorge33

PCI SSD

We could use a PCI-E SSD for magic caching. Upside: crazy fast. Downside: it can't be RAIDed (at least not trivially).

Because it can't be RAIDed, we might be better off with 2 normal SSDs in RAID for the OS.

There's no advantage to using a PCI-E SSD: http://ssd-comparison.whoratesit.com/Compare/OCZ-Vertex-4-256GB-vs-OCZ-RevoDrive-3-X2-240GB/1315vs1403 More expensive, less good.

RAM

DDR3-1600 ECC RAM is required.

Most motherboards worth buying will have 16 slots. We can get a lot of RAM; with enough of it we would never need an SSD for caching.

Rough price points (see the cost-per-GB sketch below the list):

  • For about $ 600 we could have 16x2GB = 32GB
  • For about $ 1,000 we could have 16x4GB = 64GB
  • For about $ 1,600 we could have 16x8GB = 128GB
  • For about $ 3,600 we could have 16x16GB = 256GB
  • For about $ 7,400 we could have 16x32GB = 512GB
  • For about $14,400 we could have 16x64GB = 1024GB
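A trivial Python sketch turning the list above into dollars per GB (prices and sizes straight from the list):

    # Dollars per GB for each 16-slot RAM option above.
    options = [(600, 2), (1000, 4), (1600, 8), (3600, 16), (7400, 32), (14400, 64)]
    for price, gb_per_stick in options:
        total_gb = 16 * gb_per_stick
        print("$%5d buys %4d GB at $%.2f/GB" % (price, total_gb, price / total_gb))

Running it shows 16x8GB is the cheapest per GB (~$12.50/GB); the bigger sticks cost more per GB, not less.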

Battery-backed RAM is interesting but lacks OS support.

Network / Interfacing

10Gb/s Ethernet

Looking at about $350 per port for PCIe cards, and $900 for an 8-port switch. A 10Gig card will work with a 1Gig switch, but it's kinda silly to do that. It's probably worth getting the switch and the cards at the same time - 10Gig Ethernet has to start somewhere! Some motherboards have 10GBase-T ports. Consider 10GBase-T vs SFP+ twinax cabling.
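For budgeting, a quick sketch of what a small rollout would cost at those prices (host counts are just examples; boards with onboard 10GBase-T skip the card cost):

    # Rough 10GbE rollout: one PCIe card port per host plus one 8-port switch.
    CARD_PER_PORT = 350  # $/port for PCIe cards (from above)
    SWITCH_8PORT = 900   # $ for an 8-port switch (from above)

    def rollout_cost(hosts):
        assert hosts <= 8, "only budgeting for one 8-port switch"
        return hosts * CARD_PER_PORT + SWITCH_8PORT

    for hosts in (1, 2, 4, 8):
        print("%d host(s): $%d" % (hosts, rollout_cost(hosts)))

Even wiring up just the storage server and a couple of VM hosts comes to around $2000, so this is the big-ticket item after the disks.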

Multiple 1Gb/s Ethernet

Many (most?) server motherboards come with dual Ethernet ports, some with quad. Use multipath routing magic for extra speed.

iSCSI

We could install an iSCSI card, letting some/all of the disks be mounted as block devices over fibre, like the SAN. Probably not worth the effort/cost.

SAS Card

Remember SATA3/SAS2 is only worth having for SSDs.

This one reviews very well: the HighPoint RocketRAID 2720SGL, $209 for 8 ports. http://www.pccasegear.com/index.php?main_page=product_info&cPath=385&products_id=19403&zenid=0b1356551ef737db2845d09bd5cdec07 It is SAS2/SATA3, half-height, and punches well above its weight: http://www.tweaktown.com/reviews/4306/highpoint_rocketraid_2720sgl_sata_6g_raid_controller_review/index9.html

It doesn't come with cables.

Final Design (i.e. Sick of your Moaning v2.0)

Base subtotal: ~$3436 (not including all the shipping)

Case:

PSU:

Rails:

  • ???

CPU:

Motherboard:

(with management, 8x SAS2, 2x SATA 6Gb/s, 4x SATA 3Gb/s, 2x GbE)

RAM:

A lot of flexibility here. The motherboard supports 1866/1600/1333/1066MHz ECC DDR3 SDRAM (72-bit, 240-pin gold-plated DIMMs). It also supports non-ECC up to 1600MHz, and both buffered and unbuffered DIMMs.

HDD: WD Reds, 2.5"

(8 in RAID-whatever, + 1 hot spare)

SSD:

Cables: ??? ({*OX}: Please can someone else work this out, I don't know what we have lying around; if it comes to it we can just go pick some up at the shops in Perth)

Level 1 Optional Extras

Subtotal: $756 Total: $4192

Fill all the motherboard storage ports (putting spinning disks on the SATA 2 ports) to reach a total of 10x 1TB spinning disks and 4x 256GB SSDs.

More HDDs:

  • 2x @$120

More SSDs:

  • +2x @$258

Level 2 Optional Extras: add a SAS card

Subtotal: $929 Total: $5121

We are now out of motherboard ports, but we can get an 8-port SAS card. With that we can bring our spinning disks to 16TB raw, which is 8TB in RAID 10 (see the capacity sketch below).
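To double-check those numbers, a minimal capacity sketch (RAID 10 with 1TB disks; the Level 2 figure counts all 16 disks, matching the page, with no hot spare subtracted):

    # Usable RAID 10 capacity with 1TB disks, ignoring filesystem overhead.
    def usable_tb(disks, hot_spares=0):
        """Half the non-spare disks: mirrored pairs, then striped."""
        return (disks - hot_spares) // 2

    print(usable_tb(9, hot_spares=1))  # base build: 8 in RAID + 1 spare -> 4 TB
    print(usable_tb(16))               # Level 2: 16 disks -> 8 TB, as above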

SAS/SATA PCI Card

More HDDs:

  • 6x @$120

Cables:

  • That card doesn't come with cables. [*OX] says someone else should deal with this. They're pretty cheap if we don't have any.