Molmol is the File Server. See also: [[MoneyMoneyMoney/NewStorage]].

== Hardware ==

The "Sick of your moaning" design is *almost* what we have. Except the PSU, case, SAS expander card, and (hopefully soon) an SSD bay. It is basically the same. People who care to correct this.

== Root Filesystem ==

There are 2 SSDs, each with the same 3 partitions, paired into three RAID1 arrays:

  * /dev/md0 is /dev/sda1 and /dev/sdb1 - /boot, 512MB, ext2
  * /dev/md1 is /dev/sda2 and /dev/sdb2 - swap, 4GB
  * /dev/md2 is /dev/sda5 and /dev/sdb5 - /, LVM ext4, 227GB

In theory this works. In practice it did work, but only after a reinstall (keeping the first install's disk layout).

Swap is in RAID1 because [SLX] wanted swap (but not in RAID1), and I wanted `/dev/sda` and `/dev/sdb` to be identical, so swap went on both disks in RAID1. We probably don't need swap; it's 2% of the disk. Deal with it.
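For reference, a minimal sketch of how arrays with this layout could be created with mdadm. This is a reconstruction from the layout above, not necessarily what the installer actually ran.

{{{
# Sketch only - assumes /dev/sda and /dev/sdb are already partitioned identically
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # swap
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5   # LVM PV for /
# Record the arrays so they assemble at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
}}}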

{{{
root@molmol:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb5[3] sda5[2]
      245520192 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[2] sdb2[1]
      3904448 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[3] sda1[2]
      498368 blocks super 1.2 [2/2] [UU]
}}}

{{{
root@molmol:~# fdisk -l /dev/sda /dev/sdb

Disk /dev/sda: 256.1 GB, 256060514304 bytes
255 heads, 63 sectors/track, 31130 cylinders, total 500118192 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004e00d

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      999423      498688   fd  Linux raid autodetect
/dev/sda2          999424     8812543     3906560   fd  Linux raid autodetect
/dev/sda3         8814590   500117503   245651457    5  Extended
/dev/sda5         8814592   500117503   245651456   fd  Linux raid autodetect

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdb: 256.1 GB, 256060514304 bytes
255 heads, 63 sectors/track, 31130 cylinders, total 500118192 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048      999423      498688   fd  Linux raid autodetect
/dev/sdb2          999424     8812543     3906560   fd  Linux raid autodetect
/dev/sdb3         8814590   500117503   245651457    5  Extended
/dev/sdb5         8814592   500117503   245651456   fd  Linux raid autodetect
}}}

== ZFS ==

Followed this guide: http://bernaerts.dyndns.org/linux/75-debian/279-debian-wheezy-zfs-raidz-pool

Didn't set up snapshots.
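A pool with the layout shown below could be created with something like the following. This is a sketch only, reconstructed from the zpool output; ZFS documentation generally recommends persistent /dev/disk/by-id names over bare sdX letters, so the commands actually run may have differed.

{{{
# Sketch only - not the commands actually run
zpool create over raidz1 sdc sdd sde sdf sdg sdh sdi sdj
zfs create -o mountpoint=/there over/there
}}}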

{{{
root@molmol:~# zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
over  7.25T  1.64M  7.25T     0%  1.00x  ONLINE  -

root@molmol:~# zpool status
  pool: over
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jul 27 18:02:05 2014
config:

        NAME        STATE     READ WRITE CKSUM
        over        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdh     ONLINE       0     0     0
            sdi     ONLINE       0     0     0
            sdj     ONLINE       0     0     0

errors: No known data errors
}}}
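The scrub shown in the status output can be run by hand (and could be put in cron), for example:

{{{
# Start a scrub of the pool, then check its progress
zpool scrub over
zpool status over
}}}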

The ZFS dataset over/there is mounted at /there and is currently exported to Motsugo.

{{{
root@molmol:~# df -h
Filesystem                                    Size  Used Avail Use% Mounted on
rootfs                                        227G  1.5G  214G   1% /
udev                                           10M     0   10M   0% /dev
tmpfs                                         1.6G  540K  1.6G   1% /run
/dev/mapper/molmol-molmol--rootfs             227G  1.5G  214G   1% /
tmpfs                                         5.0M     0  5.0M   0% /run/lock
tmpfs                                         3.9G     0  3.9G   0% /run/shm
/dev/md0                                      457M   19M  414M   5% /boot
services.ucc.gu.uwa.edu.au:/space/away/home   1.9T  1.7T   26G  99% /away
home.ucc.gu.uwa.edu.au:/home                  2.0T  952G  963G  50% /home
nortel.ucc.gu.uwa.edu.au:/vol/space/services  884G  674G  211G  77% /services
over/there                                    6.1T  256K  6.1T   1% /there
}}}

{{{
root@molmol:~# cat /etc/exports
/there  motsugo(rw,sync,no_root_squash,mountpoint,no_subtree_check,secure)
}}}
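After editing /etc/exports, the export can be re-applied with exportfs; on the client it mounts like any other NFS share. The client-side mount point and fully-qualified hostname below are assumptions, not copied from Motsugo's fstab.

{{{
# On molmol: re-read /etc/exports
exportfs -ra
# On motsugo (hypothetical mount point; hostname assumed from the naming scheme above)
mount -t nfs molmol.ucc.gu.uwa.edu.au:/there /there
}}}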

GLHFDD.