= molmol =

Molmol is the File Server. See also: MoneyMoneyMoney/NewStorage.

== Hardware ==

The "Sick of your moaning" design is *almost* what we have. Except the PSU, case, SAS expander card, and (hopefully soon) an SSD bay. It is basically the same. People who care to correct this.

== Root Filesystem ==

There are 2 SSDs, partitioned with a GUID Partition Table (GPT).

Partition 1 on both stores the boot loader (GRUB). Partition 2 on sda will contain a FreeBSD partition. Partition 2 on sdb holds an mdadm RAID 1 array (currently running degraded, see below) containing an LVM volume group with /, /boot and swap. Partition 3 on both forms a mirror for the ZFS SLOG (separate intent log). Partition 4 on both forms a spanned (i.e. not mirrored) ZFS L2ARC (read cache).
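
For reference, a layout like this could be reproduced with sgdisk. This is a sketch only, inferred from the gdisk listings below rather than a record of the commands actually used; `-a 2` is needed because the partitions are aligned on 2-sector rather than 2048-sector boundaries.

{{{
# Sketch only: recreate the documented layout on one SSD (repeat for the other).
# Type codes match the gdisk output below: EF02 = BIOS boot, FD00 = Linux RAID,
# A504 = FreeBSD ZFS.  -a 2 relaxes alignment to 2-sector boundaries.
sgdisk --zap-all /dev/sda
sgdisk -a 2 -n 1:34:2047            -t 1:EF02 -c 1:"BIOS boot partition" /dev/sda
sgdisk -a 2 -n 2:2048:83886080      -t 2:FD00 -c 2:"molmol-system" /dev/sda
sgdisk -a 2 -n 3:83886082:88080385  -t 3:A504 -c 3:"molmol-slog" /dev/sda
sgdisk -a 2 -n 4:88080386:500118158 -t 4:A504 -c 4:"molmol-l2arc" /dev/sda
}}}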

{{{
root@molmol:~# gdisk -l /dev/sda
Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 500118192 sectors, 238.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 77FC147A-5A20-486B-88E2-9EA0FAEC4D15
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 500118158
Partitions will be aligned on 2-sector boundaries
Total free space is 1 sectors (512 bytes)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            2047   1007.0 KiB  EF02  BIOS boot partition
   2            2048        83886080   40.0 GiB    FD00  molmol-system
   3        83886082        88080385   2.0 GiB     A504  molmol-slog
   4        88080386       500118158   196.5 GiB   A504  molmol-l2arc
}}}

{{{
root@molmol:~# gdisk -l /dev/sdb
Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 500118192 sectors, 238.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 80F4D19D-44F2-4851-90B5-E7CBEC7B23C3
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 500118158
Partitions will be aligned on 2-sector boundaries
Total free space is 1 sectors (512 bytes)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            2047   1007.0 KiB  EF02  BIOS boot partition
   2            2048        83886080   40.0 GiB    FD00  molmol-system
   3        83886082        88080385   2.0 GiB     A504  molmol-slog
   4        88080386       500118158   196.5 GiB   A504  molmol-l2arc
}}}

{{{
root@molmol:~# cat /proc/mdstat
Personalities : [raid1] 
md1 : active raid1 sdb2[1]
      41909120 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>
}}}

{{{
root@molmol:~# lvs
  LV   VG     Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  boot molmol -wi-ao-- 512.00m                                           
  root molmol -wi-ao--  30.00g                                           
  swap molmol -wi-ao--   4.00g                                           
root@molmol:~# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/md1   molmol lvm2 a--  39.96g 5.46g
}}}
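
The stack above could have been assembled along these lines. This is a hedged sketch inferred from the mdstat, lvs and pvs output, not a record of what was actually run.

{{{
# Sketch only: a one-disk (degraded) RAID 1 on sdb2, since sda2 is reserved
# for FreeBSD, with the "molmol" volume group and its LVs on top.
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.2 /dev/sdb2 missing
pvcreate /dev/md1
vgcreate molmol /dev/md1
lvcreate -L 512M -n boot molmol
lvcreate -L 30G -n root molmol
lvcreate -L 4G -n swap molmol
}}}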

== ZFS ==

The zpool was assembled along the lines of https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/

{{{
root@molmol:~# zpool status
  pool: space
 state: ONLINE
  scan: resilvered 0 in 0h0m with 0 errors on Tue Jul 29 12:42:34 2014
config:

        NAME                                                      STATE     READ WRITE CKSUM
        space                                                     ONLINE       0     0     0
          mirror-0                                                ONLINE       0     0     0
            ata-WDC_WD10JFCX-68N6GN0_WD-WX11E83HKN64              ONLINE       0     0     0
            ata-WDC_WD10JFCX-68N6GN0_WD-WXF1A8371196              ONLINE       0     0     0
          mirror-1                                                ONLINE       0     0     0
            ata-WDC_WD10JFCX-68N6GN0_WD-WXF1A83E2255              ONLINE       0     0     0
            ata-WDC_WD10JFCX-68N6GN0_WD-WXF1A8372507              ONLINE       0     0     0
          mirror-2                                                ONLINE       0     0     0
            ata-WDC_WD10JFCX-68N6GN0_WD-WXM1E83KPU73              ONLINE       0     0     0
            ata-WDC_WD10JFCX-68N6GN0_WD-WXM1E83KPT93              ONLINE       0     0     0
          mirror-3                                                ONLINE       0     0     0
            ata-WDC_WD10JFCX-68N6GN0_WD-WXM1E83JZD83              ONLINE       0     0     0
            ata-WDC_WD10JFCX-68N6GN0_WD-WX11E83HKM57              ONLINE       0     0     0
        logs
          mirror-4                                                ONLINE       0     0     0
            ata-Samsung_SSD_840_PRO_Series_S1ATNSAD864731A-part3  ONLINE       0     0     0
            ata-Samsung_SSD_840_PRO_Series_S1ATNSAD864729Z-part3  ONLINE       0     0     0
        cache
          ata-Samsung_SSD_840_PRO_Series_S1ATNSAD864731A-part4    ONLINE       0     0     0
          ata-Samsung_SSD_840_PRO_Series_S1ATNSAD864729Z-part4    ONLINE       0     0     0

errors: No known data errors
}}}
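
A pool with this topology could be created roughly as follows. This is a sketch in the style of the pthree.org guide, not the exact command used; D1..D8 and S1/S2 stand for the /dev/disk/by-id names shown above, and ashift=12 is an assumption about the 4K-sector disks.

{{{
# Sketch only: four two-way data mirrors, a mirrored SLOG on the SSDs'
# partition 3, and a striped L2ARC on partition 4.
# D1..D8 = /dev/disk/by-id/ata-WDC_WD10JFCX-* ; S1,S2 = the Samsung 840 PROs.
zpool create -o ashift=12 space \
    mirror "$D1" "$D2" \
    mirror "$D3" "$D4" \
    mirror "$D5" "$D6" \
    mirror "$D7" "$D8" \
    log mirror "${S1}-part3" "${S2}-part3" \
    cache "${S1}-part4" "${S2}-part4"
}}}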

{{{
root@molmol:~# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
space           40.4M  3.57T   168K  /space
space/away       136K  3.57T   136K  /space/away
space/scratch   39.2M  3.57T  39.2M  /space/scratch
space/services   136K  3.57T   136K  /space/services
space/vmstore    136K  3.57T   136K  /space/vmstore
}}}
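
The datasets above can be created with plain `zfs create`; whether any properties (compression, quotas, NFS sharing) were set is not recorded here.

{{{
# Sketch only: child datasets inherit the default mountpoint under /space.
zfs create space/away
zfs create space/scratch
zfs create space/services
zfs create space/vmstore
}}}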

GLHFDD.