Differences between revisions 1 and 7 (spanning 6 versions)
Revision 1 as of 2014-07-27 14:00:17
Size: 28
Editor: wireless46
Comment:
Revision 7 as of 2014-07-29 13:17:59
Size: 5102
Editor: DavidAdam
Comment: updated new disk layouts
{{attachment:molmol.jpg|molmol|width=400}}

Molmol is the File Server. See also: [[MoneyMoneyMoney/NewStorage]].

== Hardware ==
The "Sick of your moaning" design is *almost* what we have. Except the PSU, case, SAS expander card, and (hopefully soon) an SSD bay. It is basically the same. People who care to correct this.

== Root Filesystem ==

There are two SSDs (sda and sdb), each partitioned with a GUID Partition Table (GPT):

 * Partition 1 on both disks stores the boot loader (GRUB).
 * Partition 2 on sda is reserved for a future FreeBSD installation.
 * Partition 2 on sdb holds an mdadm RAID1 containing an LVM volume group with /, /boot and swap.
 * Partition 3 on both disks forms a mirror for the ZFS SLOG (separate intent log).
 * Partition 4 on both disks forms a spanned (i.e. not mirrored) ZFS L2ARC (read cache).

A sketch of commands that would reproduce this layout follows the gdisk listings below.

{{{
root@molmol:~# gdisk -l /dev/sda
Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 500118192 sectors, 238.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 77FC147A-5A20-486B-88E2-9EA0FAEC4D15
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 500118158
Partitions will be aligned on 2-sector boundaries
Total free space is 1 sectors (512 bytes)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            2047   1007.0 KiB  EF02  BIOS boot partition
   2            2048        83886080   40.0 GiB    FD00  molmol-system
   3        83886082        88080385   2.0 GiB     A504  molmol-slog
   4        88080386       500118158   196.5 GiB   A504  molmol-l2arc
}}}
{{{
root@molmol:~# gdisk -l /dev/sdb
Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 500118192 sectors, 238.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 80F4D19D-44F2-4851-90B5-E7CBEC7B23C3
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 500118158
Partitions will be aligned on 2-sector boundaries
Total free space is 1 sectors (512 bytes)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            2047   1007.0 KiB  EF02  BIOS boot partition
   2            2048        83886080   40.0 GiB    FD00  molmol-system
   3        83886082        88080385   2.0 GiB     A504  molmol-slog
   4        88080386       500118158   196.5 GiB   A504  molmol-l2arc
}}}
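
The layout above could be reproduced with `sgdisk`, roughly as follows. This is a sketch only: the boundaries, type codes and names are taken from the gdisk listings above, and the commands actually used were not recorded.

{{{
# Sketch only - boundaries and type codes copied from the gdisk listings above.
# -a 2 matches the 2-sector alignment shown there. Repeat for /dev/sdb.
sgdisk -a 2 \
    -n 1:34:2047            -t 1:EF02 -c 1:"BIOS boot partition" \
    -n 2:2048:83886080      -t 2:FD00 -c 2:"molmol-system" \
    -n 3:83886082:88080385  -t 3:A504 -c 3:"molmol-slog" \
    -n 4:88080386:500118158 -t 4:A504 -c 4:"molmol-l2arc" \
    /dev/sda
}}}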

{{{
root@molmol:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1]
      41909120 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>
}}}
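
md1 is a two-device RAID1 running with only sdb2 present (sda2 is reserved for FreeBSD), which is why it shows `[2/1] [_U]`. A deliberately degraded array like this could have been created along these lines (a sketch, not the exact command used):

{{{
# Sketch only: RAID1 with one slot left empty, matching the [2/1] [_U] state above.
mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 missing /dev/sdb2
}}}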

{{{
root@molmol:~# lvs
  LV   VG     Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  boot molmol -wi-ao-- 512.00m
  root molmol -wi-ao--  30.00g
  swap molmol -wi-ao--   4.00g
root@molmol:~# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/md1   molmol lvm2 a--  39.96g 5.46g
}}}
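
The LVM volume group sits directly on `/dev/md1`. A sketch of how it could have been set up, with names and sizes taken from the output above (the original commands are not recorded):

{{{
# Sketch only - names and sizes match the lvs/pvs output above.
pvcreate /dev/md1
vgcreate molmol /dev/md1
lvcreate -L 512M -n boot molmol
lvcreate -L 30G  -n root molmol
lvcreate -L 4G   -n swap molmol
}}}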



== ZFS ==

Followed this guide: [[http://bernaerts.dyndns.org/linux/75-debian/279-debian-wheezy-zfs-raidz-pool|http://bernaerts.dyndns.org/linux/75-debian/279-debian-wheezy-zfs-raidz-pool]]

Didn't set up snapshots.

{{{
root@molmol:~# zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
over  7.25T  1.64M  7.25T     0%  1.00x  ONLINE  -
}}}

{{{
root@molmol:~# zpool status
  pool: over
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jul 27 18:02:05 2014
config:

        NAME        STATE     READ WRITE CKSUM
        over        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdh     ONLINE       0     0     0
            sdi     ONLINE       0     0     0
            sdj     ONLINE       0     0     0

errors: No known data errors
}}}
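
The pool is a single eight-disk raidz1 vdev with one dataset, `over/there`, mounted at `/there` (see the `df` output below). It could have been created along these lines; this is a sketch only, and since the SLOG/L2ARC partitions described under Root Filesystem do not appear in the status output above, attaching them is shown as a separate (hypothetical) step.

{{{
# Sketch only - device and dataset names taken from the output on this page.
zpool create over raidz1 sdc sdd sde sdf sdg sdh sdi sdj
zfs create -o mountpoint=/there over/there

# The SSD partitions set aside for SLOG (sda3/sdb3, mirrored) and L2ARC
# (sda4/sdb4, spanned) are not in the pool yet; adding them would look like:
zpool add over log mirror sda3 sdb3
zpool add over cache sda4 sdb4
}}}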

The ZFS dataset `over/there` is mounted at `/there` and is currently exported over NFS to [[Motsugo]].

{{{
root@molmol:~# df -h
Filesystem                                    Size  Used Avail Use% Mounted on
rootfs                                        227G  1.5G  214G   1% /
udev                                           10M     0   10M   0% /dev
tmpfs                                         1.6G  540K  1.6G   1% /run
/dev/mapper/molmol-molmol--rootfs             227G  1.5G  214G   1% /
tmpfs                                         5.0M     0  5.0M   0% /run/lock
tmpfs                                         3.9G     0  3.9G   0% /run/shm
/dev/md0                                      457M   19M  414M   5% /boot
services.ucc.gu.uwa.edu.au:/space/away/home   1.9T  1.7T   26G  99% /away
home.ucc.gu.uwa.edu.au:/home                  2.0T  952G  963G  50% /home
nortel.ucc.gu.uwa.edu.au:/vol/space/services  884G  674G  211G  77% /services
over/there                                    6.1T  256K  6.1T   1% /there
}}}

{{{
root@molmol:~# cat /etc/exports
/there motsugo(rw,sync,no_root_squash,mountpoint,no_subtree_check,secure)
}}}
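
On Motsugo the export could be mounted with something like the following; the fully-qualified hostname and the mount point on Motsugo are assumptions, following the naming pattern in the `df` output above.

{{{
# Sketch only - hostname and target mount point are assumed.
mount -t nfs molmol.ucc.gu.uwa.edu.au:/there /there
}}}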

GLHFDD.
