In this tutorial series, we are setting up a highly available WordPress web site from scratch.
Part 1 – Introduction, Considerations, and Architecture
Part 2 – Ordering the VPSes
Part 3 – Ansible
Part 4 – Gluster (this article)
Part 5 – WordPress install
Part 6 – MariaDB Multi-Master
Part 7 – Round-Robin DNS, Let’s Encrypt, & Conclusion
We’ve made tons of progress so far, and using Ansible really sped up the server setup. Now let’s get GlusterFS configured.
GlusterFS is a replicated filesystem that allows us to have the exact same directories, files, and permissions on each node. We’ll host our web files on it, so that if we upload some art or other files on one node, they’re instantly replicated to the others. We’ll also leverage it to make Let’s Encrypt and Nginx easier, as well as using it for an easy transport mechanism for our DB backups when we’re setting up MariaDB.
For now, we’ll just get it installed and set up.
Because there’s some back and forth, I’m going to label some parts “on each node” and “on node1”. It’s helpful to open an SSH session to each node so you can switch back and forth.
On Each Node
If you run dmesg, you should see something like this:
[ 1.714306] sd 0:0:0:1: [sdb] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
There’s usually a lot of output in dmesg, so try
dmesg | grep sdb
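If dmesg has already rotated past the boot messages, lsblk will tell you the same thing. This is just a sanity check, and it assumes your new volume really did show up as /dev/sdb; if it appears under a different name, substitute that name in the commands below.

lsblk -f /dev/sdb
# expect a single ~10 GiB disk; the FSTYPE column shows whatever
# filesystem (if any) the provider pre-created on it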
This volume was probably mounted by default, so unmount it:
umount /dev/sdb
And remove any reference to it from /etc/fstab. On my node1, I removed this line:
/dev/disk/by-id/scsi-0HC_Volume_100431844 /mnt/HC_Volume_100431844 xfs discard,nofail,defaults 0 0
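If you’d rather script that edit than open an editor, something like this works. The HC_Volume ID shown is specific to my node1, so match whatever line your node actually has in its fstab:

cp /etc/fstab /etc/fstab.bak                 # keep a backup, just in case
sed -i '/HC_Volume_100431844/d' /etc/fstab   # drop the old mount line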
Now let’s nuke the partition label and make a new one, create a new partition that uses 100% of the drive, and make an XFS filesystem on it:
root@node1:~# parted /dev/sdb --align opt mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
Information: You may need to update /etc/fstab.
root@node1:~# parted /dev/sdb mkpart xfs 0% 100%
Information: You may need to update /etc/fstab.
root@node1:~# mkfs.xfs -f -i size=512 /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=655232 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=2620928, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.
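If you want to double-check what parted and mkfs.xfs left behind before mounting anything, these two read-only commands confirm that /dev/sdb1 exists and carries an XFS filesystem:

parted /dev/sdb print
blkid /dev/sdb1          # should report TYPE="xfs"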
Now we can set up a mount point for it and mount it:
root@node1:~# mkdir -p /data/brick1
root@node1:~# echo '/dev/sdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
root@node1:~# systemctl daemon-reload
root@node1:~# mount -a
root@node1:~# mount | grep sdb1
/dev/sdb1 on /data/brick1 type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
root@node1:~# df -h | grep sdb1
/dev/sdb1        10G  104M  9.9G   2% /data/brick1
Now lather, rinse, repeat for each node.
On Each Node
Now we can enable and start the glusterd service:
systemctl enable glusterd.service
systemctl start glusterd.service
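If you want to confirm the daemon actually came up before moving on, these are quick checks. (This assumes the glusterfs-server package was already installed back in the Ansible part of this series; if it wasn’t, apt install glusterfs-server on Debian/Ubuntu will get you there.)

systemctl status glusterd.service --no-pager
gluster --version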
On node1
Let’s get our GlusterFS cluster set up.
root@node1:~# gluster peer probe node2.lowend.party
peer probe: success
root@node1:~# gluster peer probe node3.lowend.party
peer probe: success
On node2
root@node2:~# gluster peer probe node1.lowend.party
peer probe: success
root@node2:~# gluster peer probe node3.lowend.party
peer probe: Host node3.lowend.party port 24007 already in peer list
On node3
root@node3:~# gluster peer probe node1.lowend.party
peer probe: Host node1.lowend.party port 24007 already in peer list
root@node3:~# gluster peer probe node2.lowend.party
peer probe: Host node2.lowend.party port 24007 already in peer list
On node1
root@node1:~# gluster peer status
Number of Peers: 2

Hostname: node2.lowend.party
Uuid: 4fdc500b-d0dc-4853-9eac-c30f31bf80d8
State: Peer in Cluster (Connected)

Hostname: node3.lowend.party
Uuid: 363cac9f-5cf2-485d-ac70-37afa25cd785
State: Peer in Cluster (Connected)
If you run that on node2 and node3, you should see similar information. GlusterFS is working! Now let’s create a brick directory on each node and build a replicated volume on top of it.
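A more compact view of the same thing is gluster pool list, which also includes the local node:

gluster pool list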
On Each Node
mkdir -p /data/brick1/gv0
On node1
root@node1:~# gluster volume create gv0 replica 3 node1.lowend.party:/data/brick1/gv0 node2.lowend.party:/data/brick1/gv0 node3.lowend.party:/data/brick1/gv0
volume create: gv0: success: please start the volume to access data
root@node1:~# gluster volume start gv0
volume start: gv0: success
root@node3:~# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 7be70513-926d-441e-99d8-e339e59db30a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1.lowend.party:/data/brick1/gv0
Brick2: node2.lowend.party:/data/brick1/gv0
Brick3: node3.lowend.party:/data/brick1/gv0
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
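volume info shows the configuration; if you also want to see whether each brick process is actually online and which port it’s listening on, gluster volume status is the command for that:

gluster volume status gv0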
On Each Node
mkdir /gluster
Add to /etc/fstab:
echo "localhost:/gv0 /gluster glusterfs defaults,_netdev,noauto,x-systemd.automount 0 0" >> /etc/fstab
And then:
systemctl daemon-reload
If you don’t understand that fstab entry, I’ll refer you to this blog post, which points out a problem:
When running a GlusterFS cluster, you may want to use the volume(s) on the servers themselves. During the boot process, GlusterFS will take a bit of time to start. systemd-mount, which handles the mount points from /etc/fstab, will run before the glusterfs-server service finishes starting. The mount will fail, so you will end up without your mounted volume after a reboot.
There’s a solution presented there, but the solution in the first comment is simpler.
Now reboot to make sure GlusterFS starts and the volume mounts as expected. For me, it did.
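If you want something more concrete than eyeballing it, these checks after the reboot show the automount unit systemd generated from that fstab line and confirm the volume is really mounted. (The unit name gluster.automount is derived from the /gluster mount point.)

systemctl status gluster.automount --no-pager
df -h /gluster          # accessing the path triggers the automount
mount | grep gv0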
Testing Gluster
Let’s give Gluster a spin. On node1:
echo "This is a test file to see if gluster is working." > /gluster/testfile.txt
And then hopping over to another node:
root@node3:~# cat /gluster/testfile.txt
This is a test file to see if gluster is working.
Woot!
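If you’re curious where the data actually lives, you can peek at the brick path on any node. Just remember that the brick directory is Gluster’s private storage: read it if you like, but only ever write through the /gluster mount.

ls -l /data/brick1/gv0/
# testfile.txt should show up here on all three nodes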
In the next section, we’ll get WordPress installed, leveraging Gluster to make it easy.