
Tutorial – The LowEndCluster – Part 4


It’s time for the fourth and final part in the LowEndCluster series, a series of tutorials aimed at effectively using LowEndSpirit or very budget/low-resource boxes to create a redundant cluster hosting a WordPress website for less than $50/year!

As I said last time, we’re focusing on redundancy, not automated fail-over or scaling; that’s for future tutorials. I’m using the easiest approach possible on all aspects of this, which keeps it easy to understand right now and gives us plenty of room to improve in the future. While I’m writing this as part of the LowEndCluster series, each tutorial has value on its own and many of them can be applied to other situations. For example, the MariaDB master-slave tutorial and today’s filesystem one can be used perfectly fine to keep a spare copy of, say, an existing Observium machine. The only limit is your imagination ;-)

This week we’re going to set up our filesystem, which will be SSHFS-based, and we’ll be using rsync to copy all the data to the second server. It’s really quite simple to be honest, but very effective! Let’s get cracking!

The web node – Part 1

First we’re going to create a user account and SSH keys on one of the web nodes. We’re then going to repeat the user creation and copy the SSH keys to the second web node.

Let’s first create the user, called ‘cluster’. We’ll use this user to log into the filesystem server:

sudo adduser cluster

You’ll be asked for a password here. Please provide a strong one. You won’t need it further down the road. After you have provided a password, you’ll be asked a number of questions. Feel free to fill those out, but none of them are required.

Now that you’ve got a new user, switch to that user:

sudo su cluster

And go to its home directory:

cd

From the home directory, run the following command to generate an SSH key pair:

ssh-keygen

You’ll again be asked some questions, most importantly for a passphrase. Leave this empty and press ENTER through it. We don’t want a passphrase, as it would give us issues while mounting the remote filesystem.
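If you want to confirm the key pair was created, you can list the contents of the .ssh directory:

ls -l ~/.ssh

You should see a private key and a matching public key; on the Debian/Ubuntu setups this series assumes, those are typically id_rsa and id_rsa.pub (the names used later in this tutorial).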

Finally, create a directory where we’re going to mount the remote filesystem:

sudo mkdir /filesystem

And ensure the user ‘cluster’ owns that directory:

sudo chown cluster. /filesystem

So, the situation we have right now is as follows:

  • You have a ‘cluster’ user with a strong password
  • You have an SSH key pair for the cluster user without a password
  • You have a /filesystem directory owned by the user ‘cluster’ that will function as a mount point in the future

Repeat the user creation on the second web node, but do not create an SSH key pair there. You should copy the SSH files from your first web node to your second web node:

scp -r .ssh/ node2.example.net:/home/cluster/.ssh

The above command should be run as the user ‘cluster’ from the first web node. What this does is copy the .ssh directory and its contents over to the second web node. Both web nodes now have the same SSH key pair, which will eventually give them access to the filesystem node.
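SSH is picky about key permissions, so if the second web node later refuses to use the key, it’s worth making sure the copied directory and private key are still restricted to the user. As the user ‘cluster’ on the second web node, you could run:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa

This simply re-applies the permissions the key files had on the first web node.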

Before we can take that step, though, we’ll head over to the filesystem node.

The filesystem node – Part 1

On the first filesystem node, repeat the user creation step from above:

sudo adduser cluster

Again, pick a strong password. It doesn’t need to match the one on the web node. You do need to remember it, though, as you’ll have to enter it later.

Once this has been done, you should create a folder on the filesystem node where you want to put your files:

sudo mkdir /filesystem

This will create a directory called ‘filesystem’ in the root of your server. Feel free to put it somewhere else, but I’m using this location to keep things simple.

Now, ensure the user ‘cluster’ owns that directory:

sudo chown cluster. /filesystem

And you should be good!

On the filesystem node, you now have the following situation:

  • A user ‘cluster’ with a strong password
  • A directory (/filesystem) to host your files owned by the user (and user group) ‘cluster’

You should now repeat the above steps on the second filesystem node before you continue.

A short note: initially, I anticipated the need for three filesystem nodes. This is no longer the case. Two will suffice (they don’t have to be KVM either), which means you will have a total of 8 servers: 2 load balancers, 2 web nodes, 2 database nodes, and 2 filesystem nodes. I will elaborate on this in the final notes.

The web node – Part 2

Back on the web node, you can now copy the public key of the ‘cluster’ user’s SSH key pair to the filesystem nodes. As the user ‘cluster’, from the home directory, run:

ssh-copy-id filesystemnode.example.net

Replace filesystemnode.example.net with the hostname of your first filesystem node. You will be asked for a password: use the password you’ve set for the user on the filesystem node! The public SSH key should then be copied. Test it by accessing the filesystem node via SSH:

ssh filesystemnode.example.net

You should now be logged in without it asking for your password. If that is the case, repeat the first step for the second filesystem node:

ssh-copy-id filesystemnode2.example.net

And that should now also have your public SSH key on it.

So, short recap. Right now, you have:

  • Two web nodes with a ‘cluster’ user sharing the same SSH key pair
  • Two filesystem nodes with a ‘cluster’ user that have the web nodes’ user’s SSH public key in the authorized_keys file
  • The ability to access either filesystem node from either web node using SSH without being prompted for a password

With that in mind, we can now mount the remote filesystem on the web nodes.

In order to be able to mount the remote filesystem, you need SSHFS installed on your server. For this to work, you need either an OpenVZ box with FUSE enabled, or a KVM machine. Most providers enable FUSE for you on demand; some have it built into their panel. SolusVM does not give users an option to enable/disable FUSE, so if your provider uses SolusVM, please contact them.
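If you’re not sure whether FUSE is enabled on your box, a quick way to check before contacting your provider is to look for the FUSE device:

ls -l /dev/fuse

If that device exists, SSHFS should be able to mount. If it’s missing on an OpenVZ box, ask your provider to enable FUSE for your container.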

Let’s install SSHFS. On both web nodes, run:

sudo apt-get install sshfs

With SSHFS installed, you can actually mount the remote filesystem right away. We’ll make it “stick” in a bit, as a mount from the CLI won’t survive a reboot, but it’s good to test it. From one of the web nodes, as the user ‘cluster’, run:

sshfs filesystemnode.example.net:/filesystem /filesystem

Replace filesystemnode.example.net with the hostname or IP address of your first filesystem node. The web nodes will work with the first filesystem node from now on; the second one is only there as a spare.
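One hedged extra: SSHFS mounts can drop if the connection to the filesystem node hiccups. If that turns out to be a problem, you could mount with a few standard sshfs/ssh options that make it reconnect automatically, for example:

sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 filesystemnode.example.net:/filesystem /filesystem

This isn’t required for the tutorial to work; it’s just a safety net.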

If the mount doesn’t give any errors (and it shouldn’t), head over to the /filesystem directory on the web node:

cd /filesystem

And try to create a file there:

touch README.md

With that file having been created, let’s go back to the filesystem node.

The filesystem node – Part 2

On the filesystem node, first check if the file you’ve just created from the web node is present:

ls -al /filesystem

In the output, you should see the file ‘README.md’ listed. Neat, right?

OK, so now that we can use the remote filesystem from the web node, we have a situation we can work from. Before we start moving WordPress to the remote filesystem, though, I’d like to set up redundancy. I want to make sure that if the first filesystem node is offline, I can easily switch my web nodes over to the second filesystem node and have the same files there.
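Since the switch-over is manual, here’s a rough sketch of what it would look like on a web node, assuming the first filesystem node is down (adjust the hostnames to your own). First, unmount the dead mount as the user ‘cluster’ (-z does a lazy unmount, in case it hangs):

fusermount -u -z /filesystem

Then mount the second filesystem node in its place:

sshfs filesystemnode2.example.net:/filesystem /filesystem

Later on, once the mount is made permanent via /etc/fstab, you’d also change the hostname in that file so the switch survives a reboot.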

I’m going to use rsync for that. This tool should already be installed, but if it isn’t, here’s how you install it:

sudo apt-get install rsync

And that’s all.

In order to be able to rsync from the first filesystem node to the second filesystem node, the ‘cluster’ user on the first filesystem node needs to be able to access the second filesystem node.

As the cluster user on the first filesystem node, from the home directory of the user, run:

ssh-keygen

Follow the same rule as before: no password.

Now, copy this over to the second filesystem node:

ssh-copy-id filesystemnode2.example.net

And you should now be able to access the second filesystem node via SSH without being asked for a password.

Now, back to rsync. It’s actually extremely easy to get this working. From the first filesystem node, as the user ‘cluster’, run the following command:

rsync -a /filesystem/ filesystemnode2.example.net:/filesystem/

What this does is recursively synchronize all files from the first filesystem node to the second filesystem node.

The ‘-a’ flag does a lot of cool things that you want to happen:

  • Performs a recursive sync (all directories and files under /filesystem in this case)
  • Copies symlinks as actual symlinks
  • Preserves permissions
  • Preserves modification times
  • Preserves the owner and group
  • Preserves device and special files

So, this will actually be a copy of the situation as it is on the first filesystem node rather than a half-assed backup.
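One hedged caveat: ‘-a’ by itself never deletes anything on the second node, so files you remove on the first filesystem node will linger in the copy. If you want a true mirror, you could add the ‘--delete’ flag:

rsync -a --delete /filesystem/ filesystemnode2.example.net:/filesystem/

Be careful with ‘--delete’, though: if you ever run it against an empty or half-mounted source, it will happily wipe the destination to match. A dry run with ‘-n’ (rsync -an --delete /filesystem/ filesystemnode2.example.net:/filesystem/) shows what would be transferred or deleted without actually doing it.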

But running the command by hand isn’t going to help you much. You want to have this run on a regular basis, so we’ll add the command to cron. Depending on the size of the filesystem and how often files change, I’d say running it every 15 minutes is fine for a site with few modifications. Worst-case scenario, you lose 15 minutes of changes to files (not the database). You can change the interval to fit your needs, but keep in mind that rsync needs enough time to complete before it runs again.

On the first filesystem node, as the user ‘cluster’, run:

crontab -e

This will open an editor, or ask you to pick one. If it asks you to pick one, either pick one or press enter. The default is ‘nano’, which should be easiest for most people.

In the file that opens, add the following line:

*/15 * * * * rsync -a /filesystem/ filesystemnode2.example.net:/filesystem/

Now save the file. From this point on, rsync should back up the files from the first filesystem node to the second filesystem node every 15 minutes!
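To double-check that the job was saved, you can list the crontab of the ‘cluster’ user:

crontab -l

The rsync line you just added should show up in the output.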

It’s time to head for the final step when it comes to the filesystem: permanently mounting the remote filesystem on the web nodes. Before we start with that, however, let’s do another recap. We now have:

  • Two web nodes with access to both filesystem nodes over SSH without needing a password
  • SSHFS installed on both web nodes
  • Two filesystem nodes with the first one having access to the second one over SSH without the need for a password
  • A cron running an rsync command every 15 minutes to back up the files from the first filesystem node to the second filesystem node
  • A working situation for mounting the remote filesystem on the web nodes

The web node – Part 3

To mount the remote filesystem permanently, you need to add an entry to /etc/fstab, which lists the filesystems that should be mounted at boot. Since root is the one mounting them at boot, root needs to be able to log in to the filesystem node as the user ‘cluster’. To make that possible, the private SSH key of the ‘cluster’ user needs to be copied to the .ssh directory of the ‘root’ user.

Make yourself root:

sudo su root

And head to your home directory:

cd

From there, run the following command:

cp /home/cluster/.ssh/id_rsa .ssh/

This copies the private key to the root user’s .ssh directory (if /root/.ssh doesn’t exist yet, create it first with ‘mkdir ~/.ssh’ and ‘chmod 700 ~/.ssh’), giving root the ability to log in to the filesystem nodes as the user ‘cluster’.

Now, open up /etc/fstab in your favorite editor (I’m using vim):

vim /etc/fstab

And add the following line:

cluster@filesystemnode.example.net:/filesystem    /filesystem     fuse.sshfs  defaults,_netdev,allow_other  0  0
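A quick note on ‘allow_other’: by default, a FUSE mount is only accessible to the user that mounted it, which in this case is root. ‘allow_other’ opens the mount up to other users on the web node, such as ‘www-data’, which we’ll need later when the web server starts serving files from it.

If you’d rather not copy the private key into /root/.ssh, a hedged alternative is to point the mount at the ‘cluster’ user’s key directly; sshfs passes unknown options through to ssh, so an fstab line like this should also work:

cluster@filesystemnode.example.net:/filesystem    /filesystem     fuse.sshfs  defaults,_netdev,allow_other,IdentityFile=/home/cluster/.ssh/id_rsa  0  0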

Save the file. In order to test this, first unmount your test mount (if you had any):

fusermount -u /filesystem

And then try the fstab file:

mount -a

If there are no errors, you should see the remote filesystem mounted under /filesystem with all your files there. Do this on both web nodes to enable quick switching in case of an issue.
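If you want to confirm the mount came from fstab as expected, you can check the mounted filesystems:

mount | grep /filesystem

You should see the filesystem node as the source and ‘fuse.sshfs’ as the type.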

Once this is done, it’s time for the grand finale step: moving your WordPress files to the remote filesystem!

The “big” migration

There’s one last ‘but’ to this: the web server runs as ‘www-data’ and needs to be able to access the files owned by ‘cluster’. Since the files are accessible to their owner and group, all you need to do is add the ‘www-data’ user on both web nodes to the ‘cluster’ group:

sudo usermod -a -G cluster www-data

This modifies the user ‘www-data’ and adds it to the group ‘cluster’.
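You can verify the change with:

id www-data

The output should list ‘cluster’ among the groups. Keep in mind that a process that is already running doesn’t pick up new group memberships, so restart NGINX (and PHP-FPM, if you use it) on both web nodes after this change. And if WordPress later complains about permissions when uploading files, a hedged fix is to give the group write access on the shared files with ‘sudo chmod -R g+rwX /filesystem’.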

Now you can safely migrate your files to the remote filesystem. Since we stored the files in a user’s home directory, we’re going to copy them from there to the /filesystem directory:

sudo cp -rp /home/username/public_html /filesystem

This copies the files recursively from the old location to the new one, preserving ownership and timestamps. After having done that, make sure all files are owned by the user (and group) ‘cluster’ to prevent permission problems. Since ‘www-data’ is in the group ‘cluster’, it will still be able to access them:

sudo chown -R cluster. /filesystem

Finally, with all the files on the remote filesystem, you need to take one more step for this to work: switching the web server to a different document root. Open up the file /etc/nginx/sites-available/hostname.conf and look for this line:

root /home/username/public_html;

Change that to:

root /filesystem;
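Before restarting, it’s a good idea to test the configuration for syntax errors:

sudo nginx -t

If that reports the configuration is OK, you’re good to go.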

And restart NGINX:

sudo service nginx restart

That’s it! It was quite some work, but you now have your (manual-intervention-required) fully redundant LowEndCluster!

Final notes

As this is quite an elaborate series, I may turn it into a lengthy guide in the future and/or expand on it. What I’ve done so far is touch the bare essentials of the possibilities, and as technology develops, those possibilities will only increase.

I do need to note, though, that for a high-traffic website this may not be the best solution. Especially not with servers spread across the planet.

To get back to the required servers, here’s the list of servers I’ve actually used:

  • 2x BudgetVZ 128MB – Load balancers with IPv4 and IPv6 – €4/year each – €8/year total
  • 2x MegaVZ 512MB – Database servers with NAT IPv4 and IPv6 – €5.50/year each – €11/year total
  • 2x LowEndSpirit 128MB – Web nodes with NAT IPv4 and IPv6 – €3/year each – €6/year total
  • 2x LowEndSpirit 128MB – Filesystem nodes with NAT IPv4 and IPv6 – €3/year each – €6/year total

It differs from my initial list in that KVM machines are no longer required for the filesystem. OpenVZ with FUSE works fine. This means you can actually create this 8-server cluster for less than $35/year (€31)! That’s both LowEnd and fantastic!

I hope you’ve enjoyed reading this series and I look forward to getting back to it in the future. Thank you for reading!

mpkossen

5 Comments

  1. Bubba:

    Why not use Unison for cloning? You can make changes on either side and it just works to keep them in sync.

    May 26, 2015 @ 2:40 am
    • David:

      +1 unison
      I am also using unison over ssh for 2-way sync of my customer’s webspace.

      May 26, 2015 @ 8:18 pm
  2. Hi,.. just my opinion that the “Final Notes” part of your tutorial would be better if you provide an infographic-like illustration regarding explained server setup.

    Also, short explanation about what is, what for and why we need Load balancer nodes and Filesystem nodes will be better and will be very helpful for newbies. thanks

    May 27, 2015 @ 4:57 am
  3. D3matt:

    Am I correct in thinking that the load balancers will see double bandwidth usage since they are both sending and receiving? That would give you about 250GB usable bandwidth on those servers, which doesn’t sound like it goes very far with a WordPress site.

    May 27, 2015 @ 6:47 pm
  4. There is a big issue with using sshfs: there is still a central node, so HA does not exist per se.

    May 30, 2015 @ 12:51 am
