By now we have completed a fair amount of the cluster setup:
- Host IP addressing
- NLB configuration (ldirectord / IPVS)
- HA configuration (Corosync / Pacemaker)
- Cluster node configuration (Squid)
As mentioned in my project aims, the cluster configuration must be consistent. That is, configuration changes made in one place must be echoed to all other relevant nodes in the cluster. This can be achieved by using file synchronisation software. There are a couple of options available, the most obvious being rsync, but employing filesystem replication is possible too. I chose not to go with either of these in favour of csync2 by Clifford Wolf.
Csync2 is a cluster synchronisation tool: it detects file changes on one host and replicates them to the other hosts in the cluster. It also allows custom actions to be performed when a file change is detected. For example, if it detects that the Squid conf file has been modified, it will push the changed file out to the other nodes and then reload Squid on those nodes. I have used this feature extensively and it has proved to be very reliable.
Installation and Environment Setup
All hosts
Before continuing, I recommend looking over the csync2 documentation from the csync2 website. It's quite concise and easy to read. Begin by installing csync2 on all hosts (both HA and NLB nodes):
root@lvs-director1:~# apt-get install --yes csync2
csync2 can be configured to keep backups of files it replaces. It's not required but I definitely recommend it! Create a folder for backups:
root@lvs-director1:~# mkdir /var/backups/csync2
Files are replicated over SSL by default, so each host will need to be configured with an SSL certificate - a self-signed one is fine. However, csync2 expects the key and certificate at the exact paths used below, so generate them with these commands rather than your usual procedure:
root@lvs-director1:~# openssl genrsa -out /etc/csync2_ssl_key.pem 1024
root@lvs-director1:~# openssl req -batch -new -key /etc/csync2_ssl_key.pem -out /etc/csync2_ssl_cert.csr
root@lvs-director1:~# openssl x509 -req -days 6000 -in /etc/csync2_ssl_cert.csr -signkey /etc/csync2_ssl_key.pem -out /etc/csync2_ssl_cert.pem
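If you want to sanity-check the result before moving on, OpenSSL can confirm the certificate's expiry date and that the key is structurally valid - a quick check using the same paths as above:

```shell
# Show the subject and expiry date of the self-signed certificate
openssl x509 -in /etc/csync2_ssl_cert.pem -noout -subject -enddate
# Verify the private key itself ("RSA key ok" on success)
openssl rsa -in /etc/csync2_ssl_key.pem -noout -check
```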
Directors Only
Due to the nature of its operation, csync2 runs as root. This is a problem if you want to initiate a sync from the web interface (which runs as the Apache user www-data). Later we will schedule a sync at regular intervals, so this isn't strictly necessary, but it is quite convenient! Run this command as root - it sets the setuid bit, allowing csync2 to be run as root by any user on the system (use with care!). You only need to run this command on the two directors, as these are the only hosts syncs will be initiated from.
root@lvs-director1:~# chmod u+s /usr/sbin/csync2
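You can confirm the setuid bit took effect with a quick listing - the owner's execute bit shows as an 's' instead of 'x':

```shell
# -rwsr-xr-x in the first column indicates the setuid bit is set
ls -l /usr/sbin/csync2
```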
In addition to SSL, csync2 requires that all sync partners hold the same pre-shared key. As with Corosync, do this on the server's physical console rather than over an SSH session: the key generator needs entropy, and while csync2 doesn't prompt you to enter random characters, typing some at the console is what supplies it. Generate the key on a director and SCP it to all other nodes.
root@lvs-director1:~# csync2 -k /etc/csync2.key
root@lvs-director1:~# scp /etc/csync2.key root@lvs-director2:/etc
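Since every node must hold an identical copy of the key, it's worth comparing checksums after copying it around - a sketch, run from a director (repeat the ssh line for each remote node):

```shell
# The checksums must match on every host
md5sum /etc/csync2.key
ssh root@lvs-director2 md5sum /etc/csync2.key
```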
The plan is to use the directors as the central repository for modifying and replicating configuration files via a web interface, so the files themselves must be present on the directors in the correct places. On BOTH directors, create a folder for the Squid config, then SCP the Squid conf file from the server you configured in part 4 to LVS-Director1 ONLY. The chgrp and chmod commands enable the web interface to modify the file (that is, once we install Apache and add the www-data user to the proxy group); csync2 replicates file ownership and permissions by default.
root@lvs-director1:~# mkdir /etc/squid3
root@lvs-cache1:~# scp /etc/squid3/squid.conf root@lvs-director1:/etc/squid3
root@lvs-director1:~# chgrp proxy /etc/squid3/squid*
root@lvs-director1:~# chmod -R g+w /etc/squid3/squid*
Before testing, make sure the following prerequisites are in place:
- All hosts can resolve the directors' hostnames to the IP the directors will be connecting from, and vice versa. The directors connect to the NLB hosts over their shared CACHE network, so the director hostnames need to resolve to their fixed IPs on that network - this is why we made the entries in /etc/hosts to override DNS. csync2 will reject a connection when the connecting IP does not match the resolved IP.
- All hosts have SSL certificates configured in the correct locations as per the three openssl commands above
- All hosts are synchronised to the correct time. Just make sure the ntp package is installed and that a reachable time source is configured (Ubuntu's servers are configured by default, so port 123 must be allowed outbound on your firewall)
- If you get sync errors, you can run csync2 in verbose mode: csync2 -xvvv
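For reference, the hostname overrides mentioned in the first point might look like this in /etc/hosts on each cache box - note the addresses below are placeholders, not the actual CACHE-network IPs from the earlier parts:

```
# /etc/hosts on an NLB cache box (example addresses only)
10.0.0.1   lvs-director1
10.0.0.2   lvs-director2
```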
Initial csync2 Configuration
Now you are ready to set up an initial configuration file. The package installs a blank /etc/csync2.cfg by default; replace its contents with my basic configuration below and customise it for your environment - explanations are commented inline.
#Initial csync2 configuration
#Replicates Squid, csync2 and ldirectord config
###############################################

#Replicate to NLB Cache Boxes from Director1
#
group cacheconfig-dir1
{
    #Source is director1, slave hosts (only receive updates) are in brackets
    host lvs-director1 (lvs-cache1) (lvs-cache2) (lvs-cache3);

    #Pre-shared key to authenticate each host
    key /etc/csync2.key;

    #Detect changes in the following files/folders
    #In this case we only look at squid and csync2 conf files
    include /etc/squid3/squid.conf;
    include /etc/csync2.cfg;

    #Define an action to perform after syncing changed files
    action
    {
        #If the following file(s) have been changed
        pattern /etc/squid3/squid.conf;
        #Run the following command
        exec "/etc/init.d/squid3 reload";
        #And log to the following file
        logfile "/var/log/csync2_action.log";
    }

    #Backup modified files to the following directory
    backup-directory /var/backups/csync2;
    #Keep 10 revisions of each modified file
    backup-generations 10;

    #Who wins when duplicates are found? The left-most host
    #specified in the "host" parameter above
    auto left;
}

#Replicate to NLB Cache Boxes from Director2
#This is an exact copy of the config above
#
group cacheconfig-dir2
{
    host lvs-director2 (lvs-cache1) (lvs-cache2) (lvs-cache3);
    key /etc/csync2.key;
    include /etc/squid3/squid.conf;
    include /etc/csync2.cfg;
    action
    {
        pattern /etc/squid3/squid.conf;
        exec "/etc/init.d/squid3 reload";
        logfile "/var/log/csync2_action.log";
    }
    backup-directory /var/backups/csync2;
    backup-generations 10;
    auto left;
}

#Replicate between directors
#This is because configuration changes can be made on either director,
#depending on which is the current active HA node.
#
group directorconfig
{
    #Both directors can replicate to each other
    host lvs-director1 lvs-director2;

    #Pre-shared key to authenticate each host
    key /etc/csync2.key;

    #Detect changes in the following files/folders
    #In this case we look at squid, csync2 and ldirectord conf files
    include /etc/squid3/squid.conf;
    include /etc/csync2.cfg;
    include /etc/ldirectord.cf;

    #We don't need an action here because
    # a) squid doesn't run on directors - the conf files exist for modification purposes only
    # b) ldirectord auto-reloads its config

    backup-directory /var/backups/csync2;
    backup-generations 10;

    #Who wins when duplicates are found? The host that initiates the replication.
    #Therefore it is possible for files to be overwritten by older versions.
    #Replication is only initiated on the active node so this won't be a problem.
    auto first;
}
Now SCP the csync2 conf file to all hosts (HA and NLB). Any further changes to the csync2 conf will be replicated automatically, as the conf file itself is monitored for changes.
root@lvs-director1:~# scp /etc/csync2.cfg root@lvs-director2:/etc
root@lvs-director1:~# scp /etc/csync2.cfg root@lvs-cache1:/etc
root@lvs-director1:~# scp /etc/csync2.cfg root@lvs-cache2:/etc
root@lvs-director1:~# scp /etc/csync2.cfg root@lvs-cache3:/etc
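The regular sync mentioned earlier can eventually be scheduled with cron - a sketch only, as the five-minute interval and the cron file location here are my assumptions rather than part of this setup:

```
# /etc/cron.d/csync2 - run a sync every 5 minutes (example interval)
*/5 * * * *   root   /usr/sbin/csync2 -x >/dev/null 2>&1
```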
Test csync2 from LVS-Director1 in verbose mode - hopefully it runs through without errors! Keep an eye on /var/log/squid3/cache.log on the Squid boxes - you will see Squid reload.
root@lvs-director1:~# csync2 -xv
Marking file as dirty: /etc/squid3/squid.conf
Connecting to host lvs-cache1 (SSL) ...
Updating /etc/squid3/squid.conf on lvs-cache1 ...
Connecting to host lvs-cache2 (SSL) ...
Updating /etc/squid3/squid.conf on lvs-cache2 ...
Connecting to host lvs-director2 (SSL) ...
Updating /etc/squid3/squid.conf on lvs-director2 ...
Connecting to host lvs-cache3 (SSL) ...
Updating /etc/squid3/squid.conf on lvs-cache3 ...
Finished with 0 errors.
root@lvs-director1:~#
Next up - installing and configuring SquidGuard!