Over the last few posts we have configured the following:
- NLB cluster
- HA cluster
- Squid configuration
- SquidGuard configuration
- File synchronisation
- Blacklist updates
It is time to introduce some automation to the cluster. Tasks such as updating blacklists and replication are currently performed manually, but if you remember the introduction to Pacemaker, anything that can be scripted can be turned into a cluster resource. This is what we are going to do now, thanks to some nifty scripts I've put together.
Below is a list of scripts I've written to make cluster operation that bit easier:
- directormonitor.sh
This is used by the web admin interface to monitor the cluster's health. It runs ipvsadm and crm_mon every 5 seconds and saves the output to files accessible by the web admin interface. It only does this when it can find the cluster IP on the server it's running on (i.e. the active director) - there is a rough sketch of this logic just after this list.
- blupdate.sh
Downloads, tests and extracts the latest blacklists archive from shallalist.de. Compiles the new lists into databases, resets directory permissions and initiates a sync. Runs once only.
- blupdatewrapper.sh
Runs blupdate.sh every 24 hours, indefinitely.
- replicate.sh
Initiates csync2 at the interval you specify.
- restartsquid.sh
Essentially performs the same task as /etc/init.d/squid reload but runs only if Squid is already running (i.e. it doesn't start the service if it has previously been stopped). It also stops Squid if SquidGuard is found to have launched into emergency mode.
- resetperms.sh
Resets file and folder permissions on SquidGuard's db directory. This is necessary after a blacklist update performed by a user other than proxy.
- clearcache.sh
Clears Squid's cache directory and rebuilds it from scratch. Shuts down and restarts Squid in the process.
- control.sh
This isn't a cluster resource but does enable you to send pre-configured commands to any backend NLB node via the web interface (start/stop services etc). The web interface inserts a clause to only run the script if the hostname of the machine it's running on matches your specification. That way the script can be synced and run on all backend hosts but will only perform any actions on one.
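As promised above, here is a minimal sketch of the sort of logic directormonitor.sh uses. This is not the actual script from the download - the IP detection method is an assumption based on the description above - but it shows the pattern: only write the monitoring files while this node holds the cluster IP.

#!/bin/bash
# Sketch only - the real directormonitor.sh in control.zip is authoritative.
CLUSTERIP="192.168.0.110/24"                     # shared cluster IP (with CIDR notation)
SQUIDGUARDLOGDIR="/usr/local/squidGuard/log"     # where the output files are written

while true; do
    # Only act if this node currently holds the cluster IP (i.e. it is the active director)
    if ip addr show | grep -q "$CLUSTERIP"; then
        ipvsadm -L -n > "$SQUIDGUARDLOGDIR/ipvsadm.txt"
        crm_mon -1    > "$SQUIDGUARDLOGDIR/crm_mon.txt"
    fi
    sleep 5
done

The other scripts follow the same philosophy: small, single-purpose shell scripts that Pacemaker can supervise via the ocf:heartbeat:anything agent (as we will configure below).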
Script Installation and Configuration
Download the scripts package to /usr/local/squidGuard on LVS-Director1. Extract the archive to create a directory "control", then set permissions on the folder:
root@lvs-director1:/usr/local/squidGuard# wget http://www.zerosignal.co.uk/wp-content/uploads/2011/08/control.zip
root@lvs-director1:/usr/local/squidGuard# unzip control.zip
root@lvs-director1:/usr/local/squidGuard# chown -R proxy:proxy ./control
root@lvs-director1:/usr/local/squidGuard# chmod -R g+w ./control
There is some slight configuration required in some of the scripts - modify the following to fit your environment. Most of the path settings are fine if you've followed these guides, but you will almost certainly want to change the IP address in directormonitor.sh:
directormonitor.sh
CLUSTERIP="192.168.0.110/24" (the shared Cluster IP - don't forget the CIDR notation!)
SQUIDGUARDLOGDIR="/usr/local/squidGuard/log" (SquidGuard's log directory)
blupdate.sh
squidguardbin="/usr/local/bin/squidGuard" (the SquidGuard binary/executable)
squidguarddb="/usr/local/squidGuard/db" (SquidGuard's db directory)
squidguardlog="/usr/local/squidGuard/log/squidGuard.log" (SquidGuard's log file)
squiduser="proxy" (The user Squid/SquidGuard runs as)
squidgroup="proxy" (The group that squiduser belongs to)
backuppath="/var/backups/sgdb" (Backup path for modifications made in web interface - more on this later)
downloadurl="http://www.shallalist.de/Downloads" (Blacklist download URL, excluding filename)
downloadfile="shallalist.tar.gz" (Blacklist filename)
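To make the role of these variables concrete, here is a heavily stripped-down sketch of what blupdate.sh does with them. The real script in control.zip adds proper error handling and backups, and the extraction step here assumes the shallalist archive's BL/ layout, so treat this purely as an illustration:

#!/bin/bash
# Illustrative sketch only - the real blupdate.sh in control.zip is authoritative.
squidguardbin="/usr/local/bin/squidGuard"
squidguarddb="/usr/local/squidGuard/db"
squiduser="proxy"
squidgroup="proxy"
downloadurl="http://www.shallalist.de/Downloads"
downloadfile="shallalist.tar.gz"

cd /tmp || exit 1
wget -q "$downloadurl/$downloadfile" || exit 1    # download the archive
tar -tzf "$downloadfile" > /dev/null || exit 1    # test the archive before touching the live db
tar -xzf "$downloadfile"                          # the shallalist archive unpacks into a BL/ directory
cp -R BL/* "$squidguarddb"/                       # drop the new lists into SquidGuard's db directory
"$squidguardbin" -C all                           # compile the text lists into .db databases
chown -R "$squiduser":"$squidgroup" "$squidguarddb"   # compiled as root, so reset ownership afterwards
csync2 -xv                                        # replicate the new databases to the other nodes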
blupdatewrapper.sh
RUNHOUR=23 (The hour of the day during which to download blacklists)
BLUPDATE="/usr/local/squidGuard/control/blupdate.sh" (Path to blupdate.sh)
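A possible shape for blupdatewrapper.sh, purely as a sketch (the sleep values here are my assumptions): it waits for the configured hour and then calls the one-shot updater once a day:

#!/bin/bash
# Sketch only - shows the general idea, not the actual script.
RUNHOUR=23
BLUPDATE="/usr/local/squidGuard/control/blupdate.sh"

while true; do
    if [ "$(date +%H)" -eq "$RUNHOUR" ]; then
        /bin/bash "$BLUPDATE"    # run the one-shot updater
        sleep 86400              # then wait roughly 24 hours before checking again
    else
        sleep 600                # not the right hour yet - check again in 10 minutes
    fi
done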
replicate.sh
logpath="/usr/local/squidGuard/log/csync2.txt" (Path to csync2 log)
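replicate.sh is the simplest of the set - conceptually it is just a loop like the one below (the interval value is an assumption; the real script lets you specify it):

#!/bin/bash
# Sketch only - loop csync2 at a fixed interval and log the output.
logpath="/usr/local/squidGuard/log/csync2.txt"
interval=60    # seconds between syncs - assumed value

while true; do
    csync2 -xv >> "$logpath" 2>&1
    sleep "$interval"
done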
restartsquid.sh
SQUIDGUARDLOG="/usr/local/squidGuard/log/squidGuard.log" (SquidGuard's log file)
resetperms.sh
SQUIDUSER="proxy" (The user Squid/SquidGuard runs as)
SQUIDGROUP="proxy" (The group that SQUIDUSER belongs to)
SQUIDGUARDDB="/usr/local/squidGuard/db" (SquidGuard's db directory)
clearcache.sh
SQUIDUSER="proxy" (The user Squid/SquidGuard runs as)
SQUIDGROUP="proxy" (The group that SQUIDUSER belongs to)
SQUIDSPOOL="/var/spool/squid3" (Squid's spool/cache directory)
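Again purely for illustration, clearcache.sh amounts to something like the following. The init script name (squid vs squid3) depends on how Squid was installed, so check the real script rather than copying this sketch:

#!/bin/bash
# Illustrative sketch only - the real clearcache.sh in control.zip is authoritative.
SQUIDUSER="proxy"
SQUIDGROUP="proxy"
SQUIDSPOOL="/var/spool/squid3"

/etc/init.d/squid3 stop                              # Squid must be down before its cache is wiped
rm -rf "${SQUIDSPOOL:?}"/*                           # clear out the cache directory
chown -R "$SQUIDUSER":"$SQUIDGROUP" "$SQUIDSPOOL"    # make sure Squid still owns its spool
squid3 -z                                            # rebuild the cache (swap) directories from scratch
/etc/init.d/squid3 start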
control.sh
No editing required as this is an automatically generated script
Configure Additional Cluster Resources
Now that the scripts are in place we can create new cluster resources, as some of the scripts must run in the background as services on the active director. Run the following commands on LVS-Director1. If you've changed any paths (which I don't recommend), make sure you change them here as well:
crm configure primitive DirectorMonitor ocf:heartbeat:anything params binfile="/bin/bash" cmdline_options="/usr/local/squidGuard/control/directormonitor.sh" meta target-role="Started"
crm configure primitive BlacklistUpdater ocf:heartbeat:anything params binfile="/bin/bash" cmdline_options="/usr/local/squidGuard/control/blupdatewrapper.sh" meta target-role="Started"
crm configure primitive Replicator ocf:heartbeat:anything params binfile="/bin/bash" cmdline_options="/usr/local/squidGuard/control/replicate.sh" meta target-role="Started"
Now the resources are configured, add them to the resource group so they will always run on the same node as the others. This cannot be done with a single command, so you have two options: delete the group and redefine it to include the new resources, or run the following commands to edit the group directly and append the new resources:
crm options editor nano
crm configure edit ClusterResourceGroup
Then append DirectorMonitor BlacklistUpdater Replicator to the end of the line. Save and exit nano, then type commit and exit.
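If your resource group is called DirectorServices (as it is in the crm_mon output further down), the edited definition should end up looking something like this once the three new resources have been appended - adjust the group and resource names to whatever you used when you first created the group:

group DirectorServices LAN-Cluster-IP CACHE-Cluster-IP ldirectord DirectorMonitor BlacklistUpdater Replicator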
If you run crm_mon -1 you should see the scripts running:
============
Last updated: Wed Aug 10 02:29:21 2011
Stack: openais
Current DC: lvs-director2 - partition with quorum
Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
2 Nodes configured, 2 expected votes
1 Resources configured.
============

Online: [ lvs-director1 lvs-director2 ]

 Resource Group: DirectorServices
     LAN-Cluster-IP     (ocf::heartbeat:IPaddr2):       Started lvs-director1
     CACHE-Cluster-IP   (ocf::heartbeat:IPaddr2):       Started lvs-director1
     ldirectord         (ocf::heartbeat:ldirectord):    Started lvs-director1
     DirectorMonitor    (ocf::heartbeat:anything):      Started lvs-director1
     BlacklistUpdater   (ocf::heartbeat:anything):      Started lvs-director1
     Replicator         (ocf::heartbeat:anything):      Started lvs-director1
Configure SquidGuard Replication - LVS-Director1 ONLY
csync2
We now need to go back to the csync2 configuration and enable monitoring of SquidGuard configuration, blacklist databases and control scripts. Download the updated config file to /etc/csync2.cfg or make the following changes manually. Three include lines need to be added to all three replication groups:
include /usr/local/squidGuard/db;
include /usr/local/squidGuard/squidGuard.conf;
include /usr/local/squidGuard/control;
Add two new pattern lines and modify the exec line in the action configuration in cacheconfig-dir1 and cacheconfig-dir2 (to restart Squid when SquidGuard configuration changes are detected). The exec line is changed to my script instead of the init.d one so it will only attempt to restart Squid if it is already running:
pattern /usr/local/squidGuard/db/*;
pattern /usr/local/squidGuard/squidGuard.conf;
exec "/bin/sh /usr/local/squidGuard/control/restartsquid.sh";
Create a new action configuration in cacheconfig-dir1 and cacheconfig-dir2 (to run control.sh when it is modified by the web interface):
action
{
    #Commands are sent to a host from the web interface via this file
    #so it must be replicated and executed immediately.
    #
    pattern /usr/local/squidGuard/control/control.sh;
    exec "/bin/sh /usr/local/squidGuard/control/control.sh";
    logfile "/var/log/csync2_action.log";
}
And finally create a new action configuration in directorconfig (to run resetperms.sh when it is updated by the web interface):
action
{
    #The command to reset SquidGuard DB permissions is initially run
    #on the directors only and then replicated to backend servers.
    #
    pattern /usr/local/squidGuard/control/resetperms.sh;
    exec "/bin/sh /usr/local/squidGuard/control/resetperms.sh";
    logfile "/var/log/csync2_action.log";
    do-local;
}
File/Folder Permissions
As has been the theme most of the way through the SquidGuard configuration, having the correct permissions is paramount! Run these commands to set the permissions on squidGuard.conf and the squidGuard log directory (the web admin interface touches a couple of these files to trigger events).
root@lvs-director1:~# chown -R proxy:proxy /usr/local/squidGuard/log
root@lvs-director1:~# chmod -R g+w /usr/local/squidGuard/log
root@lvs-director1:~# chown proxy:proxy /usr/local/squidGuard/squidGuard.conf
root@lvs-director1:~# chmod g+w /usr/local/squidGuard/squidGuard.conf
Run a sync after updating or downloading the new file. You will need to run it a few times, as the new csync2.cfg needs to be synced to the cluster nodes before any newly defined files will sync (so don't worry if you get a long string of "permission denied" errors!). Once syncing completes with no errors, you should see SquidGuard processes appearing on any NLB nodes where Squid configuration changes have been detected and Squid has in turn been restarted:
root@lvs-director1:~# csync2 -xvt

root@lvs-cache1:~# ps -A | grep squid
13481 ?        00:00:00 squid3
13484 ?        00:02:10 squid3
14694 ?        00:00:02 squidGuard
14698 ?        00:00:00 squidGuard
14699 ?        00:00:00 squidGuard
14701 ?        00:00:00 squidGuard
14703 ?        00:00:00 squidGuard
You should also find the following files being written to the log directory on the active director (the first three are produced by the scripts, the last by SquidGuard itself):
/usr/local/squidGuard/log/csync2.txt
/usr/local/squidGuard/log/ipvsadm.txt
/usr/local/squidGuard/log/crm_mon.txt
/usr/local/squidGuard/log/squidGuard.log
In addition, deliberately disconnect or shut down the active director and ensure that the secondary director fails over and runs all scripts and services as expected.
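If pulling cables feels too brutal, a standard Pacemaker standby/online cycle gives you a cleaner (and reversible) failover test - this is ordinary crm shell usage, nothing specific to these scripts:

root@lvs-director1:~# crm node standby lvs-director1
root@lvs-director1:~# crm node online lvs-director1

Run the first command, watch crm_mon until every resource is reported as Started on lvs-director2 (and check the monitoring files, blacklist updates and replication there), then run the second command to bring LVS-Director1 back into the cluster.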
The final step in this part is installing the web admin interface.
Web Admin Installation
Initial Installation
We need a way to administer the Squid and SquidGuard configuration, black/whitelists, Squid and SquidGuard ACLs etc. I have written a PHP-based interface to be run on the directors. This involves installing Apache, PHP and the PHP script itself. The next post will discuss administering the cluster with it, but for now we are just going to install and secure it.
Start out by installing Apache and PHP onto both directors:
root@lvs-director1:~# apt-get --yes install apache2 apache2-utils php5 ssl-cert
The default web root folder is /var/www. This is fine and doesn't need to be modified. On LVS-Director1 ONLY, change to this folder then download and extract the web admin and block page packages.
root@lvs-director1:/var/www# wget http://www.zerosignal.co.uk/wp-content/uploads/2011/08/admin.zip
root@lvs-director1:/var/www# wget http://www.zerosignal.co.uk/wp-content/uploads/2011/08/blocked.zip
root@lvs-director1:/var/www# unzip admin.zip ; rm admin.zip
root@lvs-director1:/var/www# unzip blocked.zip ; rm blocked.zip
On both directors, create a directory to store backups of modified files in. It must be writeable by the Apache www-data user:
root@lvs-director1:~# mkdir -p /var/backups/sgadmin
root@lvs-director1:~# chown www-data:www-data /var/backups/sgadmin
We also want the web interface to be able to modify files belonging to Squid's proxy user, so add www-data to the proxy group (this was the reason for giving Squid files group-write access). Again, do this on both directors:
root@lvs-director1:~# usermod -a -G proxy www-data
Web Admin Protection - htaccess & SSL
Before making changes to Apache's config, you should first configure htaccess (basic authentication) and SSL to protect the admin interface from unauthorised users. Run the following command in /var/www/admin on LVS-Director1:
root@lvs-director1:/var/www/admin# htpasswd -c ./htpasswd admin
New password: [hidden]
Re-type new password: [hidden]
Adding password for user admin
Now review the file /var/www/admin/.htaccess and make any modifications if required (default configuration is fine):
AuthType Basic
AuthName "Password Required"
AuthUserFile /var/www/admin/htpasswd
Require user admin
On LVS-Director1, generate a self-signed SSL certificate for Apache to use and SCP it to LVS-Director2:
root@lvs-director1:~# make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /etc/ssl/certs/ssl-cert-cluster.pem
root@lvs-director1:~# scp /etc/ssl/certs/ssl-cert-cluster.pem root@lvs-director2:/etc/ssl/certs
Web Admin Configuration
Customise the file /var/www/admin/conf.php. The various options are detailed inline. Again, most path variables work as their defaults if you have followed these guides closely.
Web Admin Replication
Add the following lines to the directorconfig group of /etc/csync2.cfg on LVS-Director1, and then run a sync:
include /var/www;
include /var/backups/sgadmin;

root@lvs-director1:~# csync2 -xvt
Apache Configuration Changes
- On LVS-Director1, add the following to /etc/apache2/sites-available/default and /etc/apache2/sites-available/default-ssl:
<Directory /var/www/admin>
        AllowOverride AuthConfig
</Directory>
- Add the DirectoryIndex directive to <Directory /var/www/>:
DirectoryIndex index.php index.html
- Change SSLCertificateFile and comment out SSLCertificateKeyFile in /etc/apache2/sites-available/default-ssl ONLY:
SSLCertificateFile /etc/ssl/certs/ssl-cert-cluster.pem
#SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
SCP the configuration to LVS-Director2, then enable Apache's SSL module and restart Apache on both directors:
root@lvs-director1:~# scp /etc/apache2/sites-available/default* root@lvs-director2:/etc/apache2/sites-available
root@lvs-director1:~# a2ensite default-ssl
root@lvs-director1:~# a2enmod ssl
root@lvs-director1:~# /etc/init.d/apache2 restart
Testing
Once you have confirmed Apache is up and running, check that the block page and admin interface are accessible simply by trying both URLs in a browser:
http://192.168.0.110/blocked/blocked.php (no info is displayed as this URL has parameters passed to it by SquidGuard)
https://192.168.0.110/admin (after authenticating with the username/password you specified during htaccess setup, the admin interface should be displayed)
Either exclude local addresses in your browser configuration (I recommend you do this anyway) or access Apache via the CACHE Cluster IP (10.2.0.10) - that way you will always connect to the active director.
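If you would rather check from the command line first, curl works for a quick sanity test - the -k flag is needed because the certificate is self-signed, and -u makes curl prompt for the htaccess password:

root@lvs-director1:~# curl -k -I https://192.168.0.110/admin/
root@lvs-director1:~# curl -k -u admin https://192.168.0.110/admin/

The first request should be refused with a 401 because no credentials are supplied; the second should return the admin page once you enter the password you set earlier.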
That concludes this part - the next part details how to administer the cluster, Squid and SquidGuard with the web admin interface.