Thursday, Jan 28, 2016
Windows Server 2012 Hyper-V Failover Clustering - Part 4: PostgreSQL
The previous post touched on many important topics, including the installation of the Failover Clustering feature, Cluster Shared Volumes, and the File Server and Virtual Machine roles, so we are almost done with building our private cloud infrastructure. I will not go into details regarding the installation of the application server - it is a standard Windows VM running an ASP.NET application. What’s more interesting is how to set up a fault-tolerant Linux VM running PostgreSQL on top of a Windows-based hypervisor.
Installing Linux
In our example, we will be using a virtual machine running Ubuntu Server 14.04 LTS.
Here are step-by-step instructions for installing an Ubuntu VM on Hyper-V:
- Create a new Generation 2 VM in Hyper-V Manager: set 4 CPU cores, 8192 MB RAM, and a 40 GB primary disk; place the primary disk on C:\ClusterStorage\Volume1 (that’s the SystemStorage CSV from our previous post); select the Main teamed switch for the NIC. Having VM disks stored on a CSV ensures that our VM can seamlessly fail over to another node.
- Go to Firmware and deselect Enable Secure Boot.
- Go to the Network Adapter settings for the Main teamed switch and enter VLAN ID 232. You may want to set its MAC address to a static value to avoid eth interface name changes when the machine is migrated.
- Add another network adapter on the backup vswitch. Uncheck Protected network on both NICs under Advanced Features.
- Create a new 140 GB disk BaasicDatabases.vhdx on C:\ClusterStorage\Volume2 (that’s the DatabaseStorage CSV from our previous post).
- Install with default options.
- Set network parameters. DB01 has static IP 10.80.35.145, netmask 255.255.255.224, default GW 10.80.35.129.
- Choose OpenSSH server from the software packages.
- When asked for disk partitioning, pick Guided partitioning and set up LVM. Set up LVM on Disk 1 (the smaller one), create a single partition on Disk 2, and choose XFS as the file system - this can be done before continuing to write the partition table to the disk by choosing the partition’s file system.
- After logging in for the first time, you might want to install a desktop environment. It is not a mandatory step and is not used in production environments, but it can help users with less Linux experience:
sudo apt-get install lubuntu-desktop
- Set up networking in /etc/network/interfaces. Add eth1 if it is not there, along with a static IP, netmask, and a route if needed (a sketch of the resulting file follows these steps):
post-up route add -net 10.80.0.0/16 gw 10.80.35.225 dev eth1
Note that 14.04 requires the line below to reload the new settings. After that, use ifconfig or route -n to check the settings.
service network-manager restart
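For reference, here is a minimal sketch of what /etc/network/interfaces might look like after this step, assuming the interface names eth0/eth1 and the addresses used above; the eth1 address is a placeholder on the backup network:
# /etc/network/interfaces - minimal sketch, addresses as used above
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.80.35.145
    netmask 255.255.255.224
    gateway 10.80.35.129

auto eth1
iface eth1 inet static
    address 10.80.35.230        # placeholder address on the backup network
    netmask 255.255.255.224
    post-up route add -net 10.80.0.0/16 gw 10.80.35.225 dev eth1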
Setting up LVM
We will now set up LVM (Logical Volume Management) on the database disk using the steps from this excellent tutorial. The Logical Volume Manager adds a layer of abstraction between your operating system and the disks/partitions it uses. Because volume groups and logical volumes aren’t physically tied to a hard drive, it is easy to resize them dynamically and create new ones. As we chose to set up LVM on the “system” disk during installation, we will not have to touch /dev/sda3.
fdisk -l
fdisk /dev/sdb
- Choose p to display the current partition table and delete partition 1 if present.
- Press n to create a new partition, p to make it a primary partition, and 1 to make it the first partition on the disk.
- Press Enter twice to accept the default first and last cylinders.
- To prepare the partition for use by LVM, use the following two commands: t to change the partition type, then 8e to set the LVM partition type.
- Press p to review the partition setup before writing changes to disk, then w to write the changes to disk.
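If you prefer a non-interactive approach, the same partitioning can be scripted with parted; a sketch, assuming /dev/sdb is the empty database disk (mklabel destroys any existing partition table):
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 0% 100%
parted -s /dev/sdb set 1 lvm on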
pvcreate /dev/sdb1
to create an LVM physical volume on the partition we just created
vgcreate vg-databases /dev/sdb1
to create a volume group
lvcreate -L 140G -n databases vg-databases
to create a 140 GB logical volume called databases
mkfs -t xfs /dev/vg-databases/databases
to create an XFS file system
mkdir /db
to create the mount point, then
mount -t xfs /dev/vg-databases/databases /db
to mount the volume under /db
- edit /etc/fstab, remove the physical mount for /db, and replace it with
/dev/mapper/vg--databases-databases /db xfs defaults 0 2
To later extend the logical volume, use
lvextend -L+50G /dev/vg-databases/databases
xfs_growfs /dev/vg-databases/databases
The first command extends the logical volume by 50 GB, while the second grows the XFS file system so it can use the new space.
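To confirm the resize took effect, a quick read-only sanity check:
lvs vg-databases
df -h /db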
Mount SMB network disk for backup
Use the steps from this tutorial to permanently mount a Windows share:
- sudo apt-get install cifs-utils
- mkdir /backup
- edit /etc/fstab and add a line
//10.80.6.85/DbBackup /backup cifs dir_mode=0777,file_mode=0777,username=myusername,password=mypassword,iocharset=utf8,sec=ntlm 0 0
- mount -a
- to unmount, use umount /backup
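To keep the password out of /etc/fstab, mount.cifs also accepts a credentials file; a minimal sketch, where the path and values are placeholders:
# /root/.smbcredentials (chmod 600)
username=myusername
password=mypassword
The fstab line then references the file instead of the inline username and password options:
//10.80.6.85/DbBackup /backup cifs credentials=/root/.smbcredentials,dir_mode=0777,file_mode=0777,iocharset=utf8,sec=ntlm 0 0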
Installing PostgreSQL
- Create the file /etc/apt/sources.list.d/pgdg.list, and add a line for the repository
deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main
- Import the repository signing key, and update the package lists
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
- Install PostgreSQL
apt-get install postgresql-9.4 postgresql-contrib
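To confirm the server came up, you can list the installed clusters; pg_lsclusters ships with Ubuntu’s postgresql-common package:
pg_lsclusters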
- Change the location of the default data folder.
sudo su postgres
psql -d postgres -U postgres
select version();
SHOW data_directory;
chown -R postgres:postgres /db/data
/usr/lib/postgresql/9.4/bin/initdb -D /db/data
- edit /etc/postgresql/9.4/main/postgresql.conf and set data_directory = '/db/data'. Optionally (in case of errors) set ssl=false
- edit /db/data/postgresql.conf and set data_directory = '/db/data'.
sudo /etc/init.d/postgresql restart
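After the restart, it is worth checking that the server is really using the new location; this repeats the SHOW query from above, this time from the shell:
sudo -u postgres psql -c "SHOW data_directory;"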
- Install the JavaScript PLV8 extension
sudo apt-get install postgresql-9.4-plv8
Configuration tasks
- Configure pg_hba.conf to allow connections from the local network:
host all all 10.80.35.128/27 md5
- Make sure that Postgres listens on all interfaces by uncommenting and changing the following line in postgresql.conf:
listen_addresses = '*'
- Note that both conf files are present in both /db/data and /etc/postgresql/9.4/main
- Use pgTune. In addition to its recommendations, set
max_prepared_transactions = max_connections * 1.5
(an illustrative tuned settings block follows this list)
- Change the password for the postgres user
sudo -u postgres psql postgres
\password postgres
(enter the new password when prompted)
- Create the CITEXT extension
CREATE EXTENSION IF NOT EXISTS citext WITH SCHEMA public;
- Create the PLV8 extension
CREATE EXTENSION IF NOT EXISTS plv8;
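For illustration only, here is the kind of block pgTune produces for a machine like ours (8 GB RAM); the values below are hypothetical examples rather than recommendations, and max_prepared_transactions follows the rule above assuming max_connections = 100:
# illustrative postgresql.conf tuning block (hypothetical values)
max_connections = 100
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 16MB
maintenance_work_mem = 512MB
checkpoint_completion_target = 0.9
max_prepared_transactions = 150        # max_connections * 1.5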
Backup automation
- Install the packages required to send mail notifications:
sudo apt-get install ssmtp
sudo apt-get install mailutils
- Edit the file /etc/ssmtp/ssmtp.conf and add the following content (replace the addresses, usernames and passwords with real values):
#
# Config file for sSMTP sendmail
#
# The person who gets all mail for userids < 1000
# Make this empty to disable rewriting.
root=fromaddress@somedomain.com
AuthUser=senderaddress@somedomain.com
AuthPass=mypassword
# The place where the mail goes. The actual machine name is required no
# MX records are consulted. Commonly mailhosts are named mail.domain.com
mailhub=mysmtpserver.somedomain.com
# Where will the mail seem to come from?
rewriteDomain=mydomain.com
# The full hostname
#hostname=db01
# Are users allowed to set their own From: address?
# YES - Allow the user to specify their own From: address
# NO - Use the system generated From: address
FromLineOverride=YES
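To verify the mail setup before wiring it into the backup script, send a quick test message (the address is a placeholder):
echo "ssmtp test from db01" | mail -s "ssmtp test" someaddress@somedomain.com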
- To produce backups, modify the script from this blog post as follows and put it in the file /home/username/db_backup, set its owner and group with chown root:postgres db_backup, and make it executable with chmod 0774.
#!/usr/bin/env bash
set -e
top_level_backup_dir=/backup
# Create a directory named after today's date for this backup run
cd "$top_level_backup_dir"
backup_dir=$( date +%Y-%m-%d )
mkdir -p "$backup_dir"
cd "$backup_dir"
# Make actual backup files: roles first, then every connectable database (largest first, two dumps in parallel)
pg_dumpall -r > roles.dump
psql -qAtX -c "select datname from pg_database where datallowconn order by pg_database_size(oid) desc" | \
xargs -r -d'\n' -I{} -P2 pg_dump -Fc -f pg-{}.dump {}
dbstr=$(psql -qAtX -c "select datname from pg_database where datallowconn order by pg_database_size(oid) desc")
# Retention policy: delete dump files older than 6 days and remove any directories left empty
cd "$top_level_backup_dir"
find . -mindepth 2 -maxdepth 2 -type f -regex '\./[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/.*dump' -mtime +6 -delete
find . -mindepth 1 -maxdepth 1 -type d -regex '\./[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]' -empty -delete
# Mail a report listing the databases that were backed up
echo "Backed up the following databases: $dbstr" | mail -s "Baasic database backup report" someaddress@somedomain.com
- Optionally, set up a .pgpass file in the /home/username folder (format sketched below).
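The .pgpass format is host:port:database:user:password, one entry per line, and the file must be readable only by its owner; a sketch with placeholder values:
# ~/.pgpass (chmod 600)
localhost:5432:*:postgres:mypassword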
- Set up a cron job by putting a pgbackup file in the /etc/cron.d folder with the following contents. This will run the backup task each day at 7:30:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
30 7 * * * postgres /home/username/db_backup
- To restore a dump file, use the following, where mydb is the name of the database (with -C, pg_restore connects to the database given with -d only to issue CREATE DATABASE, then restores into the database named in the dump). More info.
dropdb mydb
pg_restore -C -d postgres mydb.dump
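The roles.dump file produced by pg_dumpall -r is plain SQL, so if the roles need to be restored as well it can simply be replayed with psql:
psql -U postgres -f roles.dump postgres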
- To check disk usage, use df -h.
That’s it! Just add the virtual machine we’ve set up to the failover roles in the Failover Cluster Manager, as described in the previous post, and you are ready to go.