Systemd and the mount of pain

I wish I didn't have to complain about systemd.

I wish it worked from a system administrator's perspective.

I wish it weren't so convoluted.

I wish I didn't have to wish...

The problem


I've been looking at HA solutions for remote shared filesystems across VMs. The specific case in point is a Django webservice that we've needed to split across VMs. It writes to a "local" filesystem and then hands off to Nginx/Apache to serve the specific file from its filesystem. The Nginx/Apache instances have been split across VMs for HA reasons too, so I need an NFS/CIFS/etc. filesystem to share the files, spread across datacentres inside VMs (less than 100GB of images and thumbnails, though it's growing).
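Tying back to the title: whichever shared filesystem wins, each consuming VM ends up mounting it, these days typically via a systemd mount unit. A minimal sketch, assuming NFS; the server name, export and mount point below are made up for illustration:

[pre class="prettyprint"]
# /etc/systemd/system/srv-media.mount -- illustrative only;
# the unit file name must match the mount point (/srv/media)
[Unit]
Description=Shared media for the Django/Nginx VMs
After=network-online.target
Wants=network-online.target

[Mount]
# nfs-vip.example.net and /export/media are hypothetical
What=nfs-vip.example.net:/export/media
Where=/srv/media
Type=nfs
Options=vers=3,soft,timeo=30

[Install]
WantedBy=multi-user.target
[/pre]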

The first (failing) solution


The solution, after researching various options like Ceph, DRBD etc., was (initially) settled on NAS4Free, using FreeBSD's HAST for a shared/replicated block device and CARP to fail over the service virtual IP (VIP). The system was installed, though a bit finicky, and was "working". However, it has a gory split-brain problem since it is split across datacentres, and it also has the problem that HAST is *two nodes only*! I.e. I can't set up a quorum disk, and right there the fun started: the two datacentres experienced some latency, CARP kicked in, and the slave became master... and then we had split brain. HAST is best for two nodes physically next to each other with STONITH options, NOT for my case of remote VMs ;(
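For context, HAST resources are defined in /etc/hast.conf, and the two-node limit is baked into the format: each resource knows exactly one local and one remote side. A minimal sketch; the hostnames, devices and addresses are made up:

[pre class="prettyprint"]
# /etc/hast.conf -- illustrative only; one local, one remote,
# and nowhere to declare a third node or a quorum device
resource shared {
        on nas-dc1 {
                local /dev/da1
                remote 10.0.1.2
        }
        on nas-dc2 {
                local /dev/da1
                remote 10.0.1.1
        }
}
[/pre]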

The reason I chose it was my preference for ZFS: it was "easy" to put ZFS on top of the HAST device, and I got it going quickly, as we were under a bit of schedule over-run pressure.
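The "easy" part looks roughly like this on whichever node HAST has promoted to primary (the resource name matches the sketch above; the pool name is made up):

[pre class="prettyprint"]
# On the current primary only: promote the resource, then
# put ZFS on the replicated device (pool name illustrative)
hastctl role primary shared
zpool create tank /dev/hast/shared
zfs create tank/media
[/pre]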

Let's relook and re-investigate

I've reconsidered the options, and the main ones can be classified as follows:

Shared block device cluster filesystems

This type of cluster/shared storage shares a common device, with locking features to prevent conflicting simultaneous access. The assumption here is that both servers write to the same disk/block device. Not quite useful for the distributed VM case, unless you layer it on a distributed block device (like DRBD).
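A minimal sketch of the idea, assuming OCFS2 as the cluster filesystem (the device, label and mount point are made up, and the o2cb cluster stack is assumed to be configured already):

[pre class="prettyprint"]
# Illustrative only: both nodes mount the *same* shared disk,
# and the cluster filesystem arbitrates the locking
mkfs.ocfs2 -L shared /dev/sdb1        # once, on one node
mount -t ocfs2 /dev/sdb1 /srv/media   # on every node
[/pre]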

Distributed block devices

Here the "disk" is distributed, and the writing to the block is typically in a active/passive setup where only one server have access to write to the disk, and you'll have to fail over to the slave and start/mount the needed services once failure is needed. This way you can use any filesystem on top of it. This is the HAST solution, and the DRBD in "normal" setup. The problem with is a single server is active on it, and could become a bottle neck, and service failover needs to be handled.

To have multiple servers active/active on top of this, you will need a cluster-aware filesystem as above. HAST doesn't do active/active, which leaves DRBD (in dual-primary mode) and Ceph.
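For what it's worth, the DRBD side of that is a single knob, though it is only safe with a cluster-aware filesystem on top; a sketch extending the hypothetical r0 resource above:

[pre class="prettyprint"]
# Illustrative only: allow both nodes of r0 to be primary
# at once -- requires a cluster filesystem (GFS2/OCFS2) on top
resource r0 {
    net {
        allow-two-primaries yes;
    }
    # ... the per-node "on" sections stay as before ...
}
[/pre]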

Mac OS X settings to not forget

[pre class="prettyprint"]
# Disable Gatekeeper's "identified developers only" restriction
# (undo later with: sudo spctl --master-enable)
sudo spctl --master-disable
[/pre]