Using GlusterFS with a Docker swarm cluster

In this blog post I will create a 3-node Docker swarm cluster and use GlusterFS to share volume storage across the swarm nodes.


Docker swarm turns a group of Docker hosts into a cluster that containers run on. The problem at hand is that if container “A” runs on “node1” with a named volume “voldata”, all data changes applied to “voldata” are saved locally on “node1”. If container A is shut down and happens to start again on a different node, say “node3”, the named volume “voldata” mounted there will be empty and will not contain the changes made while it was mounted on “node1”.
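A quick way to convince yourself of this is to inspect a named volume: its mount point sits under the local host’s Docker root, so the data never leaves that machine (the volume name here is just for illustration):

$ sudo docker volume create --name voldata
$ sudo docker volume inspect --format '{{ .Mountpoint }}' voldata
/var/lib/docker/volumes/voldata/_data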

In this example I will not use a named volume; instead I will use mount storage shared among the cluster nodes. The same approach can of course be applied to share the storage behind a named volume’s folder.

For this exercise I’m using three EC2 instances on AWS, each with one attached EBS volume.

How to get around this?

One way to solve this is to use GlusterFS to replicate volumes across the swarm nodes and make the data available to all nodes at any time. Named volumes will still be local to each Docker host, since GlusterFS takes care of the replication behind the scenes.

Preparation on each server

I will use Ubuntu 16.04 for this exercise.

First we add friendly names to /etc/hosts:

XX.XX.XX.XX    node1
XX.XX.XX.XX    node2
XX.XX.XX.XX    node3

Then we update the system:

$ sudo apt update
$ sudo apt upgrade

Finally we reboot the servers. Then install the necessary packages on all nodes:

$ sudo apt install -y docker.io
$ sudo apt install -y glusterfs-server

Then start the services:

$ sudo systemctl start glusterfs-server
$ sudo systemctl start docker

Create the directories for the GlusterFS brick storage and the shared volume mount point:

$ sudo mkdir -p /gluster/data /swarm/volumes

GlusterFS setup

First we prepare a filesystem for the Gluster brick storage on all nodes:

$ sudo mkfs.xfs /dev/xvdb 
$ sudo mount /dev/xvdb /gluster/data/
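To make the brick mount survive reboots, it can also be added to /etc/fstab; a minimal entry, assuming the device name stays /dev/xvdb:

/dev/xvdb    /gluster/data    xfs    defaults    0 0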

From node1:

$ sudo gluster peer probe node2
peer probe: success. 
$ sudo gluster peer probe node3
peer probe: success.
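Before creating the volume, it’s worth confirming that both peers are listed as connected:

$ sudo gluster peer status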

Create the volume as a mirror:

$ sudo gluster volume create swarm-vols replica 3 node1:/gluster/data node2:/gluster/data node3:/gluster/data force
volume create: swarm-vols: success: please start the volume to access data

Allow mount connections only from localhost:

$ sudo gluster volume set swarm-vols auth.allow 127.0.0.1
volume set: success

Then start the volume:

$ sudo gluster volume start swarm-vols
volume start: swarm-vols: success
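The volume’s configuration and brick list can be verified at any time with:

$ sudo gluster volume info swarm-vols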

Then on each Gluster node we mount the replicated GlusterFS volume locally:

$ sudo mount.glusterfs localhost:/swarm-vols /swarm/volumes
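This mount can likewise be made persistent in /etc/fstab; a sketch, where _netdev keeps the mount from being attempted before networking is up:

localhost:/swarm-vols    /swarm/volumes    glusterfs    defaults,_netdev    0 0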

Docker swarm setup

Here I will create 1 manager node and 2 worker nodes.

$ sudo docker swarm init
Swarm initialized: current node (82f5ud4z97q7q74bz9ycwclnd) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-697xeeiei6wsnsr29ult7num899o5febad143ellqx7mt8avwn-1m7wlh59vunohq45x3g075r2h \
    172.31.24.234:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Get the token for worker nodes:

$ sudo docker swarm join-token worker
To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-697xeeiei6wsnsr29ult7num899o5febad143ellqx7mt8avwn-1m7wlh59vunohq45x3g075r2h \
    172.31.24.234:2377

Then on both worker nodes:

$ sudo docker swarm join --token SWMTKN-1-697xeeiei6wsnsr29ult7num899o5febad143ellqx7mt8avwn-1m7wlh59vunohq45x3g075r2h 172.31.24.234:2377
This node joined a swarm as a worker.

Verify the swarm cluster:

$ sudo docker node ls
ID                           HOSTNAME          STATUS  AVAILABILITY  MANAGER STATUS
6he3dgbanee20h7lul705q196    ip-172-31-27-191  Ready   Active        
82f5ud4z97q7q74bz9ycwclnd *  ip-172-31-24-234  Ready   Active        Leader
c7daeowfoyfua2hy0ueiznbjo    ip-172-31-26-52   Ready   Active


To test this, I will label node1 and node3, create a container pinned to node1, shut it down, and then create it again on node3 with the same volume mount. We should then see that the files created by both containers exist in the same shared volume.

Label swarm nodes:

$ sudo docker node update --label-add nodename=node1 ip-172-31-24-234
$ sudo docker node update --label-add nodename=node3 ip-172-31-26-52

Check the labels:

$ sudo docker node inspect --pretty ip-172-31-26-52
ID:			c7daeowfoyfua2hy0ueiznbjo
Labels:
 - nodename = node3
Hostname:		ip-172-31-26-52
Joined at:		2017-01-06 22:44:17.323236832 +0000 utc
Status:
 State:			Ready
 Availability:		Active
Platform:
 Operating System:	linux
 Architecture:		x86_64
Resources:
 CPUs:			1
 Memory:		1.952 GiB
Plugins:
  Network:		bridge, host, null, overlay
  Volume:		local
Engine Version:		1.12.1

Next we create a Docker service on node1 that will create a file in the shared volume.
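Docker requires the source of a bind mount to exist before the task starts. Since /swarm/volumes sits on the GlusterFS mount, creating the directory once on any node makes it visible on all of them:

$ sudo mkdir -p /swarm/volumes/testvol

Now create the service: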

$ sudo docker service create --name testcon --constraint 'node.labels.nodename == node1' --mount type=bind,source=/swarm/volumes/testvol,target=/mnt/testvol ubuntu:latest /bin/touch /mnt/testvol/testfile1.txt

Verify service creation:

$ sudo docker service ls
ID            NAME     REPLICAS  IMAGE    COMMAND
duvqo3btdrrl  testcon  0/1       busybox  /bin/bash

Check that it’s running on node1:

$ sudo docker service ps testcon
ID                         NAME           IMAGE          NODE              DESIRED STATE  CURRENT STATE           ERROR
6nw6sm8sak512x24bty7fwxwz  testcon.1      ubuntu:latest  ip-172-31-24-234  Ready          Ready 1 seconds ago     
6ctzew4b3rmpkf4barkp1idhx   \_ testcon.1  ubuntu:latest  ip-172-31-24-234  Shutdown       Complete 1 seconds ago

Also check the volume mounts:

$ sudo docker inspect testcon
[
    {
        "ID": "8lnpmwcv56xwmwavu3gc2aay8",
        "Version": {
            "Index": 26
        },
        "CreatedAt": "2017-01-06T23:03:01.93363267Z",
        "UpdatedAt": "2017-01-06T23:03:01.935557744Z",
        "Spec": {
            "ContainerSpec": {
                "Image": "busybox",
                "Args": [...],
                "Mounts": [
                    {
                        "Type": "bind",
                        "Source": "/swarm/volumes/testvol",
                        "Target": "/mnt/testvol"
                    }
                ]
            },
            "Resources": {
                "Limits": {},
                "Reservations": {}
            },
            "RestartPolicy": {
                "Condition": "any",
                "MaxAttempts": 0
            },
            "Placement": {
                "Constraints": [
                    "nodename == node1"
                ]
            }
        },
        "ServiceID": "duvqo3btdrrlwf61g3bu5uaom",
        "Slot": 1,
        "Status": {
            "Timestamp": "2017-01-06T23:03:01.935553276Z",
            "State": "allocated",
            "Message": "allocated",
            "ContainerStatus": {}
        },
        "DesiredState": "running"
    }
]

Now shut down the service and create it again on node3.
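Since the new service reuses the name “testcon”, the old one has to be removed first:

$ sudo docker service rm testcon

Then create the service, this time constrained to node3: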

$ sudo docker service create --name testcon --constraint 'node.labels.nodename == node3' --mount type=bind,source=/swarm/volumes/testvol,target=/mnt/testvol ubuntu:latest /bin/touch /mnt/testvol/testfile3.txt

Verify that it ran on node3:

$ sudo docker service ps testcon
ID                         NAME           IMAGE          NODE             DESIRED STATE  CURRENT STATE           ERROR
5p57xyottput3w34r7fclamd9  testcon.1      ubuntu:latest  ip-172-31-26-52  Ready          Ready 1 seconds ago     
aniesakdmrdyuq8m2ddn3ga9b   \_ testcon.1  ubuntu:latest  ip-172-31-26-52  Shutdown       Complete 2 seconds ago

Now check that the files created by both containers exist in the same volume:

$ ls -l /swarm/volumes/testvol/
total 0
-rw-r--r-- 1 root root 0 Jan  6 23:59 testfile3.txt
-rw-r--r-- 1 root root 0 Jan  6 23:58 testfile1.txt
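Because /swarm/volumes is the replicated GlusterFS mount, the same listing on any other node shows identical contents (here checked over SSH, purely as an illustration):

$ ssh node2 ls -l /swarm/volumes/testvol/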

Containerizing Alfresco

What is Alfresco?

Alfresco is an open source document management system for Microsoft Windows and Unix-like operating systems. It comes in Community and Enterprise editions that differ only in scalability and high-availability features. It’s powerful, easy to use, and more mature than other open source document management systems. Alfresco is Java-based software; the Community edition comes bundled with the Tomcat application server and uses a PostgreSQL database as its backend.


The scope here is to containerize Alfresco. The setup will have two containers: one for PostgreSQL and one for Alfresco. The container entry point will detect whether Alfresco is installed; if not, it installs it, otherwise it starts the Tomcat server to run the Alfresco application. I’m relying on Docker Compose to run the environment.
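As a rough illustration, such an entry point could look like the following sketch (the script name, paths, option file, and installer flags are assumptions, not the exact implementation):

#!/bin/bash
# entrypoint.sh (hypothetical): install Alfresco on first run, then start Tomcat.
ALF_HOME=/opt/alfresco

# If Tomcat is missing, Alfresco has not been installed yet.
if [ ! -x "$ALF_HOME/tomcat/bin/catalina.sh" ]; then
    # Run an unattended installation; the installer binary and its option
    # file are expected to be in the shared volume.
    "$ALF_HOME/alfresco-community-installer-201605-linux-x64.bin" \
        --mode unattended --optionfile "$ALF_HOME/install_opts.cfg"
fi

# Keep Tomcat in the foreground so the container stays alive.
exec "$ALF_HOME/tomcat/bin/catalina.sh" run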


– Set up the directory hierarchy.
The following is the directory hierarchy for the container folder (the entry point and configuration file names are placeholders, matching the sketch above):

  alfresco/                     Alfresco Docker container folder
    entrypoint.sh               Docker entry point for Alfresco container
    install_opts.cfg            Configuration parameters for Alfresco installation
    Dockerfile                  Dockerfile to build Alfresco container
    alfresco/                   Shared volume for Alfresco
  db_store/                     PostgreSQL Docker container folder
    data/                       Shared volume for PostgreSQL database storage
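For orientation, the docker-compose.yml tying the two containers together might look roughly like this (image tag, hostname, and volume paths are assumptions):

$ cat docker-compose.yml
db_store:
  image: postgres:9.4
  volumes:
    - ./db_store/data:/var/lib/postgresql/data
alfresco:
  build: ./alfresco
  hostname: dockerhost.localdomain
  ports:
    - "8080:8080"
  volumes:
    - ./alfresco/alfresco:/opt/alfresco
  links:
    - db_store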

– On the container host: create group “alfresco” (GID 888) and user “alfresco” (UID 888), with home directory /opt/alfresco (the Alfresco root).
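Something along these lines (standard Linux user management commands, shown for convenience):

$ sudo groupadd -g 888 alfresco
$ sudo useradd -u 888 -g 888 -d /opt/alfresco -m alfresco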

– Log in as user “alfresco”.
– Move the above hierarchy to /opt/alfresco.
– Build the Alfresco container:

$ docker-compose build alfresco

– If SELinux is enabled, run the following command on all shared volumes (this was needed on CentOS 7.2 with Docker 1.9.1):

$ chcon --recursive --type=svirt_sandbox_file_t --range=s0 /path/to/volume

– Copy the Alfresco installation file "alfresco-community-installer-201605-linux-x64.bin" to the alfresco shared volume.
– Copy the installation configuration file to the alfresco shared volume and change its hostname parameter to match the hostname of the Docker host.

– Change the hostname as well in the docker-compose.yml file (alfresco service).

– Bring the environment up without the -d parameter the first time, to make sure everything installs OK:

$ docker-compose up

– After the installation finishes and Alfresco is accessible on port 8080, press Ctrl-C.
– Start the environment in daemon mode:

$ docker-compose up -d
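To confirm both containers are up in the background:

$ docker-compose ps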

Now you can enjoy the Alfresco document management system in a Docker container.