Oracle Always Free Cloud instances in HA configuration

Marcelo Ochoa · Published in ITNEXT · 4 min read · Oct 26, 2019


The Always Free offer from Oracle Cloud is a great resource for deploying minimal apps or as a development environment.

In this post we show how to deploy a minimalist two-node Docker Swarm cluster with HA functionality. For this solution I chose these components:

The Always Free cloud offer includes:

  • 2 x VM.Standard.E2.1.Micro shape, each with:
      ◦ 1 GB RAM
      ◦ 48 GB boot disk
      ◦ 2 cores of an AMD EPYC 7551 32-Core Processor
  • 100 GB of block volume (96 GB are consumed if you start the above instances)
  • 2 Autonomous Databases:
      ◦ one for OLTP, including 1 OCPU (two cores), 20 GB data, automatic daily backup, no auto-scaling
      ◦ one for DW, including 1 OCPU (two cores), 20 GB data, no backup, no auto-scaling
  • A load balancer (10Mbps-Micro shape)

With this scenario, what can we do? Well, a lot! The following picture shows my deployment.

Let’s see it in more detail.

Compute -> Instance -> Create instance steps:

Instance name and shape selection
Check "Always Free Eligible"
Show Shape, Network and Storage Options -> enable a public IP to access using ssh
Select an ssh public key from your notebook

Repeat the above steps for node2 creation; you will get two compute instances up and running a few minutes later.
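If you prefer the command line over the web console, the same instances can be launched with the OCI CLI. A minimal sketch from your notebook, assuming the CLI is already configured; the OCIDs and availability domain below are placeholders you must replace with your own:

$ oci compute instance launch \
    --availability-domain "xxxx:SA-SAOPAULO-1-AD-1" \
    --compartment-id ocid1.compartment.oc1..aaaa \
    --shape VM.Standard.E2.1.Micro \
    --image-id ocid1.image.oc1..aaaa \
    --subnet-id ocid1.subnet.oc1..aaaa \
    --assign-public-ip true \
    --display-name node1 \
    --ssh-authorized-keys-file ~/.ssh/id_rsa.pub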

Post-installation steps (adding Gluster Storage for Oracle Linux), on node1 as root:

# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.1.16 node2 remote
# firewall-cmd --permanent --zone=public --add-port=24007-24008/tcp
# firewall-cmd --permanent --zone=public --add-port=24007-24008/udp
# firewall-cmd --permanent --zone=public --add-port=49152-49156/tcp
# firewall-cmd --permanent --zone=public --add-port=49152-49156/udp
# firewall-cmd --reload
# cd /home/
# yum -y update
# yum install -y oracle-gluster-release-el7
# yum-config-manager --enable ol7_gluster5 ol7_addons ol7_latest ol7_optional_latest ol7_UEKR5
# dd if=/dev/zero of=fs.img count=0 bs=1 seek=20G
# mkfs.xfs -f -i size=512 -L glusterfs fs.img
# mkdir -p /data/glusterfs/myvolume/mybrick
# echo '/home/fs.img /data/glusterfs/myvolume/mybrick xfs defaults 0 0' >> /etc/fstab
# mount -a && df
# yum install -y glusterfs-server
# systemctl enable --now glusterd

Due to a limitation on block volume usage (the two 48 GB boot volumes already consume 96 GB of the 100 GB free quota, leaving no room for extra block volumes), we cut 20 GB out of each boot volume and dedicate it to shared mirrored storage using GlusterFS.
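Because fs.img is created with count=0 and seek=20G, it is a sparse file: it only consumes real disk blocks as data is written to the brick. A quick sanity check; the first column of ls -s shows the allocated size versus the 20G apparent size:

# ls -lsh /home/fs.img
# df -h /data/glusterfs/myvolume/mybrick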

Do the same steps on node2, except for /etc/hosts, which looks like:

# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.1.15 node1 remote

Configuring the software-defined firewall at Oracle Cloud
Networking -> Virtual Cloud Networks -> Virtual Cloud Network Details -> Security Lists (Ingress rules)

Ports allowed to enable Docker Swarm and GlusterFS
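Besides the VCN security list, the host firewall must allow the Swarm ports too. Docker Swarm uses 2377/tcp for cluster management (opened later in the Docker section), 7946/tcp+udp for node discovery, and 4789/udp for overlay network traffic. A sketch of the host-side rules, on both nodes:

# firewall-cmd --permanent --zone=public --add-port=7946/tcp
# firewall-cmd --permanent --zone=public --add-port=7946/udp
# firewall-cmd --permanent --zone=public --add-port=4789/udp
# firewall-cmd --reload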

Prepare the GlusterFS cluster, steps on node1 as root:

# gluster peer probe node2
# gluster peer status
# gluster pool list
UUID Hostname State
c1b9cfd0-6e2f-425a-b3da-46803c6312c7 node2 Connected
54746d21-d1e2-4030-b58c-54a45f22fa3c localhost Connected
# gluster volume create myvolume replica 2 node{1,2}:/data/glusterfs/myvolume/mybrick/brick
# gluster volume start myvolume
# gluster volume info
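Before putting data on the volume it is worth checking that both bricks are online and no heal operations are pending; a quick sketch:

# gluster volume status myvolume
# gluster volume heal myvolume info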

Mount the replicated storage on node1 and node2:

# mkdir /gluster-storage
# echo "localhost:/myvolume /gluster-storage glusterfs defaults,_netdev 0 0" >> /etc/fstab
# mount /gluster-storage && df -h
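A simple end-to-end test of the replication is to write a file on one node and read it from the other (the file name here is arbitrary).

On node1:

# echo "hello from node1" > /gluster-storage/test.txt

On node2:

# cat /gluster-storage/test.txt
hello from node1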

Install Docker on node1 and node2 (the swarm init itself runs only on node1, in the next step):

# yum install -y docker-engine
# systemctl enable --now docker
# firewall-cmd --permanent --zone=public --add-port=2377/tcp
# firewall-cmd --reload

Initialize the Swarm cluster on node1:

# docker swarm init --advertise-addr ens3
# docker swarm join-token manager

Use the join token on node2:

# docker swarm join --token SWMTKN-1-3ud0gg60omt194qnu5fxht9qnuhz7zs6v0wt1pcmj3h1cxcacl-d6536ldpb7ijzx7j9iqly8vlv 10.0.1.15:2377
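After joining, both nodes should be visible as managers. Verify from either node; STATUS should be Ready on both, and MANAGER STATUS should show Leader on one node and Reachable on the other:

# docker node ls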

Basic performance testing of GlusterFS versus local XFS: read and write, parallel and sequential.

Units in MB/s, using dd with the direct flag
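The numbers were gathered with dd using direct I/O to bypass the page cache; a sketch of the kind of commands used (block size and count here are assumptions, not the exact test parameters):

# dd if=/dev/zero of=/gluster-storage/ddtest bs=1M count=1024 oflag=direct
# dd if=/gluster-storage/ddtest of=/dev/null bs=1M iflag=direct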

Note that there is only a small difference for parallel writes (both nodes writing to the replicated directory), possibly caused by the extra metadata written to keep both bricks in sync.

With the above configuration we can deploy multiple apps with HA support; for example, we can put a Docker Swarm node into drain state for maintenance and our app will keep up and running.
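A minimal sketch of that maintenance flow, run from any manager node; Swarm reschedules the drained node's tasks on the remaining node:

# docker node update --availability drain node2
(do the maintenance on node2, then bring it back)
# docker node update --availability active node2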

The next post will show examples of deploying multiple services on this cluster; see these screenshots:

Services up and running: nginx (replicated), Portainer, and a private registry with UI
Cluster UI administration using Portainer.io
Private Docker registry with UI and replicated storage
