Features such as High Availability (HA), Elastic Scaling and Disaster Recovery (DR) are no longer restricted to Tier-1 mission-critical applications & services. These are now table stakes for any enterprise-grade cloud provider, and enterprises can leverage them at will - they are fundamental building blocks of the cloud platform. Oracle Cloud Infrastructure (OCI) offers several modern, state-of-the-art capabilities including Bare Metal servers for extreme performance & HPC, Availability Domain & Fault Domain constructs for fault tolerance, HA & DR, elastic scaling of compute & storage independently for future-proofing, RAC for highly available databases, Regional Subnets for seamless datacenter resiliency & failover, and LBaaS (Load Balancer as a Service) for platform-driven, fully managed instance failovers.
Although there are many different ways to achieve HA on Oracle Cloud Infrastructure, we will look at a simpler, more primitive method that uses just core OCI capabilities along with open-source technologies.
Note: This could be more elegantly achieved using OCI's native LBaaS PaaS service as well.
However, in certain situations - Windows Server based apps, container apps, microservice constructs - HAProxy (or a similar proxy) & Keepalived may more closely match an existing on-premises setup or a solution preference.
In this article we will walk through detailed step-by-step instructions on how to install, configure and achieve HA & failover using HAProxy & Keepalived.
This article presumes some exposure to HAProxy and Keepalived concepts, as well as cloud constructs such as virtual networking, subnets, private/public IPs, security lists and route tables.
A little bit of python & bash shell scripting knowledge would be helpful too.
If not, don't worry - I will try my best to point out / reference documentation as much as possible.
Pre-Requisites:
1) An Oracle Cloud Account / Tenancy. If you don't have one, you can request a trial instance here.
2) A Compartment that would host our HA instances
3) A VCN (Virtual Cloud Network) with at least 1 public subnet
4) Administrator access on the cloud tenancy to configure Identity, Network Rules, Policies & Dynamic Groups
Spin Up 2 OCI VM Instances:
To start, let's first spin up 2 VMs on OCI.
My VCN looks like below;
Need help setting up a VCN on OCI? Refer here
Note: In this example, we have HA within an AD (Availability Domain) leveraging the "Fault Domain" resilience. However, this can be quickly reconstructed with a "regional subnet" construct for a full "site resiliency".
Within a public subnet, spin up 2 VMs.
In my example I have 2 Angry Bird servers - Terence & Stella.
Both run a standard single-core VM with Oracle Enterprise Linux in the same availability domain - but are placed strategically in different fault domains for intra-AD HA.
Now, you should see 2 instances up & running within your VCN.
Let's now SSH into our instances.
ssh -i <private_key> opc@<public_ip>
Note: If you are unable to SSH into your instance, make sure:
1) The instance is indeed spun up within a public subnet
2) Security List has port 22 enabled (by default this should be there)
3) Ensure you have an internet gateway (IGW) attached to your VCN and route table configured with the route to IGW
4) Finally, at the OS level, make sure you open up the firewall ports. For a quick test, you can stop the firewall on Linux instances using the command service firewalld stop (a safer alternative that opens just the required ports is shown below).
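If you would rather not disable the firewall entirely, a minimal alternative (assuming firewalld on Oracle Linux) is to open just the ports we need:
# Open SSH (22) and HTTP (80) instead of stopping firewalld
sudo firewall-cmd --permanent --add-port=22/tcp
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --reload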
Install your preferred proxy service:
You can choose to install any of your preferred http/https proxy or load balancer services. Some of the most popular ones include Apache httpd, Nginx and HAProxy.
In my example, I used HAProxy.
Install HAProxy on both Terence & Stella VM instances.
sudo su
yum install haproxy
Since we are just going to test the failover / HA configuration, we are not going to actually create any backend sets / services. When the reverse proxy service is called, it will be directed to render a static error page.
Let's create a simple html page under /etc/haproxy/errorfiles/errorpage.http
Replace {ServerName} with appropriate VM names so it helps distinguish the service when it fails over.
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html>
<head>
<title>503 - Service Unavailable</title>
</head>
<body>
<div>
<h2>Hello from {ServerName}</h2>
</div>
</body>
</html>
Configure /etc/haproxy/haproxy.cfg with the errorfile entry and a frontend bound on port 80. Remember, we don't actually have any backends configured, but that's okay for our failover test.
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000
    errorfile 503 /etc/haproxy/errorfiles/errorpage.http

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend hatest
    mode http
    bind *:80
    default_backend app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
    balance roundrobin
    server static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    mode http
    balance roundrobin
    option httpchk HEAD / HTTP/1.1
    server app1 127.0.0.1:5001 check
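Optionally, before starting the service, you can sanity-check the configuration file:
# Validate the HAProxy configuration before starting the service
haproxy -c -f /etc/haproxy/haproxy.cfg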
Now, let's configure HAProxy to start on VM boot.
chkconfig haproxy on
service haproxy start
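As a quick local sanity check (assuming curl is available on the instance), the frontend should already answer with our custom 503 page:
# With no healthy backend, HAProxy serves the static error page
curl -i http://localhost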
Try hitting both server instances with their respective public IP addresses;
http://<public_ip_of_instance>
and you should see the appropriate server's 503 error page with welcome messages.
If you are unable to connect from your browser, check that the public subnet's security list allows port 80, that the route table has the IGW route, and that the OS firewall is not blocking port 80.
Configure Secondary IP:
In order for the instances to fail over, we need a reserved public IP that can shuttle across instances for seamless HA/failover.
In the OCI Console, go to the primary VM instance - pick any one VM that would serve as the "master" node. Click on "Attached VNICs" under the "Resources" section.
Click on the "Primary VNIC" > "IP Addresses" > choose "Assign Private IP Address".
In the dialog, enter a private ip that is unused within the VCN/Subnet. In my case, I picked 10.0.0.4.
Under the "Public IP Address" section, choose "Reserved Public IP" and select "Create a New Reserved Public IP". Optionally give it a name.
This would be our reserved public ip - which would move along with our chosen private ip address.
Go back to the primary VNIC of your VM instance and notice you now have 2 public IPs (one ephemeral IP that is OCI-assigned and another reserved IP) assigned to the instance. This technically means the VM can be accessed with either IP.
However, we need to make sure the OS config is updated to reflect this.
The quicker option is to execute the following command (however, this will not persist across a VM reboot).
In my case, the command looks like below;
ip addr add 10.0.0.4/25 dev ens3 label ens3:0
Syntax: ip addr add <address>/<subnet_prefix_len> dev <phys_dev> label <phys_dev>:<addr_seq_num>
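You can verify that the secondary IP is now visible on the interface (the interface name may differ on your image):
# The label ens3:0 with 10.0.0.4 should now appear under the interface
ip addr show ens3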
To make this change persistent, create an ifcfg file named /etc/sysconfig/network-scripts/ifcfg-<phys_dev>:<addr_seq_num>. To continue with the preceding example, the file name would be /etc/sysconfig/network-scripts/ifcfg-ens3:0, and the contents would be:
DEVICE="ens3:0"
BOOTPROTO=static
IPADDR=10.0.0.4
NETMASK=255.255.255.128
ONBOOT=yes
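To pick up the persistent config without a reboot, you can try bringing the labeled sub-interface up (assuming the legacy network-scripts tooling is in use):
sudo ifup ens3:0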
Note: This step has to be performed on both VM instances - when the private IP moves (along with the reserved IP) to the standby instance, its OS must be able to recognize the IP mapping.
To verify this change, try accessing the Terence VM with both IP addresses via a browser.
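Or, from a terminal, something along these lines (substitute your own IP addresses):
# Both the OCI-assigned ephemeral public IP and the new reserved public IP should return the 503 page
curl -i http://<ephemeral_public_ip>
curl -i http://<reserved_public_ip>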
Install KeepAlived:
We will leverage Keepalived to maintain our server pool, monitor the VM instances and shuttle the IP address. In our example we will use the VRRP protocol and unicast IP addressing.
Make sure to add a rule for the VRRP protocol (IP protocol 112) to the subnet security list. This allows the VM instances to exchange VRRP advertisements.
Let's install keepalived on both VM instances.
sudo su
yum install keepalived
Modify the keepalived config file at /etc/keepalived/keepalived.conf.
My config files for the Terence (Primary / Master) and Stella (Secondary / Backup) instances look like below.
Note the unicast source IP (IP of the current server instance), the unicast peer IP (IP of the other instance) and the state field. Make sure the priority is higher on the master node.
! Configuration File for keepalived - Terence (master node)

vrrp_script check_haproxy {
    ! Mark the node as failed when the haproxy process disappears
    script "pidof haproxy"
    interval 5
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens3
    virtual_router_id 50
    priority 101
    unicast_src_ip 10.0.0.2
    unicast_peer {
        10.0.0.3
    }
    track_script {
        check_haproxy
    }
    ! Invoked when this node becomes master - moves the floating IP via the OCI SDK
    notify_master /etc/keepalived/failover.sh
}
! Configuration File for keepalived - Stella (backup node)

vrrp_script check_haproxy {
    script "pidof haproxy"
    interval 5
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens3
    virtual_router_id 50
    priority 99
    unicast_src_ip 10.0.0.3
    unicast_peer {
        10.0.0.2
    }
    track_script {
        check_haproxy
    }
    notify_master /etc/keepalived/failover.sh
}
Configure Instance Principals in OCI:
We will leverage OCI Instance Principals to allow instances within the server pool to manage virtual network resources. This is what enables the reserved IP to move across VM instances.
Create a dynamic group with a matching rule that ensures all VMs within our server pool are added to the group. More details on how to create a dynamic group here.
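As an illustration, a matching rule scoped to the compartment hosting the two VMs could look like the following (the compartment OCID below is a placeholder - use your own):
Any {instance.compartment.id = 'ocid1.compartment.oc1..aaaaaaaaaxxxxxxxxxxxxxxxxa'}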
Now, create a policy to allow the dynamic group to manage virtual network connectivity.
In this case, the policy would look like below;
Allow dynamic-group HAProxyDG to manage virtual-network-family in compartment id ocid1.compartment.oc1..aaaaaaaaaxxxxxxxxxxxxxxxxa
Install Python OCI SDK:
Let's now install python oci sdk on both VM instances.
sudo su
yum install -y python
# Download and install pip
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
# install python OCI SDK
pip install oci
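A quick check that the SDK is installed and importable:
# Should print the installed OCI Python SDK version
python -c "import oci; print(oci.__version__)"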
Let's now start the keepalived daemon on both VMs. You can use chkconfig to make the service start on boot; however, there have been several intermittent issues reported with running keepalived as a service.
A workaround is to start it from the command line using;
keepalived -D
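To confirm the daemon is running and to watch VRRP state transitions, you can check the process and the system log (keepalived logs to syslog, which is /var/log/messages on Oracle Linux):
pidof keepalived
tail -f /var/log/messages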
Python Script using OCI SDK to Migrate IP from Master VM to Backup VM:
import sys, oci, logging, os

# Reassign the floating (secondary) private IP - and its reserved public IP - to the given VNIC
def assign_to_different_vnic(private_ip_id, vnic_id):
    update_private_ip_details = oci.core.models.UpdatePrivateIpDetails(vnic_id=vnic_id)
    network.update_private_ip(private_ip_id, update_private_ip_details)

if __name__ == '__main__':
    # Authenticate as the instance itself via Instance Principals (no API key files needed)
    signer = oci.auth.signers.InstancePrincipalsSecurityTokenSigner()
    network = oci.core.VirtualNetworkClient(config={}, signer=signer)
    new_vnic_id = sys.argv[1]    # OCID of this VM's VNIC
    privateip_id = sys.argv[2]   # OCID of the floating private IP
    assign_to_different_vnic(privateip_id, new_vnic_id)
We will now call this python script from a shell script.
Create this shell script under /etc/keepalived/failover.sh
Note: This script will be invoked by the keepalived daemon
#!/bin/bash
logger -s "Floating the private/public VIPs:"
python /home/opc/claimip.py {ocid of vnic} {ocid of private ip} > >(logger -s -t $(basename $0)) 2>&1
logger -s "Private/public VIPs attached to the NEW Master Node!"
logger -s "Floating the private/public VIPs:"
python /home/opc/claimip.py {ocid of vnic} {ocid of private ip} > >(logger -s -t $(basename $0)) 2>&1
logger -s "Private/public VIPs attached to the NEW Master Node!"
Make sure that on each VM instance the VNIC OCID is set to that instance's own VNIC. The private IP OCID remains the same - since it is the same IP (in our case 10.0.0.4) that floats across VMs.
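Also make sure the failover script is executable on both nodes:
chmod +x /etc/keepalived/failover.sh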
We are all set. Access the reserved public ip (haproxy service) from a browser;
You should now see "Hello from Terence" - since the ip is assigned to this master node.
Now, try stopping the HAProxy service on Terence: service haproxy stop.
Refreshing the page should now render "Hello from Stella" - as the IP has moved over to the backup node.
We have now created an HA configuration. For fun, start the HAProxy service back on Terence and stop the HAProxy service on Stella.
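For example:
# On Terence (currently the backup after our first failover)
service haproxy start
# On Stella (currently holding the floating IP)
service haproxy stop
Refreshing the page should flip back to "Hello from Terence" once keepalived detects the change and moves the IP again.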