Objective:
Active Directory users in the east coast region will be able to create containers and objects in object storage that are replicated to, and consumed by, Active Directory users in the west coast region, and vice versa.
Prerequisites:
The following is assumed for this deployment:
- Two OpenStack deployments: East Coast and West Coast Datacenters respectively
- Active Directory Domain Controllers replicated in each site
- Ceph clusters configured for each site
- Keystone deployments independent from each other
- Keystone using LDAP (Active Directory) for authentication
- Keystone utilizing Fernet tokens
Software used for this demonstration:
- Red Hat OpenStack Platform (RH OSP) 11 (based on OpenStack Ocata release)
- Red Hat Enterprise Linux (RHEL) 7.4
- 2 RH OSP Director 11 virtual machines (east and west respectively)
- Ceph 2.2
Deployment:
To deploy the OpenStack environments for each site, I deployed RH OSP 11 Director at each site (director11east.lab.lan & director11west.lab.lan) on virtual machines with 24 GB RAM, 8 vCPUs, a 60 GB disk, and 2 NICs (public and provisioning) each.
I used the following TripleO templates and custom templates to create the two overcloud deployments, changing only the hostnames, IPs, subnets, and VLANs between the east and west overclouds: east templates & west templates
Once you have deployed both OpenStack sites (east and west respectively), reference my ad-post.sh script in the templates above to create the Keystone domain that points to Active Directory for authentication. Within this newly created Keystone v3 domain, create a new tenant (project) called “east” in both the East and West OpenStack environments, as sketched below.
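The domain creation itself is handled by ad-post.sh; for the tenant, a minimal sketch of the commands to run against each overcloud (assuming the AD-backed domain is named LAB, as used later in this walkthrough, and that the _member_ role exists in your deployment) looks like this:

source overcloudrc.v3
openstack project create --domain LAB east
# grant the AD test user access to the new project
openstack role add --user kholden --user-domain LAB --project east --project-domain LAB _member_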
IMPORTANT: Even though we have created a tenant with the same name in both OpenStack environments, the unique ID associated with that tenant name is different in each cluster. We need to change the West Coast OpenStack cluster so that its ID for the “east” tenant matches the ID in the East Coast OpenStack cluster.
On one of the OpenStack controllers in the East Coast OpenStack Environment, use the following commands to obtain the ID associated with the tenant named “east”:
source overcloudrc.v3
openstack project show east --domain LAB
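If you only want the ID itself (for example to paste into the SQL statement below), the openstack client can print just that column:

east_project_id=$(openstack project show east --domain LAB -f value -c id)
echo $east_project_id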
On one controller in the West Coast OpenStack Environment, run the following commands as root. RUN THIS ONLY ONCE AND FROM ONLY ONE CONTROLLER; IT WILL REPLICATE TO THE OTHERS.
mysql keystone

# View the field you are about to change
select * from project where name='east';

# Change the existing ID to the project ID gathered from the East Coast OpenStack Environment
update project set id='ID_GATHERED_FROM_EAST_OPENSTACK_ENV' where name='east';
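As a quick sanity check, you can query the project again from a West Coast controller; the ID printed should now match the one gathered from the East Coast environment:

source overcloudrc.v3
openstack project show east --domain LAB -f value -c id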
East Coast Cluster
Once each OpenStack site has been deployed, we need to configure the Ceph RADOS Gateway REALMs. We will start with the East Coast OpenStack environment and make it the master REALM. This script is intended for new Ceph clusters with absolutely ZERO data on them. THIS WILL DESTROY EXISTING CEPH POOLS AND CREATE NEW ONES!!!! The PG numbers I used are based on my environment, but you can generate your own settings using the Red Hat Ceph Placement Groups calculator: https://access.redhat.com/labsinfo/cephpgc
#!/bin/bash
# East Coast Ceph RADOS Gateway (Master REALM) configuration script
# To be run only once on the east coast ceph rados gateway controller
# To be run only as root

if ! [ $(id -u) = 0 ]; then
    echo "I am not root!"
    exit 1
fi

if ! [[ -f ~/eastrc.v3 ]] ; then
    echo "No RC file exists in /root/"
    exit
fi

unset OS_PASSWORD OS_AUTH_URL OS_USERNAME OS_TENANT_NAME OS_NO_CACHE OS_IDENTITY_API_VERSION OS_PROJECT_DOMAIN_NAME OS_USER_DOMAIN_NAME
source eastrc.v3

## Variables ##
# gather ceph rados gateway public endpoint URL
east_pub_endpoint=$(crudini --get /etc/ceph/ceph.conf client.radosgw.gateway rgw_keystone_url)
# Create a name for your realm. This is global
realm_name=redhat
# Create a name for your zonegroup. This is global
zonegroup=us
# Create a name for your zone. This is local to each ceph deployment
rgw_zone=us-east
# AD Test User
ad_test_user=kholden

### Script ###
tar cvf ~/ceph.backup.tar /etc/ceph

radosgw-admin realm create --rgw-realm=$realm_name --default
radosgw-admin zonegroup create --rgw-zonegroup=$zonegroup --endpoints=$east_pub_endpoint --rgw-realm=$realm_name --master --default
radosgw-admin zone create --rgw-zonegroup=$zonegroup --rgw-zone=$rgw_zone --master --default --endpoints=$east_pub_endpoint
radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=default

ceph osd dump | grep pool | grep default | awk -F\' '{print $2}' | xargs -P8 -I{} rados rmpool {} {} --yes-i-really-really-mean-it

# create new pools for the us-east zone
ceph osd pool create us-east.rgw.intent-log 16
ceph osd pool set us-east.rgw.intent-log size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.log 16
ceph osd pool set us-east.rgw.log size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.buckets.data 128
ceph osd pool set us-east.rgw.buckets.data size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.buckets.extra 16
ceph osd pool set us-east.rgw.buckets.extra size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.buckets.index 16
ceph osd pool set us-east.rgw.buckets.index size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.control 16
ceph osd pool set us-east.rgw.control size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.gc 16
ceph osd pool set us-east.rgw.gc size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.data.root 16
ceph osd pool set us-east.rgw.data.root size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.usage 16
ceph osd pool set us-east.rgw.usage size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.users 16
ceph osd pool set us-east.rgw.users size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.users.email 16
ceph osd pool set us-east.rgw.users.email size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.users.swift 16
ceph osd pool set us-east.rgw.users.swift size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.users.uid 16
ceph osd pool set us-east.rgw.users.uid size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-east.rgw.meta 16
ceph osd pool set us-east.rgw.meta size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done

radosgw-admin period update --commit
radosgw-admin period update --commit
radosgw-admin zone delete --rgw-zone=default
radosgw-admin period update --commit
radosgw-admin zonegroup delete --rgw-zonegroup=default
radosgw-admin period update --commit

radosgw-admin user create --uid="synchronization-user" --display-name="Synchronization User" --system
access_key=$(radosgw-admin user info --uid="synchronization-user" | grep access_key | awk '{print $2}' | sed 's/^"\(.*\)".*/\1/')
secret_key=$(radosgw-admin user info --uid="synchronization-user" | grep secret | awk '{print $2}' | sed 's/^"\(.*\)".*/\1/')
radosgw-admin zone modify --rgw-zone=us-east --access-key=$access_key --secret=$secret_key
radosgw-admin period update --commit

cp /etc/ceph/ceph.conf ~/ceph.conf_backup
echo "rgw_zone=us-east" >> /etc/ceph/ceph.conf
chown ceph /etc/ceph/*.keyring
systemctl stop ceph-radosgw.target
sleep 10
systemctl start ceph-radosgw.target
# Check RGW sync status
radosgw-admin sync status
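At this point only the master zone exists, so the output should simply identify the realm, zonegroup, and zone and report that there is nothing to sync. Roughly (the UUIDs below are placeholders):

#           realm xxxxxxxx-xxxx-... (redhat)
#       zonegroup xxxxxxxx-xxxx-... (us)
#            zone xxxxxxxx-xxxx-... (us-east)
#   metadata sync no sync (zone is master)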
# Create an openstack credentials file with your AD user, domain, and tenant information
cp eastrc.v3 ken
# In the new 'ken' file (see the sample below):
#   change the username from ‘admin’ to ‘kholden’ (my AD username)
#   change the domain settings from ‘default’ to ‘LAB’
#   change the project from ‘admin’ to ‘east’
#   change the password to the AD password for user ‘kholden’
source ken
openstack container list
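For reference, after those edits the 'ken' file exports something along these lines (the password and auth URL shown here are placeholders; keep the OS_AUTH_URL from your own eastrc.v3):

export OS_USERNAME=kholden
export OS_PASSWORD='<AD password for kholden>'
export OS_PROJECT_NAME=east
export OS_USER_DOMAIN_NAME=LAB
export OS_PROJECT_DOMAIN_NAME=LAB
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://<east-keystone-vip>:5000/v3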
Next we need to configure the West OpenStack environment to be part of the Ceph RADOS Gateway REALM (I called the REALM “redhat”) that we created in the previous step.
IMPORTANT: For the master realm setup, I created both an access key and a secret key to be used for authentication within the REALM. YOU MUST USE THE ACCESS AND SECRET KEYS GENERATED IN THE PREVIOUS STEP FOR THE SCRIPT BELOW. DO NOT USE THE KEYS I HARDCODED BELOW.
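If you no longer have those keys handy, they can be read back from the synchronization user on an east coast controller:

radosgw-admin user info --uid="synchronization-user" | grep -E 'access_key|secret_key'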
West Coast Ceph Cluster:
#!/bin/bash
# West Coast Ceph RADOS Gateway (secondary zone) configuration script
# To be run only once on the west coast ceph rados gateway controller
# To be run only as root

if ! [ $(id -u) = 0 ]; then
    echo "I am not root!"
    exit 1
fi

if ! [[ -f ~/westrc.v3 ]] ; then
    echo "No RC file exists in /root/"
    exit
fi

unset OS_PASSWORD OS_AUTH_URL OS_USERNAME OS_TENANT_NAME OS_NO_CACHE OS_IDENTITY_API_VERSION OS_PROJECT_DOMAIN_NAME OS_USER_DOMAIN_NAME
source westrc.v3

## Variables ##
# gather ceph rados gateway public endpoint URL
EAST_PUB_ENDPOINT=$(crudini --get /etc/ceph/ceph.conf client.radosgw.gateway rgw_keystone_url)
# Create a name for your realm. This is global
REALM_NAME=redhat
# Create a name for your zonegroup. This is global
ZONE_GROUP=us
# Create a name for your zone. This is local to each ceph deployment
RGW_ZONE=us-west
# AD Test User
AD_TEST_USER=kholden
# AD Test User Password
AD_TEST_USER_PASSWORD=''
# AD Domain Name
AD_DOMAIN=LAB
# OpenStack Project Name
PROJECT_NAME=east

### Script ###
tar cvf ~/ceph.backup.tar /etc/ceph

# Pull the realm and period from the east coast master zone.
# Replace the hardcoded access/secret keys below with the synchronization-user keys from your east coast setup.
radosgw-admin realm pull --url=$EAST_PUB_ENDPOINT --access-key=1HMKB05PQQ78YV5US3KY --secret=OuWfjeqO7Z15hUr5FLf37uUph8XWNb3Sylctrvpr
radosgw-admin realm default --rgw-realm=redhat
radosgw-admin period pull --url=$EAST_PUB_ENDPOINT --access-key=1HMKB05PQQ78YV5US3KY --secret=OuWfjeqO7Z15hUr5FLf37uUph8XWNb3Sylctrvpr
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west --access-key=1HMKB05PQQ78YV5US3KY --secret=OuWfjeqO7Z15hUr5FLf37uUph8XWNb3Sylctrvpr --endpoints=http://192.168.15.120:8080
radosgw-admin zone delete --rgw-zone=default

ceph osd dump | grep pool | grep default | awk -F\' '{print $2}' | xargs -P8 -I{} rados rmpool {} {} --yes-i-really-really-mean-it

# create new pools for the us-west zone
ceph osd pool create us-west.rgw.intent-log 16
ceph osd pool set us-west.rgw.intent-log size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.log 16
ceph osd pool set us-west.rgw.log size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.buckets.data 128
ceph osd pool set us-west.rgw.buckets.data size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.buckets.extra 16
ceph osd pool set us-west.rgw.buckets.extra size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.buckets.index 16
ceph osd pool set us-west.rgw.buckets.index size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.control 16
ceph osd pool set us-west.rgw.control size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.gc 16
ceph osd pool set us-west.rgw.gc size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.data.root 16
ceph osd pool set us-west.rgw.data.root size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.usage 16
ceph osd pool set us-west.rgw.usage size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.users 16
ceph osd pool set us-west.rgw.users size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.users.email 16
ceph osd pool set us-west.rgw.users.email size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.users.swift 16
ceph osd pool set us-west.rgw.users.swift size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.users.uid 16
ceph osd pool set us-west.rgw.users.uid size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done
ceph osd pool create us-west.rgw.meta 16
ceph osd pool set us-west.rgw.meta size 2
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .; sleep 1; done

cp /etc/ceph/ceph.conf ~/
echo "rgw_zone=us-west" >> /etc/ceph/ceph.conf
chown ceph /etc/ceph/*.keyring
radosgw-admin period update --commit
Run
systemctl stop ceph-radosgw.target
# Wait until you see the following message
# Broadcast message from systemd-journald@east-controller1.lab.lan (Fri 2017-08-25 11:57:53 UTC):
#
# haproxy[35919]: proxy ceph_rgw has no server available!
Then run
systemctl start ceph-radosgw.target
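Once the gateway is back up on the west side, it is worth confirming that the secondary zone is pulling from the master (the exact output wording varies by Ceph version):

radosgw-admin sync status
# expect the redhat realm, the us zonegroup, the us-west zone, and
# metadata/data sync against the us-east source reported as caught up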
# Create openstack credentials file with AD user, Domain, and Tenant information:
cp westrc.v3 ken
# In the new 'ken' file:
#   change the username from ‘admin’ to ‘kholden’ (my AD username)
#   change the domain settings from ‘default’ to ‘LAB’
#   change the project from ‘admin’ to ‘east’
#   change the password to the AD password for user ‘kholden’
# log out and back in to clear any stale environment variables
source ken
openstack container list
You should now be able to create containers and objects in the east coast datacenter and see them replicate to the west coast datacenter, and likewise create containers and objects in the west coast datacenter and see them replicate to the east coast datacenter.
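As a quick end-to-end check (the container and object names below are only examples), create something on the east side as the AD user and then look for it from the west side:

# East coast, after sourcing the 'ken' credentials file
openstack container create replication-test
openstack object create replication-test /etc/hostname

# West coast, after sourcing the 'ken' credentials file there
openstack container list
openstack object list replication-test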