Moved to OpenShift Online

February 20th, 2015

I’ve got an old server that I have been hosting this blog on. I’m trying to shut it down so I can stop paying for the hosting. So, I’ve moved this blog to OpenShift Online!

RDO OpenStack [Errno 14] Peer cert cannot be verified or peer cert invalid

September 16th, 2014

Over the weekend I started getting failures on one of my RDO nodes when trying to read the repomd.xml file from the openstack-icehouse/epel-6 RDO repo.

[root@host ~]# yum update 
Loaded plugins: priorities, product-id, subscription-manager
This system is receiving updates from Red Hat Subscription Management. [Errno 14] Peer cert cannot be verified or peer cert invalid
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: openstack-icehouse. Please verify its path and try again

But on all my other nodes I was able to run yum without any trouble. Long story short, it boiled down to the ca-certificates package installing a new ca-bundle as an .rpmnew file instead of into place. This file is provided by the ca-certificates package.

[root@host ~]# yum whatprovides /etc/pki/tls/certs/ca-bundle.crt
{ ... snip ... }
ca-certificates-2014.1.98-65.0.el6_5.noarch : The Mozilla CA root certificate bundle
Repo : installed
Matched from:
Other : Provides-match: /etc/pki/tls/certs/ca-bundle.crt
[root@host ~]# cd /etc/pki/tls/certs
[root@host certs]# ll
total 2528
-rw-r--r--. 1 root root 4795 Feb 25 2014 ca-bundle.crt
-rw-r--r--. 1 root root 757191 Feb 25 2014 ca-bundle.crt.exp20140422
-rw-r--r--. 1 root root 786601 Jun 24 02:22 ca-bundle.crt.rpmnew
-rw-r--r--. 1 root root 1005005 Jun 24 02:22
{ ... snip ... }

You can see I have multiple ca-bundle files; this is what yum uses to verify its SSL certs. The newest one is an .rpmnew file, so it’s not being used; the old one is still what’s in effect. Just move the new bundle into place and you’ll be on your way. I didn’t end up having to clean yum, but you may have to run a “yum clean all” after this; I think I had done it beforehand.

[root@host certs]# mv ca-bundle.crt ca-bundle.crt_
[root@host certs]# mv ca-bundle.crt.rpmnew ca-bundle.crt
[root@host certs]# yum update
Loaded plugins: priorities, product-id, subscription-manager
This system is receiving updates from Red Hat Subscription Management.
openstack-icehouse | 2.9 kB 00:00 
{ .. snip .. }

Once the new ca-bundle was in place, the openstack-icehouse repomd.xml file downloaded without trouble. After you get a successful run, go back and clean up the old ca-bundles if you want to.

[root@host certs]# rm ca-bundle.crt_ ca-bundle.crt.exp20140422
rm: remove regular file `ca-bundle.crt_'? y
rm: remove regular file `ca-bundle.crt.exp20140422'? y
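As a general sweep after updates, you can check whether rpm has left any other config files sitting beside the live ones. This helper is my own illustration, not part of the original post:

```shell
# list_rpmnew DIR: print any .rpmnew/.rpmsave files rpm left under DIR
# (defaults to /etc) so they can be reviewed and merged or moved into
# place by hand, just like the ca-bundle above.
list_rpmnew() {
  find "${1:-/etc}" \( -name '*.rpmnew' -o -name '*.rpmsave' \) 2>/dev/null
}

# e.g. list_rpmnew /etc/pki
```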

Monitoring OVS Tunnels

September 15th, 2014

On TryStack we’ve had ups and downs keeping OVS tunnels happy. I’ve put a few pieces into place to monitor their health and alert me through nagios, so that I’m made aware when we won’t have connectivity to an instance on a particular compute node.

Thanks to ajo for his post on this topic; the solution here is based largely on it.

The overall architecture of this monitoring solution is to create (or reuse) a network and subnet in the admin tenant, manually create neutron ports for each of the compute nodes, create OVS ports on each of the compute nodes that map to the neutron ports, and then send pings across the tunnels using nagios.

To be clear, nagios expects all the ports to already exist. There are a variety of ways this could be set up; for my initial installment I wrote a simple collection of bash scripts to build up these ports. Make sure that you’ve sourced your credential file, then walk through these scripts to establish monitoring checks that run pings through your tunnels.

Let’s start with a network in the admin tenant.

[root@control tunnel_monitor]# cat
neutron net-create tun-mon
neutron subnet-create tun-mon

This script simply creates a new network and allocates a /24 subnet for the monitoring to use. I also left DHCP enabled, which is the default. This creates the qdhcp namespace for us to operate out of. I’m not going to use DHCP for anything other than a target to ping on the network node.

Next create the neutron ports.

[root@control tunnel_monitor]# cat
NETWORK_ID=`neutron net-list --name tun-mon | tail -n +4 | head -n 1 | cut -d ' ' -f 2`
for i in 5 6 7 8 9 10 11 12 13 14 15 18; do
  neutron port-create --name tunmonport${i} --binding:host_id=host${i} $NETWORK_ID
done

This script is hard-coded for TryStack; you’ll have to update the host list to match your architecture. You can see that there are 12 compute nodes. The script creates a neutron port for each compute node, which allocates an IP address and a MAC address, binds each port to its respective compute node, and associates them all with the tun-mon network. Here’s a list of my neutron ports; I truncated the IDs for brevity.

[root@control tunnel_monitor]# neutron port-list | grep tunmon
| {ID} | tunmonport13 | fa:16:3e:ba:4e:7e | {"subnet_id": "{ID}", "ip_address": ""} |
| {ID} | tunmonport10 | fa:16:3e:fa:5c:7f | {"subnet_id": "{ID}", "ip_address": ""} |
| {ID} | tunmonport5 | fa:16:3e:6f:07:6d | {"subnet_id": "{ID}", "ip_address": ""} |
| {ID} | tunmonport11 | fa:16:3e:11:01:b5 | {"subnet_id": "{ID}", "ip_address": ""} |
| {ID} | tunmonport15 | fa:16:3e:cf:92:20 | {"subnet_id": "{ID}", "ip_address": ""} |
| {ID} | tunmonport9 | fa:16:3e:98:60:bb | {"subnet_id": "{ID}", "ip_address": ""} |
| {ID} | tunmonport18 | fa:16:3e:5b:b7:ae | {"subnet_id": "{ID}", "ip_address": ""} |
| {ID} | tunmonport7 | fa:16:3e:6b:5c:1e | {"subnet_id": "{ID}", "ip_address": ""} |
| {ID} | tunmonport12 | fa:16:3e:aa:df:6a | {"subnet_id": "{ID}", "ip_address": ""} |
| {ID} | tunmonport8 | fa:16:3e:83:2c:03 | {"subnet_id": "{ID}", "ip_address": ""} |
| {ID} | tunmonport14 | fa:16:3e:ab:de:a0 | {"subnet_id": "{ID}", "ip_address": ""} |
| {ID} | tunmonport6 | fa:16:3e:fe:14:be | {"subnet_id": "{ID}", "ip_address": ""} |

Next create the OVS ports that map to these neutron ports.

[root@host3 tunnel_monitor]# cat
HOSTS="5 6 7 8 9 10 11 12 13 14 15 18"
if [ ! -z "$1" ]; then
  HOSTS="$1"
fi
for i in $HOSTS; do
  TUNMONPORT=`neutron port-list --name tunmonport${i} | tail -n +4 | head -n 1`
  ID=`echo $TUNMONPORT | cut -d ' ' -f 2`
  MAC=`echo $TUNMONPORT | cut -d ' ' -f 6`
  ssh host${i} "ovs-vsctl -- --may-exist add-port br-int tunmonhost${i} \
    -- set Interface tunmonhost${i} type=internal \
    -- set Interface tunmonhost${i} external-ids:iface-status=active \
    -- set Interface tunmonhost${i} external-ids:attached-mac=${MAC} \
    -- set Interface tunmonhost${i} external-ids:iface-id=${ID} \
  && ip link set dev tunmonhost${i} address ${MAC} \
  && ip addr add`printf '%02d\n' $i`/24 dev tunmonhost${i}"
done

In this script I create a port on all the compute nodes unless I pass a host number to the script; that’s all the initial if conditional does: it checks for an argument and, if one is passed, acts only on that host number. The script then sshes to each of the nodes and adds a port to br-int, specifying the neutron port’s ID and MAC address. Once the OVS port is created, the MAC address is set on the new interface using iproute2, and a static IP address is assigned to the interface, with the last two digits being the host number. I used a static IP address here to remove the dependency on dhclient: if the interface fails to DHCP because dhclient couldn’t get an IP address, then the route for the subnet isn’t created. I decided I would prefer to force the IP address and expect that the route is already there. As an example, let’s look at host5.
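To make the parsing in that script easier to follow, here is the cut and printf logic in isolation, run against a made-up sample row in the shape `neutron port-list` prints (the ID is shortened for illustration):

```shell
# Fields in a port-list row are space-separated between the pipes,
# so field 2 is the port ID and field 6 is the MAC address.
ROW='| 3f2a | tunmonport5 | fa:16:3e:6f:07:6d | {"subnet_id": "..."} |'

ID=$(echo "$ROW" | cut -d ' ' -f 2)
MAC=$(echo "$ROW" | cut -d ' ' -f 6)

# Zero-pad the host number to build the last octet of the static address.
HOST=5
SUFFIX=$(printf '%02d' "$HOST")

echo "ID=$ID MAC=$MAC last-octet=$SUFFIX"
# prints: ID=3f2a MAC=fa:16:3e:6f:07:6d last-octet=05
```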

[root@host5 ~]# ip a s tunmonhost5
3656: tunmonhost5: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
  link/ether fa:16:3e:6f:07:6d brd ff:ff:ff:ff:ff:ff
  inet scope global tunmonhost5
  inet6 fe80::50c7:f6ff:fe52:ea3a/64 scope link
  valid_lft forever preferred_lft forever
[root@host5 ~]# ip r | grep tunmonhost5
dev tunmonhost5 proto kernel scope link src
[root@host5 ~]# ovs-vsctl show
Bridge br-int
  {.... snip ....}
  Port "tunmonhost5"
    tag: 42
    Interface "tunmonhost5"
      type: internal
{ ... snip ... }

The IP address and route are there, and the interface on br-int is there too. Let’s verify the DHCP IP address is available on the network node.

[root@network ~]# ip netns exec qdhcp-629ee7bc-fe28-472a-921b-4f2593a3dbc5 ip a
938: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet scope host lo
  inet6 ::1/128 scope host
  valid_lft forever preferred_lft forever
1467: tap6ebc3409-a7: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
  link/ether fa:16:3e:e4:87:6a brd ff:ff:ff:ff:ff:ff
  inet brd scope global tap6ebc3409-a7
  inet6 fe80::f816:3eff:fee4:876a/64 scope link
  valid_lft forever preferred_lft forever

DHCP is listening there. Let’s ping it manually from host5 to the network node.

[root@host5 ~]# ping -c 3
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.396 ms
64 bytes from icmp_seq=2 ttl=64 time=0.215 ms
64 bytes from icmp_seq=3 ttl=64 time=0.218 ms
--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.215/0.276/0.396/0.085 ms

Why is this ping exciting? Well, consider what’s happening here: we have IP addresses established on each side of the tunnel that connects OVS on the compute node to OVS on the network node. When the ping is issued, it’s routed through the interface we created, tagged as traffic belonging to the tun-mon network, sent through the tunnel, and delivered to the qdhcp network namespace for the tun-mon network on the other side. If the ping succeeds, the tunnel between the network node and the compute node is functioning properly. And if the tunnel works for the tun-mon network, it will work for all neutron networks, because all instance traffic travels through the tunnel in exactly the same way, except for the identifier that associates it with a particular network within the neutron infrastructure.

Now this needs to be put into nagios. Just make sure the check_ping plugin package is installed; yum install it if it’s not.

[root@host5 ~]# rpm -q nagios-plugins-ping

Then add a line to your nrpe.cfg; mine is in /etc/nagios/.

command[check_ovs_tunnel]=/usr/lib64/nagios/plugins/check_ping -H -w 1000.0,25% -c 2000.0,100% -p 5

One obvious improvement would be to put a static address on the DHCP interface alongside the neutron-assigned IP address. I’ve considered adding a .100 address to the interface and changing the address in the nrpe command to that .100 address. That way it wouldn’t matter what address neutron assigns to the DHCP interface; my nagios configs would always be the same. Next, add a service definition for the host to execute the nrpe command.

define service {
  check_command check_nrpe!check_ovs_tunnel
  host_name host5
  service_description OVS tunnel connectivity
  use generic-service
}
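Tying this to the fixed-address idea above: with a hypothetical monitoring subnet of 192.168.250.0/24 (an assumption for illustration, not the subnet from the original post), pinning the check to a static .100 address on the DHCP interface would make every node’s nrpe line identical:

```
command[check_ovs_tunnel]=/usr/lib64/nagios/plugins/check_ping -H 192.168.250.100 -w 1000.0,25% -c 2000.0,100% -p 5
```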

I’ve also considered expanding the nrpe check to be a little smarter, maybe to look for the tunmonhostX interface and report if it’s missing before trying to ping; that would help troubleshooting when things go sour. There are also tunnels established from compute node to compute node that could be monitored. In TryStack’s experience, if each compute node’s connectivity to the network node is working, then the compute-to-compute tunnels have worked too. So it could be a bit overkill to monitor them, though not entirely out of the question. If you did want to monitor compute to compute, it would be as simple as pinging the other IP addresses: now that we have an interface connected to the tun-mon network on each compute node, the traffic will be carried over the appropriate tunnel to the host we’re trying to reach. Here’s an example of pinging hosts 6 through 9 to test connectivity over the compute-to-compute tunnels from host5 to each host in this loop.

[root@host5 ~]# for i in 6 7 8 9; do ping${i} -c 1; done
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=2.01 ms
--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 2.016/2.016/2.016/0.000 ms
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=1.00 ms
--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 1.009/1.009/1.009/0.000 ms
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=1.20 ms
--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 1.207/1.207/1.207/0.000 ms
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.959 ms
--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 0.959/0.959/0.959/0.000 ms

The scripts here have to be executed manually; maybe they could be added to a config management system, but for now manual has worked OK. The only maintenance this needs once it’s set up is to recreate the OVS port on a compute node if the node is rebooted or OVS is reset for some reason. That’s why I added the extra conditional to the script.

Hope this helps soothe your tunnel monitoring woes, if you have any.

OpenStack Summit ATL Voting is live / Red Hat Summit

February 24th, 2014

I have a few talks proposed for the OpenStack summit in Atlanta in May. The community votes on them for acceptance. Please take a few moments and vote.

I’ll also be at Red Hat Summit / DevNation in April in San Francisco. If you’re attending, check out my sessions there too:

DevNation Session:

2:30 p.m.-3:30 p.m.

Red Hat Summit Sessions:

Red Hat Cloud Infrastructure architecture design: Tuesday, April 15, 4:50 pm - 5:50 pm
Building scalable cloud infrastructure using Red Hat Enterprise Linux OpenStack Platform: Wednesday, April 16, 10:40 am - 11:40 am

Building a small Fedora 20 image

February 5th, 2014

In my RDO demo environment post I hinted at building a smaller Fedora disk image to use in the environment.

An easy answer would be to use the Fedora cloud image. That image could be booted up, or cracked open with libguestfs and customized as was shown in that post.

If you’d like to customize one yourself, you can use this process. It’s far from perfect and probably won’t suit every need, but it will result in a disk image just under 300M. Use it as an example to get started.

First you’ll need a kickstart file; start with this one: small-F20.cfg
(thanks to Forrest Taylor, this is derived from the one he sent me).

Edit that file and update the root password. You can generate one like this:

python -c 'import crypt; print(crypt.crypt("My Password", "$6$My Salt"))'

This will generate a sha512 crypt of your password using your provided salt.
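If you’d rather not shell out to python, openssl can produce the same style of hash. This assumes OpenSSL 1.1.1 or newer for the -6 option, and the salt here sticks to crypt’s salt alphabet (letters, digits, . and /):

```shell
# Generate a sha512-crypt ($6$...) hash of a password with a fixed salt.
openssl passwd -6 -salt MySalt 'My Password'
```

Because the salt is fixed, the output is deterministic, which is handy for dropping the same hash into several kickstart files.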

You could also add some of the setup steps from my demo post into the kickstart. In this one the image is being sealed: you can see the network device modification and the udev rule being deleted. The ssh host keys haven’t been generated yet, because the image hasn’t been booted, so it’s not necessary to delete them in the kickstart.
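As a sketch, those sealing steps could be folded into a %post section of the kickstart like this (the exact contents of small-F20.cfg may differ; this just mirrors the sealing commands described here):

```
%post
# seal the image: drop hardware-specific identifiers so clones regenerate them
sed -i '/UUID/d;/HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
rm -f /etc/udev/rules.d/70-persistent-net.rules
%end
```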

I’m going to use the same libvirt environment as the other post. I installed apache and put the kickstart file at /var/www/html/small-F20.cfg, where it will later be referenced by virt-install. This file can live anywhere your installed VM will be able to reach via http, so if you don’t want to do it this way, do it a different way. :)

Once you have the kickstart in place and the root password updated, create a disk image and start the install. I created a small script to do this, since there are so many options.

rm -f small-F20.img
qemu-img create -f qcow2 small-F20.img 8G

sudo virt-install -n small-F20 -r 2048 --vcpus=2 --network bridge=virbr0 \
  --graphics=spice --noautoconsole --noreboot -v \
  -l \
  --disk=path=small-F20.img,format=qcow2 \
  -x "ks="

This will launch a VM and do the install. You can connect the way it tells you to, or open up the terminal window through virt-manager.



Once the install completes, sparsify the image and move the new image into place.

# if /tmp does not have enough space; you can do this to use /var/lib/libvirt/images instead
export TMPDIR=/var/lib/libvirt/images

sudo virt-sparsify --compress small-F20.img small-F20.qcow2
ll small-F20.img small-F20.qcow2
-rw-r--r--. 1 qemu qemu 1.1G Feb 5 10:30 small-F20.img
-rw-r--r--. 1 root root 273M Feb 5 10:52 small-F20.qcow2
mv small-F20.qcow2 small-F20.img

This new image can just be swapped in for the base image from the demo; rebuild the overlays and you’re in business with a sub-300M Fedora 20 image.

The cost to overcome

February 4th, 2014

Imagine yourself on a clear cold evening. The stars are just coming out over a wooded mountainous landscape, blanketed by a covering of snow.
There’s a wide, fairly deep, river flowing at the base of the mountain.

This is the image in my mind when I listen to Overcome by The Digital Age.

I love this image. I grew up visiting my grandparents in Summit, NY. They owned something like 12 acres of land, and on the back side of the property was a lake. I can remember many winters adventuring out into the property in knee-high snow: walking on the frozen lake, playing in the snow, spending time with my family in the wintry outdoors. It wasn’t exactly the landscape I’ve described, but for me it salts the imagery very well.

While taking my son to school this morning we were listening to this song. I found myself trying to explain the symbolism in this song in 4-year-old terms. I think it ended up being more edifying to me. After I dropped him off I continued to consider the lyrics to the song and how they correlate to the two books I’m reading right now: Redemption Accomplished and Applied by John Murray and Not a Fan by Kyle Idleman.

Back to that landscape you just imagined, add an avalanche on the mountain in the distance. An impressive event of snow barreling down the side of a mountain. Destroying anything and everything in its path. No chance of human survival for someone caught in the middle of it. It’s overwhelming to even picture yourself witness to something like this.

God’s love is like an avalanche they sing:

I can hear the roar it’s a mighty sound
Your love is an avalanche
I’m overcome

The avalanche settles at the foot of the mountain, right along the river. For all its power, a river is often a symbol of peace. The sheer volume of water flowing right in front of you could carry you a seemingly endless distance.

The peace of God and the glow of His majesty is like this river they sing:

I’m taken away by a river of peace
on this starlit night
I’m overcome
I’m lost in the glow of Your majesty
oh my God how You captivate
I’m overcome

Looking off as far into the distance as you can see, snow has fallen and covered everything. Nothing has escaped its covering.

As such, our sin has been covered they sing:

like a snowfall that blankets the Earth
my sin has been covered
I’m overcome

Murray begins his book by discussing the necessity and the nature of Christ’s work on the cross: why and how it happened the way it did. In establishing the relationship of the atonement to justification, he states:

The only righteousness conceivable that will meet the requirements of our situation as sinners and meet the requirements of a full and irrevocable justification is the righteousness of Christ. This implies his obedience and therefore his incarnation, death and resurrection.

We often put so much emphasis on the cross. I think what Murray points to here is that an extensive amount of supporting work happened to ensure that his death on the cross would overcome our sin. Further, Murray later associates the fulfillment of the Levitical law with Christ’s work on the cross by establishing him as the priest who offered himself as the sacrifice:

That Christ’s work was to offer himself a sacrifice for sin implies, however, a complementary truth too frequently overlooked. It is that, if Christ offered himself as a sacrifice, he was also a priest. And it was as a priest that he offered himself. He was not offered up by another; he offered himself. This is something that could not be exemplified in the ritual of the Old Testament. The priest did not offer himself and neither did the offering offer itself. But in Christ we have this unique combination that serves to exhibit the uniqueness of his sacrifice, the transcendent character of his priestly office, and the perfection inherent in his priestly offering. It is in virtue of his priestly office and in pursuance of his priestly function that he makes atonement for sin.

The song continues:

the price has been paid
the war is already won
the blood of my savior was shed
He’s overcome
and I’m overcome

He’s overcome and I’m overcome. Christ overcame sin, now we are called to be overcome by him. That’s what Kyle Idleman’s book is all about.

It cost Christ everything to save us. It costs us everything to be saved. It requires us to be overcome by everything that God is and let go of everything that we were.

RDO OpenStack Demo Environment :: How To

January 29th, 2014

I’ve given a “Getting started with OpenStack using RDO” presentation a couple times over the past year. This presentation includes a live demo of installing and configuring RDO on virtual machines on my laptop. The installation runs entirely off my laptop; no internet access is needed. I’ve been asked a couple times to document how to build this demo environment. So here goes… finally :)

I’ve had a couple people try to recreate this environment with different tools. The issue I’ve heard most people run into with other tools is support for multiple virtual nics on the VMs. To use packstack and neutron together with GRE, you have to have 2 nics on each of the VMs.

To clarify, I’ll call the nodes that I’m installing OpenStack on “VMs” and/or “OpenStack nodes”. They represent the bare metal in a real installation. A virtual machine that OpenStack launches will be called an instance, or an OpenStack instance.

One more house keeping item. My blog displays two characters wrong.
1. single quotes turn into a weird back tick, so when you copy and paste sed commands update the tick looking character to a plain single quote.
2. double dash (or double minus) turns into this super long dash character. So when you do commands like the sparsify command that has a –compress param make sure that’s dash dash compress.

The OpenStack Installation:
There will be two nodes. A control / network node and a compute node. In reality the network node should be separated from the control node. This is just a demo environment so the simplicity is beneficial to the amount of time it takes to demo the environment. Each node will have 2 ipaddresses. 192.168.122.x for the “public” address and 192.168.123.x for the private address. Node 1 will be 12{2,3}.101 and node2 will be 12{2,3}.102.

I’m currently running this demo on a Lenovo T530 with 8 cores, 16G of memory, and I run my virtual machines off a Crucial M500 240GB mSATA SSD.

The environment was originally designed and run on Fedora 19, with Fedora 19 disk images and RDO Havana. I’ve just upgraded the demo environment to Fedora 20 and RDO Icehouse, so this post will be F20 / RDO Icehouse. I’ll note differences where I find them, though I don’t expect many, if any. All disk images in this tutorial will be Fedora 20.

Note that the SSD is not necessary, but it does speed up the demo in general quite a bit as the installation of OpenStack is a disk intensive operation.

Building the OpenStack Nodes:
The first step in getting this environment stood up is to create a base disk image. This image will serve 2 purposes.
1. qemu overlay images will be created on top of it for the OpenStack nodes
2. In the demo it’s used as the glance image to import and launch instances off of.

*** UPDATE ***
For a much smaller footprint disk image use this method instead of a basic install.

To build this base image I just use virt-manager in Fedora to do a minimal install of Fedora. Then you have to boot the disk and strip out a couple features that don’t work with OpenStack. Then I “seal” the image to create a sort of template out of it; this just involves stripping out things that are created at boot time. Let’s do the install:

(screenshots: the virt-manager installation steps)

The end result of this image set is a libvirt definition that will be used as the control node, and a disk image that both the control and compute nodes will use as the base image for their qemu overlays.
It’s installed with 4G of RAM, 2 vcpus, and an 8G disk with a minimal install of Fedora 20.

Next thing to do is to fix up the Fedora instance so it’s ready to have OpenStack installed.

Start by removing firewalld and NetworkManager, enabling the network service, and disabling SELinux.
Firewalld and NetworkManager aren’t compatible with OpenStack yet, that I know of. Disabling SELinux isn’t strictly necessary, but this is a demo environment, so I keep it simple. To do this I’ll ssh into the VM now running on my system.

[root@localhost ~]# yum remove -y firewalld NetworkManager
Resolving Dependencies
--> Running transaction check
---> Package NetworkManager.x86_64 1: will be erased
---> Package firewalld.noarch 0:0.3.8-1.fc20 will be erased
--> Finished Dependency Resolution

Dependencies Resolved


NetworkManager.x86_64 1: firewalld.noarch 0:0.3.8-1.fc20


[root@localhost ~]# chkconfig network on
[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
[root@localhost ~]# setenforce 0

Next, install an ssh keypair so that you don’t have to fiddle with passwords in your demo. Generate the keys on your laptop, then copy both the secret key and the pub key over to the VM as id_rsa and its .pub counterpart. This keypair will also be used by packstack to ssh to the nodes to do the openstack installation. The address I connect to is the DHCP address that libvirt’s network gave to my VM. I added the following to my ~/.ssh/config so that I don’t have to manage known hosts or specify the root user every time I log in:

Host 192.168.122.*
    User                    root
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

Here’s installing the keys.

dradez@tirreno:~➤ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/dradez/.ssh/id_rsa): /home/dradez/.ssh/rdo-demo-id_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/dradez/.ssh/rdo-demo-id_rsa.
Your public key has been saved in /home/dradez/.ssh/

dradez@tirreno:~➤ ssh-copy-id -i ~/.ssh/rdo-demo-id_rsa root@
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@’s password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@'"
and check to make sure that only the key(s) you wanted were added.

dradez@tirreno:~➤ scp ~/.ssh/rdo-demo-id_rsa root@
rdo-demo-id_rsa 100% 1679 1.6KB/s 00:00
dradez@tirreno:~➤ scp ~/.ssh/ root@ 100% 411 0.4KB/s 00:00
dradez@tirreno:~➤ ssh root@
Last login: Mon Jan 27 10:36:11 2014 from
[root@localhost ~]#


Before you shut down the image and turn it into a template for your overlay images, you have to seal it. To do this, remove the ssh host keys, the udev persistent-net rules, and the device-specific info in /etc/sysconfig/network-scripts/ifcfg-eth0. If you want to do a yum update you can. Go ahead and at least install net-tools (so facter will find all your IP addresses properly) and whatever else you think you may need and don’t want to install over and over. This will also import the Fedora update keys for future use. “yum clean all” dumps the yum cache so it’s not part of the base image; there are other caches like this that could be cleaned too, but I won’t get into that here.

[root@localhost ~]# rm -rf /etc/ssh/ssh_host_rsa_key*
[root@localhost ~]# rm -f /etc/ssh/moduli
[root@localhost ~]# sed -i '/UUID/d' /etc/sysconfig/network-scripts/ifcfg-eth0
[root@localhost ~]# sed -i '/HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
[root@localhost ~]# yum update -y
[root@localhost ~]# yum install -y net-tools vim telnet
[root@localhost ~]# yum clean all

My VM didn’t have the persistent-net file. Look in /etc/udev/rules.d/ for a file named something like 70-persistent-net.rules and delete it if it exists.

Now shut down your VM and sparsify its disk image. This next step is optional: when the file was created it was allocated the whole 8G, and when it’s sparsified it’s reduced to the size of what’s actually being used. (If you get tmp dir warnings, google “virt-sparsify TMPDIR” and look at the options related to directory paths and tmpfs that you can use to work around them.) It’s nice to make this file small because it won’t be directly written to again, and you free up lots of space doing this. You could also write a small kickstart to make this even smaller; I’ll post one if I get around to writing one.

[root@tirreno images]# virt-sparsify --compress RDO-F20-control-node.img RDO-F20-x86_64.qcow

Create overlay file in /tmp to protect source disk …

Examine source disk …
Clearing Linux swap on /dev/fedora/swap …
Fill free space in /dev/sda1 with zero …
Copy to destination and make sparse …

Sparsify operation completed with no errors. Before deleting the old
disk, carefully check that the target disk boots and works correctly.
[root@tirreno images]# ll -h
-rw-------. 1 qemu qemu 8.1G Jan 27 10:52 RDO-F20-control-node.img
-rw-r--r--. 1 root root 917M Jan 27 10:59 RDO-F20-x86_64.qcow

You can see the original file reports 8.1G, while the sparsified file is only 917M. Now test that you can use your sparsified image with a qemu overlay file.

[root@tirreno images]# mv RDO-F20-control-node.img F20_orig.img
[root@tirreno images]# qemu-img create -b `pwd`/RDO-F20-x86_64.qcow -f qcow2 RDO-F20-control-node.qcow2
Formatting 'RDO-F20-control-node.qcow2', fmt=qcow2 size=8589934592 backing_file='/var/lib/libvirt/images/RDO-F20-x86_64.qcow' encryption=off cluster_size=65536 lazy_refcounts=off
[root@tirreno images]# ll -h RDO-F20-control-node.qcow2
-rw-r--r--. 1 root root 193K Jan 27 11:07 RDO-F20-control-node.qcow2

I’ve created my overlay with a qcow2 extension instead of an img extension, so the libvirt definition has to be updated. I’ll edit /etc/libvirt/qemu/RDO-F20-control-node.xml and change the path from RDO-F20-control-node.img to RDO-F20-control-node.qcow2. Restart libvirtd, boot up the VM, and make sure it’s happy. Your IP may change because we stripped the UUID and HWADDR out of ifcfg-eth0.

What’s happened here is that you’ve booted your VM off an overlay image. Any changes you make, in particular first-boot generated things like your host ssh keys and udev rules, are written to the overlay disk and not the template disk that you first installed. This means that you can delete that overlay disk, recreate it, and start fresh in a matter of seconds, because the base template image is untouched. It also means that you can create more overlay images for more VMs without having to do more installs. So let’s create the definition for the compute node now, using that same template image.
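That delete-and-recreate workflow can be captured in a small helper function. This is a sketch of my own, not a script from this post:

```shell
# rebuild_overlay BASE OVERLAY: throw away an overlay image and recreate
# it on top of BASE, giving the VM a fresh disk on its next boot. The
# base template image itself is never written to.
rebuild_overlay() {
  local base=$1 overlay=$2
  rm -f "$overlay"
  qemu-img create -b "$base" -f qcow2 "$overlay"
}

# e.g. rebuild_overlay /var/lib/libvirt/images/RDO-F20-x86_64.qcow \
#                      /var/lib/libvirt/images/RDO-F20-control-node.qcow2
```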

[root@tirreno images]# qemu-img create -b `pwd`/RDO-F20-x86_64.qcow -f qcow2 RDO-F20-compute-node.qcow2
Formatting 'RDO-F20-compute-node.qcow2', fmt=qcow2 size=8589934592 backing_file='/var/lib/libvirt/images/RDO-F20-x86_64.qcow' encryption=off cluster_size=65536 lazy_refcounts=off
[root@tirreno images]# ll -h RDO-F20-compute-node.qcow2
-rw-r--r--. 1 root root 193K Jan 27 11:07 RDO-F20-compute-node.qcow2

Now that the overlay image is there, import it into libvirt. I’ll use virt-manager again.



At this point I have 2 VM definitions, one for the control node and one for the compute node, both backed by a qemu overlay image. The final preparation for these VMs before I move to the openstack installation is to add a second nic to each of them and to attach a materials iso to them. I’ll first build the iso, which will initially only have the template qcow2 image in it. More will be added to it later. Then I’ll show an example of adding the nic and iso to one of the nodes. Be sure to add the nic and iso to BOTH of your VMs. I’m also going to move my disk images into a directory that my user owns, so the demo environment can be rebuilt without me having to be the root user. To do this, just move the images that have already been created to a directory you own and update the libvirt xml files to point to their new path. I put mine in /var/lib/libvirt/dradez, which is owned by my user and lives on my SSD. Don’t forget to restart libvirt when you update the xml files. I’m not going to show the details of this; you’ll just see my commands start to show my user instead of root.
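A minimal sketch of that relocation, with assumed paths and a stand-in file in place of the real domain XML (on a real host you would move the actual image as root, edit the file under /etc/libvirt/qemu, and restart libvirtd):

```shell
# Sketch only: paths and filenames are assumptions, and the XML here is a
# throwaway stand-in so the substitution can be demonstrated safely.
OLD=/var/lib/libvirt/images
NEW=/var/lib/libvirt/dradez
xml=$(mktemp)                                       # stand-in for the domain XML
printf '<source file="%s/RDO-F20-control-node.qcow2"/>\n' "$OLD" > "$xml"
# mv "$OLD/RDO-F20-control-node.qcow2" "$NEW"/      # move the image (as root)
sed -i "s|$OLD|$NEW|" "$xml"                        # point the XML at the new path
cat "$xml"
# service libvirtd restart                          # reload the edited definition
```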

Next thing to do is to get the rebuild script, and go ahead and grab the setup script and the packstack file RDO-F20-packstack.txt too. I use these to rebuild and seed the demo environment so that I can log in and start installing without much interaction. We’ll walk through each of these scripts. Then create a directory for your iso content and build the iso. Also go ahead and run the rebuild script; it will create a cinder volumes disk image for you.

dradez@tirreno:~/OpenStackLab➤ mkdir RDO-F20-icehouse-iso
dradez@tirreno:~/OpenStackLab➤ ./ iso
dradez@tirreno:~/OpenStackLab➤ ./ all
Rebuilding Control Node
Formatting 'RDO-F20-control-node.qcow2', fmt=qcow2 size=8589934592 backing_file='/home/dradez/OpenStackLab/RDO-F20-x86_64.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off
Formatting 'RDO-F20-cinder-volumes.qcow2', fmt=qcow2 size=22548578304 encryption=off cluster_size=65536 lazy_refcounts=off
Rebuilding Compute Node
Formatting 'RDO-F20-compute-node.qcow2', fmt=qcow2 size=8589934592 backing_file='/home/dradez/OpenStackLab/RDO-F20-x86_64.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off

The second command generates RDO-F20-icehouse.iso. The third blows away your overlay images, rebuilds them, and recreates a cinder-volumes disk image. Next add the iso and the second nic to both VMs, and the cinder volumes device to the control node. If you get an error message about the iso already being in use when you add it to the second VM, that’s ok; go ahead and add it anyway. It may also be worth checking what driver your existing nic is using. virt-manager selected “Hypervisor Default” for me, but this gave me a weirdly named network device. I updated the device to virtio to match my initial nics, and then my second device got named eth1 to match the eth0 device.
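For reference, the nic stanza in the domain XML ends up looking something like this once the virtio model is selected (the ‘default’ network name here is an assumption, not from my setup):

```xml
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>
```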

Now that all the devices are added, run the setup script. This will upload the packstack.txt file to the control node and try to pvcreate the cinder-volumes disk. You’ll notice the cinder-volumes setup fails. I’ll try to poke at this eventually and update this post if I figure it out. It’s probably something simple. In the meantime this doesn’t hurt anything; packstack will just create a cinder-volumes volume group on the controller’s disk image instead of using the attached disk image. If you look at the setup script you’ll also see that I used to upload the ssh keys to the nodes during this process too. It’s kinda up to you whether you’d rather have them already there or keep your base image more vanilla and upload them when you run setup. Just for fun let’s use libguestfs to crack open the base image and pull the keys off of it. Again, this is optional. If you do this, then uncomment the ssh key setup in the “all” section of the setup script so that the keys get uploaded each time you run setup.

dradez@tirreno:~/OpenStackLab➤ mkdir tmp
dradez@tirreno:~/OpenStackLab➤ sudo chown dradez:dradez RDO-F20-x86_64.qcow2
[sudo] password for dradez:
dradez@tirreno:~/OpenStackLab➤ guestmount -a RDO-F20-x86_64.qcow2 -m /dev/fedora/root tmp
dradez@tirreno:~/OpenStackLab➤ rm -rf tmp/root/.ssh
dradez@tirreno:~/OpenStackLab➤ fusermount -uz tmp
dradez@tirreno:~/OpenStackLab➤ rmdir tmp
dradez@tirreno:~/OpenStackLab➤ ./ all
Rebuilding Control Node
Formatting 'RDO-F20-control-node.qcow2', fmt=qcow2 size=8589934592 backing_file='/home/dradez/OpenStackLab/RDO-F20-x86_64.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off
Formatting 'RDO-F20-cinder-volumes.qcow2', fmt=qcow2 size=22548578304 encryption=off cluster_size=65536 lazy_refcounts=off
Rebuilding Compute Node
Formatting 'RDO-F20-compute-node.qcow2', fmt=qcow2 size=8589934592 backing_file='/home/dradez/OpenStackLab/RDO-F20-x86_64.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off

Note the rebuild: the base image changed, so you want to recreate the overlays. Now uncomment the ssh key setup and run setup; you’ll have to enter your passwords.

dradez@tirreno:~/OpenStackLab➤ ./ all
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '' (RSA) to the list of known hosts.
root@'s password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh ''"
and check to make sure that only the key(s) you wanted were added.

Warning: Permanently added '' (RSA) to the list of known hosts.
rdo-demo-id_rsa 100% 1679 1.6KB/s 00:00
Warning: Permanently added '' (RSA) to the list of known hosts. 100% 411 0.4KB/s 00:00
Warning: Permanently added '' (RSA) to the list of known hosts.
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '' (RSA) to the list of known hosts.
root@'s password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh ''"
and check to make sure that only the key(s) you wanted were added.

Warning: Permanently added '' (RSA) to the list of known hosts.
rdo-demo-id_rsa 100% 1679 1.6KB/s 00:00
Warning: Permanently added '' (RSA) to the list of known hosts. 100% 411 0.4KB/s 00:00
Warning: Permanently added '' (RSA) to the list of known hosts.
Warning: Permanently added '' (RSA) to the list of known hosts.
packstack.txt 100% 12KB 11.8KB/s 00:00
Warning: Permanently added '' (RSA) to the list of known hosts.
Device /dev/vdb not found (or ignored by filtering).
Warning: Permanently added '' (RSA) to the list of known hosts.
Physical volume /dev/vdb not found
Device /dev/vdb not found (or ignored by filtering).
Unable to add physical volume '/dev/vdb' to volume group 'cinder-volumes'.

If you chose not to remove the keys and left the key setup commented out, your setup run will look like this:

dradez@tirreno:~/OpenStackLab➤ ./ all
Warning: Permanently added '' (RSA) to the list of known hosts.
packstack.txt 100% 12KB 11.8KB/s 00:00
Warning: Permanently added '' (RSA) to the list of known hosts.
Device /dev/vdb not found (or ignored by filtering).
Warning: Permanently added '' (RSA) to the list of known hosts.
Physical volume /dev/vdb not found
Device /dev/vdb not found (or ignored by filtering).
Unable to add physical volume '/dev/vdb' to volume group 'cinder-volumes'.

At this point the VMs are ready to install openstack. To speed up the installation and remove the network dependency, we’ll build a yum repository on our iso file to do the installation from. To do that, use this list of files, RDO-F20-iso-packages, to download the packages you need, then rebuild the iso including the yum repository.

dradez@tirreno:~/OpenStackLab➤ mkdir RDO-F20-icehouse-iso/yum.repo
dradez@tirreno:~/OpenStackLab➤ cd RDO-F20-icehouse-iso/yum.repo
dradez@tirreno:~/yum.repo➤ cat RDO-F20-iso-packages | xargs yumdownloader
dradez@tirreno:~/yum.repo➤ createrepo .
dradez@tirreno:~/yum.repo➤ cd ..
dradez@tirreno:~/OpenStackLab➤ ./ iso
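On the nodes, a repo file along these lines would point yum at the iso repository (the mount point, repo id, and filename here are assumptions; adjust them to wherever you mount the iso):

```ini
# /etc/yum.repos.d/rdo-iso.repo -- hypothetical example
[rdo-iso]
name=RDO Icehouse local iso repo
baseurl=file:///mnt/iso/yum.repo
enabled=1
gpgcheck=0
```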

All that’s left to do now is reset the OpenStack nodes and install OpenStack on them. So make sure that the VMs are off, rebuild the nodes, start them up, and run the setup script. Once you’ve got everything in order, fire up packstack using the packstack.txt answer file and you should end up with a freshly installed 2 node GRE tunneled OpenStack RDO demo environment.

[root@control ~]# yum install openstack-packstack
[root@control ~]# time packstack --answer-file packstack.txt

**** Installation completed successfully ******
Additional information:
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client host. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* Because of the kernel update the host requires reboot.
* Because of the kernel update the host requires reboot.
* Because of the kernel update the host requires reboot.
* The installation log file is available at: /var/tmp/packstack/20140129-000932-ZUIa4T/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20140129-000932-ZUIa4T/manifests

real 16m13.467s
user 0m1.647s
sys 0m1.317s

From here use the slide decks I used last November to walk through using this environment. There’s not a lot that’s different between these two slide decks listed here. One was presented at the Philly, CT and NYC OpenStack Meetups and the other was presented at Red Hat Forum in Tokyo last November. They’re based on Havana, but all the core stuff that’s in them should be the same between Havana and Icehouse.

When you get to the glance section use your base Fedora 20 image as the glance image to upload.

OpenStack :: Logstash Elasticsearch Kibana (on apache)

January 16th, 2014

OpenStack has lots of moving parts. One of the challenges in administering a cluster is sifting through all the logs on the multiple nodes that make up a cluster. I help to administer TryStack and am always looking for tools to make managing this cluster go smoother.

I was introduced yesterday to Logstash + elasticsearch + Kibana. I’m not really sure which of these names to call what I was introduced to; I guess all of them. The idea is that all the logs from all the nodes are sent to a central location so that you can filter through them. I think there’s much more advanced usage of this trio; I’m still figuring out what to do with it beyond basic log searching.

My understanding is that Logstash helps to gather the logs, elasticsearch indexes them and kibana is the webui that queries elasticsearch. Here’s a screenshot of what I ended up with.

Logstash + Elasticsearch + Kibana


My co-worker Kambiz (kam-beez) pointed me to a couple links that he used to set up an instance of this, and he more-or-less followed this post:

The main modifications to what he ended up with were to pull the logstash rpm from the logstash Jenkins instance:
Then to collect logs using this link’s method:
And finally there were a couple config changes to what the original post provided to get logstash running.

I already had apache running on the TryStack utility hosts and didn’t think it was necessary to add nginx to the mix, which is what the post uses, so I figured it may be helpful to document the process I went through to get this running on apache. This install has been very useful thus far and I’m glad I have it collecting logs.

First, get the rpms from and the logstash jenkins instance. I used these two links like this:

[root@host1 ~]# yum install
[root@host1 ~]# yum install

Also get a copy of the latest kibana stuff; this is just html and javascript, so I don’t think there is an rpm afaict. I unpacked it and moved it to /var/www.

[root@host1 ~]# wget
[root@host1 ~]# tar xzf kibana-latest.tar.gz
[root@host1 ~]# mv kibana-latest /var/www
[root@host1 ~]# restorecon -R /var/www/kibana-latest

Make sure that apache is installed too; I already had it installed from my Foreman and Nagios instances running on this server. Now let’s start to configure this stuff. Start with Kibana: edit /var/www/kibana-latest/config.js and update the elasticsearch: line:

- elasticsearch: "http://"+window.location.hostname+":9200",
+ elasticsearch: "",

Note that the 9200 in the config.js file is dropped and replaced with /elasticsearch. When you fire up kibana in apache it will try to connect directly to elasticsearch on 9200. To avoid having to punch extra holes in the firewall we’ll set up a proxypass in apache to pass the traffic from /elasticsearch to localhost:9200. We’ll configure apache once we finish kibana.

Before we get to apache back up the kibana-latest/app/dashboards/default.json file (if you want to) and replace it with my copy:

[root@host1 ~]# cd /var/www/kibana-latest/app/dashboards
[root@host1 dashboards]# cp default.json default.json.backup
[root@host1 dashboards]# wget

Edit that file to have a title to your liking; I used “TryStack :: OpenStack LogStash Search”

 - "title": "TryStack :: OpenStack LogStash Search",
 + "title": "Your ingenious title here",

The default.json file is a definition of what panels to put on your default view in kibana. I modified the one referenced on the other blog post, that’s why I gave you a new link instead of using the one on that post. There were a couple redundant panels that I consolidated. Also the timespan it referenced by default was old and static so I changed it to show the last hour by default.

So let’s add that apache config now. Create /etc/httpd/conf.d/elasticsearch.conf. You could call this whatever.conf if you wanted to. I put both my elasticsearch proxy pass and my kibana alias in this file like this:

ProxyPass /elasticsearch http://localhost:9200
ProxyPassReverse /elasticsearch http://localhost:9200
Alias /kibana /var/www/kibana-latest

<Location /elasticsearch>
    Order allow,deny
    Allow from all
</Location>

For this to work you’ll need mod_proxy and mod_proxy_http, and if you’re using selinux you’ll need the httpd_can_network_connect bool turned on. Google those if you’re not sure how to set them up; there’s lots of docs out there about them. Finally let’s configure logstash. First edit /etc/sysconfig/logstash and set START to true instead of false. The service won’t start if you don’t.

Next create /etc/logstash/conf.d/logstash.conf with this content:

input {
  syslog {
    type => syslog
    port => 5544
  }
}

filter {
  mutate {
    add_field => [ "hostip", "%{host}" ]
  }
}

output {
  elasticsearch {
    host => "localhost"
  }
}

Last, open up your firewall to allow your hosts to send rsyslog messages to port 5544. I added an iptables rule to /etc/sysconfig/iptables and restarted the firewall (note: the minus in this rule is intended to be part of the line and does not indicate you should remove it):

-A INPUT -i em1 -m state --state NEW -m tcp -p tcp --dport 5544 -j ACCEPT

This rule listens on the em1 interface, my internal network. Easiest way to do this for your host is edit that file, copy the rule for port 22 and update the duplicated line to accept port 5544 instead of 22. Then restart iptables.

*** IMPORTANT *** there are security implications to opening this port. Please do not open this port to the world and allow anyone to pollute your logs. I’ve opened mine up only to my internal network for my cluster. You should also restrict traffic so that only the hosts you expect to get logs from can connect to this port.

Finally fire it all up:

[root@host1 ~]# service elasticsearch start
[root@host1 ~]# service logstash start
[root@host1 ~]# service httpd start

This should give you a pretty uninteresting kibana interface. There won’t be any logs in it yet. The key here is to watch the top of the page and make sure that there isn’t a message that kibana can’t connect to elasticsearch. If it can’t, visit your /elasticsearch URL directly and make sure that you get a 200 back.

To populate with logs you could use the logstash client, but the logstash cookbook post referenced above suggests it’s a bit heavy weight. I’ve had really good results thus far just having rsyslog send over the logs. To do that, on each of the hosts that you want to aggregate logs you’ll need to create the file /etc/rsyslog.d/logstash.conf with this content:

*.* @@your_logstash_host:5544

This will send ALL LOGS from that host to logstash for indexing. Google rsyslog if you would like to find out how not to send all logs to logstash.
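For example, standard rsyslog facility selectors let you forward only chosen facilities instead of everything (the facilities picked here are just an illustration):

```text
# forward only mail and cron messages instead of *.*
mail.*;cron.* @@your_logstash_host:5544
```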

Once you start to see logs flow into the web interface you can change the time span in the upper right hand corner. You can query your logs. Try putting a hostname or ip into the query box at the top. You can use wildcards like *error* to find errors. You can layer filters. Try adding a filter for a host in the filter box just under the query box. Then add another one for *error* and you’ll get the errors for just that host within the timespan you’ve chosen.

Hope this helps you track down issues. I immediately found I had a rogue iscsi session on one of my compute nodes and was able to put it out of its misery. :)

*** Update Jan 20 ***

I noticed that not all the OpenStack logs were showing up in my logstash searches. Turns out you can toggle OpenStack’s use of syslog. My puppet configs turn it off by default, so I had to turn it on on all my hosts. This boils down to setting ‘use_syslog = true’ in each component’s conf file. Here’s a link that talks more about it:

Adventures with Nagios and GlusterFS (monitoring and self-healing)

January 3rd, 2014

We recently added GlusterFS as the storage backend for glance and cinder on

I’ve been working on setting up nagios monitoring for the cluster recently, and I finally have gotten around to spending some time on making sure that Gluster is behaving. I also knew that I had had some sync issues with Gluster, but I hadn’t spent much time on them because all my content seemed to be ok. These sync issues were also fixed in the process, so that my nagios checks all came back happy.

First a bit of arch: we have 3 hosts, each with a bunch of 550G drives in them. For now the gluster setup is version 3.4 and has 3 peers with 2 bricks per peer. The one volume is configured with 3 replicas (I think I’m saying that correctly).

An initial google showed there were a couple options of nagios scripts to monitor the cluster. I started with one which pointed to a host that supposedly housed the script, but that host just gives me an apache test page. Figuring it was git I tried the same user name at github: Bingo.

Unfortunately it didn’t work so well; no matter what I tried I could only get something to the effect of:

[root@host13 ~]# sh
Host unreachable
cat: /tmp/ No such file or directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec … or kill -l [sigspec]

Turns out that this was for older versions of GlusterFS and won’t work with gluster 3.4.

next I tried this thread:

This actually worked out of the box and reminded me that I had sync issues when it reported that I had some files out of sync. I initially called it like this:

[root@host13 ~]# sh
CRITICAL peers: host14/ host15/ volumes: trystack/21 unsynchronized entries

I scanned through my google results to make sure there wasn’t anything else to peek at before I started moving forward with this and came across an updated version of this script that was posted on the nagios exchange.

After installing a couple dependencies (I installed the nagios-plugins rpm for the plugin and the bc rpm) I got these results:

[root@host13 export]# /usr/lib64/nagios/plugins/check_glusterfs -v trystack -n 3
/usr/lib64/nagios/plugins/check_gluster: line 101: -2: substring expression < 0
WARNING: 15 unsynched entries; found 1 bricks, expected 3

unsynched… heh, I want to spell that unsynced, so correct that and fix line 101.
The syntax used on line 101 requires bash 4.2, and I’m running 4.1 on RHEL 6.5, so I’ll update it to a 4.1-compatible syntax.
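A quick illustration of the portable suffix-removal form with a sample value (this is just the shell feature in isolation, not the plugin itself):

```shell
# ${var:0:-2} (negative length) needs bash >= 4.2;
# suffix removal with ${var%GB} works on bash 4.1 too.
freeunit="540GB"
free=${freeunit%GB}   # strip the trailing "GB"
echo "$free"          # prints 540
```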

[root@host1 files]# diff -u check_glusterfs_orig check_glusterfs
--- check_glusterfs_orig	2014-01-03 13:24:12.020577771 -0800
+++ check_glusterfs	2014-01-03 11:22:04.943621593 -0800
@@ -81,7 +81,7 @@
 if [ "$heal" -gt 0 ]; then
-	errors=("${errors[@]}" "$heal unsynched entries")
+	errors=("${errors[@]}" "$heal unsynced entries")

 # get volume status
@@ -98,7 +98,8 @@
 		if [ "${key}" = "Disk Space Free" ]; then
-			free=${freeunit:0:-2}
+			free=${freeunit%'GB'}
 			if [ "$unit" != "GB" ]; then
 				Exit UNKNOWN "unknown disk space size $freeunit"

[root@host13 export]# /usr/lib64/nagios/plugins/check_glusterfs -v trystack -n 3
WARNING: 32 unsynced entries

That gets me a bit closer. Now to add this to nagios and figure out the unsynced entries. The nagios exchange page offers some sudo configs to give nagios privileges to run the gluster commands. So next I made the sudo updates, added a service check to the nagios_service.cfg file for each host, and added an nrpe entry to each host’s nrpe.cfg. I actually did this in puppet, not directly in the files, but here’s the result in the nagios files:


define service {
        check_command                  check_nrpe!check_glusterfs
        service_description            Gluster Server Health Check
        use                            generic-service
}


command[check_glusterfs]=/usr/lib64/nagios/plugins/check_glusterfs -v trystack -n 3

When nagios ran the check I got the error “No Bricks Found”, running the nrpe command from my nagios host confirms this:

[root@host1 trystack]# /usr/lib64/nagios/plugins/check_nrpe -H -c check_glusterfs
CRITICAL: no bricks found

I wasted a good bit of time trying to figure this out. End result: turns out that the note on the nagios exchange page for this plugin didn’t address nrpe, it only referenced the nagios user. I had put my sudo configs in place using the user nagios, but when nrpe runs it runs as the user nrpe. So I updated my sudoers.d file:

[root@host13 export]# cat /etc/sudoers.d/nrpe
Defaults:nrpe !requiretty
nrpe ALL=(root) NOPASSWD:/usr/sbin/gluster volume status [[\:graph\:]]* detail,/usr/sbin/gluster volume heal [[\:graph\:]]* info

So now lets try and rerun the nrpe command from the nagios host to make sure it’s happy too:

[root@host1 trystack]# /usr/lib64/nagios/plugins/check_nrpe -H -c check_glusterfs
WARNING: 32 unsynced entries

That looks better. On to figure out the sync issues.

I can’t say that I understand exactly what’s going on under the covers with gluster. I can tell you there are two places you can work with on each brick to sort out your sync issues: the content you see on the brick and the .gluster directory. If you’re careful about it you can fix things by deleting content directly off the bricks and waiting for gluster to self-heal. Here’s what I did.

The script I just installed ran the volume heal info command to report sync issues, so I ran that by hand to see what it spits out:

[root@host13 export]# gluster volume heal trystack info
Gathering Heal info on volume trystack has been successful

Brick host13:/export/sdb1
Number of entries: 1

Brick host14:/export/sdb1
Number of entries: 1

Brick host15:/export/sdb1
Number of entries: 1

Brick host13:/export/sdc1
Number of entries: 4

Brick host14:/export/sdc1
Number of entries: 4

Brick host15:/export/sdc1
Number of entries: 4

I googled for a bit and found a couple things that referred to the brick content and each brick’s .gluster directory, as I just mentioned. It turns out that to store the content there’s a bunch of hard links that connect the content you see in the bricks with the content you see in the bricks’ .gluster directories. The logs suggest that you delete all but the version of the file you want to fix the sync, but say nothing about this .gluster directory. It turns out that if you delete the content and the .gluster directory directly from the brick, then gluster will rebuild them as part of its self-heal process. I treated my host13 as the copy to rebuild from and hosts 14 and 15 as those to rebuild. So here we go:

I have no idea if this is recommended practice.

[root@host14 export]# cd sdb1
[root@host14 sdb1]# rm -rf *
[root@host14 sdb1]# rm -rf .gluster
[root@host14 sdb1]# cd ../sdc1
[root@host14 sdc1]# rm -rf *
[root@host14 sdc1]# rm -rf .gluster
[root@host15 export]# cd sdb1
[root@host15 sdb1]# rm -rf *
[root@host15 sdb1]# rm -rf .gluster
[root@host15 sdb1]# cd ../sdc1
[root@host15 sdc1]# rm -rf *
[root@host15 sdc1]# rm -rf .gluster
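As an aside, the hard-link relationship described above is easy to demonstrate with plain files in a scratch directory (a toy model, not a real brick; the file names are made up):

```shell
# Toy model of a brick: the visible file and the entry under the
# .gluster directory are hard links to the same inode.
d=$(mktemp -d)
mkdir -p "$d/brick/.gluster"
echo data > "$d/brick/file.img"
# create a second hard link, like gluster keeps under .gluster
ln "$d/brick/file.img" "$d/brick/.gluster/0a1b2c"
stat -c %h "$d/brick/file.img"   # link count is now 2
```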

After a little while (It takes time to self-heal) my heal info command looked like this:

[root@host13 sdb1]# gluster volume heal trystack info

Gathering Heal info on volume trystack has been successful

Brick host13:/export/sdb1
Number of entries: 1

Brick host14:/export/sdb1
Number of entries: 0

Brick host15:/export/sdb1
Number of entries: 0

Brick host13:/export/sdc1
Number of entries: 1

Brick host14:/export/sdc1
Number of entries: 0

Brick host15:/export/sdc1
Number of entries: 0

That looks a lot better, but there are still those weird root entries that say they’re out of sync. A quick scan over each brick’s content across the 3 hosts and it all looks like the content matches. So I went ahead and destroyed my host13 bricks’ content:

[root@host13 export]# cd sdb1
[root@host13 sdb1]# rm -rf *
[root@host13 sdb1]# rm -rf .gluster
[root@host13 sdb1]# cd ../sdc1
[root@host13 sdc1]# rm -rf *
[root@host13 sdc1]# rm -rf .gluster

A little more time passes and eventually my nagios check starts reporting happiness:

[root@host1 trystack]# /usr/lib64/nagios/plugins/check_nrpe -H -c check_glusterfs
OK: 6 bricks; free space 540GB

So in summary… GlusterFS is pretty cool stuff so far. I’m not sure what I did was sanctioned but it seemed to work. Nrpe checks happen as the nrpe user. Hope this helps save someone some time in the future.

I have more work to do; there are thresholds you can add to the nagios command to alert you when you’re running out of space, and the “-n 3” is a brick count, which I’m not sure yet how it’s supposed to fit in. I have 6 bricks and I used a 3 and didn’t get any complaints.

Tomorrow is just another day :)

TryStack Havana configuration management

December 4th, 2013

To manage configuration on TryStack we use foreman, which uses puppet under the covers.

TryStack is a public OpenStack cloud that anyone with a Facebook account can get access to and use to try out OpenStack. Last year Red Hat donated RHEL subscriptions for the x86_64 cluster and committed my team’s time to maintain this cluster. We’re currently in the process of upgrading the cluster to RDO Havana, and we are backing glance and cinder with GlusterFS.

In our TryStack deployment Foreman mainly supplies a puppet master, configuration key value pairs (foreman global variables) and the host groups which assign a role to a node in the cluster. Right now there are two host groups, more will come as we expand monitoring and storage.


Havana Control Node

  • trystack
  • trystack::control
  • trystack::swift_proxy

Havana Compute node

  • trystack
  • trystack::compute


The two host groups are currently “Havana Control Node” and “Havana Compute node”. Each of these host groups includes just a couple puppet classes, as listed above. The number of these classes has been deliberately kept low in each host group. The complexity is wired together in the trystack puppet module, which has now been posted in its current form to github:

This puppet module consumes two things: 1. variables (foreman global variables) 2. the puppet modules that do the OpenStack configuration.

First the puppet modules: in the RDO package set there is a package named openstack-puppet-modules. We’ve simply used the puppet modules that this package provides to populate the puppet module path, along with the trystack module.

Second the variables. There’s a small script included in the puppet module repo. This script simply uses the foreman api to read a config file and populate these variables into foreman. You can use this script by following these steps:

  1. change the 0 to 1 on the line “if 0” by the comment “# generate”
  2. run ‘python’; this will generate a trystack.cfg file with empty values
  3. switch the 1 back to a 0
  4. edit the trystack.cfg file with appropriate values
  5. edit the user and password and url around line 40 to point to foreman
  6. run ‘python’

You can update the cfg file and run the script over and over, and it will update your config values for you. I’ve also been made aware of hammer, which is a cli client for the foreman api; though I’ve not used it, a few of my teammates have. Do a ‘yum search hammer’ on the foreman yum repo to find this package.

To summarize this, we have 5 pieces to this puzzle: Foreman, Foreman Host Groups, Foreman Global Variables, TryStack Puppet Modules, RDO openstack-puppet-modules

Once they are all in place, we install a host, add the puppet client, have the agent check in, sign its cert, assign the host to a host group, and rerun the puppet agent to configure the node.

Red Hat:
Unrelated Hammer: