OpenStack Summit ATL Voting is live / Red Hat Summit

February 24th, 2014

I have a few talks proposed for the OpenStack summit in Atlanta in May. The community votes on them for acceptance. Please take a few moments and vote.

I’ll also be at Red Hat Summit / DevNation in April in San Francisco. If you’re attending, check out my sessions there too:

DevNation Session:

2:30 p.m.-3:30 p.m.

Red Hat Summit Sessions:

Session: Red Hat Cloud Infrastructure architecture design
Date/Time: Tuesday, April 15, 4:50 pm – 5:50 pm

Session: Building scalable cloud infrastructure using Red Hat Enterprise Linux OpenStack Platform
Date/Time: Wednesday, April 16, 10:40 am – 11:40 am

Building a small Fedora 20 image

February 5th, 2014

In my RDO demo environment post I hinted at building a smaller Fedora disk image to use in the environment.

An easy answer to this would be to use the Fedora cloud image from http://fedoraproject.org/en/get-fedora#clouds
This image could be booted up, or cracked open with libguestfs and customized as was shown in the post.

If you’d like to customize one yourself you can use this process. It’s far from perfect and probably won’t suit every need, but it will result in a disk image just under 300M. Use it as an example to get started.

First you’ll need a kickstart file, start with this one: small-F20.cfg
(thanks Forrest Taylor, this is derived from the one you sent me.)

Edit that file and update the root password. You can generate one like this:

python -c 'import crypt; print(crypt.crypt("My Password", "$6$My Salt"))'

This will generate a SHA-512 crypt of your password using the salt you provide. [1]
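
The resulting string goes on the rootpw line of the kickstart; for example (the hash below is a placeholder, paste in the one you generated):

rootpw --iscrypted $6$MySalt$GeneratedHashGoesHere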

You could also add some of the setup steps from my demo post into the kickstart. In this one the image is being sealed: you can see the network device modification and the udev rule being deleted. The ssh keys haven’t been generated yet because the image hasn’t been booted, so it’s not necessary to delete them in the kickstart.

I’m going to use the same libvirt environment as the other post. I installed apache and put the kickstart file at /var/www/html/small-F20.cfg.
It will later be referenced as http://192.168.122.1/small-F20.cfg by virt-install.
This file can live anywhere your installed VM will be able to access via http, so if you don’t want to do it this way, do it a different way. :)
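
For example, if you’d rather not run apache just for this, a throwaway web server works too, run from whatever directory holds the kickstart (Python 2 syntax; port 80 needs root):

cd /path/to/kickstart/dir   # wherever small-F20.cfg lives
sudo python -m SimpleHTTPServer 80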

Once you have the kickstart in place and the root password updated, create a disk image and start the install. I created a small script to do this since there are so many options.

#!/bin/sh
# start clean: remove any previous disk and create a fresh 8G qcow2
rm -f small-F20.img
qemu-img create -f qcow2 small-F20.img 8G

sudo virt-install -n small-F20 -r 2048 --vcpus=2 --network bridge=virbr0 \
  --graphics=spice --noautoconsole --noreboot -v \
  -l http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/os/ \
  --disk=path=small-F20.img,format=qcow2 \
  -x "ks=http://192.168.122.1/small-F20.cfg"

This will launch a VM and do the install. You can connect the way it tells you to, or open up the terminal window through virt-manager.

[screenshot: virt-install running the kickstart install]

Once the install completes, sparsify the image and move the new image into place.

# if /tmp does not have enough space; you can do this to use /var/lib/libvirt/images instead
export TMPDIR=/var/lib/libvirt/images

sudo virt-sparsify --compress small-F20.img small-F20.qcow2
ll small-F20.img small-F20.qcow2
-rw-r--r--. 1 qemu qemu 1.1G Feb 5 10:30 small-F20.img
-rw-r--r--. 1 root root 273M Feb 5 10:52 small-F20.qcow2
mv small-F20.qcow2 small-F20.img

This new image can simply be swapped in for the base image from the demo; rebuild the overlays and you’re in business with a sub-300M Fedora 20 image.
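
For example, assuming the file names from the demo environment post, rebuilding the control node overlay on top of the new base is a single command (the paths here are assumptions; adjust them to wherever your images live):

qemu-img create -b /var/lib/libvirt/images/small-F20.img -f qcow2 RDO-F20-control-node.qcow2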


The cost to overcome

February 4th, 2014

Imagine yourself on a clear cold evening. The stars are just coming out over a wooded mountainous landscape, blanketed by a covering of snow.
There’s a wide, fairly deep, river flowing at the base of the mountain.

This is the image in my mind when I listen to Overcome by The Digital Age.

I love this image. I grew up visiting my grandparents in Summit, NY. They owned something like 12 acres of land. On the back side of the property was a lake. I can remember many winters adventuring out into the property in knee-high snow: walking on the frozen lake, playing in the snow, spending time with my family in the wintry outdoors. It wasn’t exactly the landscape I’ve described, but for me it salts the imagery very well.

While taking my son to school this morning we were listening to this song. I found myself trying to explain the symbolism in this song in 4-year-old terms. I think it ended up being more edifying to me. After I dropped him off I continued to consider the lyrics of the song and how they correlate to the two books I’m reading right now: Redemption Accomplished and Applied by John Murray and Not a Fan by Kyle Idleman.

Back to that landscape you just imagined, add an avalanche on the mountain in the distance. An impressive event of snow barreling down the side of a mountain. Destroying anything and everything in its path. No chance of human survival for someone caught in the middle of it. It’s overwhelming to even picture yourself witness to something like this.

God’s love is like an avalanche, they sing:

I can hear the roar it’s a mighty sound
Your love is an avalanche
I’m overcome

The avalanche settles at the foot of the mountain, right along the river. For something so full of power, a river is often a symbol of peace. The sheer volume of water flowing right in front of you could carry you a seemingly endless distance.

The peace of God and the glow of His majesty are like this river, they sing:

I’m taken away by a river of peace
on this starlit night
I’m overcome
I’m lost in the glow of Your majesty
oh my God how You captivate
I’m overcome

Looking off as far into the distance as you can see, snow has fallen and covered everything. Nothing has escaped its covering.

As such, our sin has been covered, they sing:

like a snowfall that blankets the Earth
my sin has been covered
I’m overcome

Murray begins his book discussing the necessity and the nature of Christ’s work on the cross: why and how it happened the way it did. In establishing the relationship of the atonement to justification he states:

The only righteousness conceivable that will meet the requirements of our situation as sinners and meet the requirements of a full and irrevocable justification is the righteousness of Christ. This implies his obedience and therefore his incarnation, death and resurrection.

We often put so much emphasis on the cross. I think what Murray points to here is that there was an extensive amount of supporting work that enabled his death on the cross to overcome our sin. Further, Murray later associates the fulfillment of the Levitical law with Christ’s work on the cross by establishing him as the priest that offered himself as the sacrifice:

That Christ’s work was to offer himself a sacrifice for sin implies, however, a complementary truth too frequently overlooked. It is that, if Christ offered himself as a sacrifice, he was also a priest. And it was as a priest that he offered himself. He was not offered up by another; he offered himself. This is something that could not be exemplified in the ritual of the Old Testament. The priest did not offer himself and neither did the offering offer itself. But in Christ we have this unique combination that serves to exhibit the uniqueness of his sacrifice, the transcendent character of his priestly office, and the perfection inherent in his priestly offering. It is in virtue of his priestly office and in pursuance of his priestly function that he makes atonement for sin.

The song continues:

the price has been paid
the war is already won
the blood of my savior was shed
He’s overcome
and I’m overcome

He’s overcome and I’m overcome. Christ overcame sin, now we are called to be overcome by him. That’s what Kyle Idleman’s book is all about.

It cost Christ everything to save us. It costs us everything to be saved. It requires us to be overcome by everything that God is and let go of everything that we were.


RDO OpenStack Demo Environment :: How To

January 29th, 2014

I’ve given a “Getting started with OpenStack using RDO” presentation a couple times over the past year. This presentation includes a live demo of installing and configuring RDO on virtual machines on my laptop. The installation runs entirely off my laptop; there is no internet access needed. I’ve been asked a couple times to document how to build this demo environment. So here goes…. finally :)

I’ve had a couple people try to recreate this environment with different tools. The issue I’ve heard most people run into with other tools is support for multiple virtual NICs on the VMs. To use packstack and neutron together with GRE you have to have two NICs on each of the VMs.

To clarify, I’ll call the nodes that I’m installing OpenStack on “VMs” and/or “OpenStack nodes”. They represent the bare metal in a real installation. A virtual machine that OpenStack launches will be called an instance, or an OpenStack instance.

The OpenStack Installation:
There will be two nodes: a control / network node and a compute node. In reality the network node should be separated from the control node, but this is just a demo environment and the simplicity keeps the demo time down. Each node will have two IP addresses: 192.168.122.x for the “public” address and 192.168.123.x for the private address. Node 1 will be 192.168.12{2,3}.101 and node 2 will be 192.168.12{2,3}.102.

Hardware:
I’m currently running this demo on a Lenovo T530 with 8 cores, 16G of memory, and I run my virtual machines off a Crucial M500 240GB mSATA SSD.

The environment was originally designed and run off Fedora 19 with Fedora 19 disk images, using RDO Havana. I’ve just upgraded the demo environment to Fedora 20 and RDO Icehouse, so this post will be F20 / RDO Icehouse. I’ll note differences where I find them, though I don’t expect many, if any. All disk images in this tutorial will be Fedora 20.

Note that the SSD is not necessary, but it does speed up the demo in general quite a bit as the installation of OpenStack is a disk intensive operation.

Building the OpenStack Nodes:
The first step in getting this environment stood up is to create a base disk image. This image will serve 2 purposes.
1. qemu overlay images will be created on top of it for the OpenStack nodes
2. In the demo it’s used as the glance image to import and launch instances off of.

*** UPDATE ***
For a much smaller footprint disk image use this method instead of a basic install.

To build this base image I just use virt-manager in Fedora to do a minimal install of Fedora. Then you have to boot up the disk and strip out a couple features that don’t work with OpenStack. Then I “seal” the image to create a sort of template out of it; this just involves stripping out things that are created at boot time. Let’s do the install:

[screenshots: Fedora 20 minimal install walkthrough in virt-manager]

The end result of this image set is a libvirt definition that will be used as the control node, and a disk image that both the control and compute node will use as the base image for their qemu overlays.
It’s installed from http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/os/ with 4G of ram, 2 vcpus and an 8G disk holding a minimal install of Fedora 20.
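
If you’d rather skip the virt-manager clicks, a roughly equivalent virt-install invocation would look something like this (a sketch; the VM name and disk path are my assumptions, not what virt-manager generated):

sudo virt-install -n RDO-F20-control-node -r 4096 --vcpus=2 \
  --network bridge=virbr0 --graphics=spice --noautoconsole -v \
  -l http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/os/ \
  --disk=path=/var/lib/libvirt/images/RDO-F20-control-node.img,size=8,format=qcow2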

Next thing to do is to fix up the Fedora instance so it’s ready to have OpenStack installed.

Start by removing firewalld and NetworkManager, enabling the network service and disabling SELinux.
Firewalld and NetworkManager aren’t compatible with OpenStack yet, as far as I know. Switching SELinux to permissive isn’t strictly necessary, but this is a demo environment, so enforcing isn’t really needed. To do this I’ll ssh into the VM now running on my system.

[root@localhost ~]# yum remove -y firewalld NetworkManager
Resolving Dependencies
--> Running transaction check
---> Package NetworkManager.x86_64 1:0.9.9.0-20.git20131003.fc20 will be erased
---> Package firewalld.noarch 0:0.3.8-1.fc20 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

[...snip...]

Removed:
NetworkManager.x86_64 1:0.9.9.0-20.git20131003.fc20 firewalld.noarch 0:0.3.8-1.fc20

Complete!

[root@localhost ~]# chkconfig network on
[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
[root@localhost ~]# setenforce 0

Next install an ssh keypair so that you don’t have to fiddle with passwords in your demo. Do this on your laptop and copy both the secret and the pub key to the VM as id_rsa and id_rsa.pub. This will also be used by packstack to ssh to the nodes to do the OpenStack installation. 192.168.122.57 is the dhcp address that libvirt’s network gave to my VM. I added the following to my ~/.ssh/config so that I don’t have to manage known hosts or specify the root user every time I log in:

Host 192.168.122.*
    User root
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

Here’s installing the keys.

dradez@tirreno:~➤ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/dradez/.ssh/id_rsa): /home/dradez/.ssh/rdo-demo-id_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/dradez/.ssh/rdo-demo-id_rsa.
Your public key has been saved in /home/dradez/.ssh/rdo-demo-id_rsa.pub.

dradez@tirreno:~➤ ssh-copy-id -i ~/.ssh/rdo-demo-id_rsa root@192.168.122.57
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.122.57's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@192.168.122.57'"
and check to make sure that only the key(s) you wanted were added.

dradez@tirreno:~➤ scp ~/.ssh/rdo-demo-id_rsa root@192.168.122.57:.ssh/id_rsa
rdo-demo-id_rsa 100% 1679 1.6KB/s 00:00
dradez@tirreno:~➤ scp ~/.ssh/rdo-demo-id_rsa.pub root@192.168.122.57:.ssh/id_rsa.pub
rdo-demo-id_rsa.pub 100% 411 0.4KB/s 00:00
dradez@tirreno:~➤ ssh root@192.168.122.57
Last login: Mon Jan 27 10:36:11 2014 from 192.168.122.1
[root@localhost ~]#

 

Before you shut down the image and turn it into a template for your overlay images you have to seal it. To do this remove the host ssh keys, the udev persistent net rules and the device-specific info in /etc/sysconfig/network-scripts/ifcfg-eth0. If you want to do a yum update you can. Go ahead and at least install net-tools (so facter will find all your IP addresses properly) and whatever else you think you may need and don’t want to have to install over and over and over. This will also import the Fedora update keys for future use. “yum clean all” dumps the yum cache so it’s not part of the base image. There’s other cache like this too that could be cleaned; I won’t get into it here.

[root@localhost ~]# rm -rf /etc/ssh/ssh_host_rsa_key*
[root@localhost ~]# rm -f /etc/ssh/moduli
[root@localhost ~]# sed -i '/UUID/d' /etc/sysconfig/network-scripts/ifcfg-eth0
[root@localhost ~]# sed -i '/HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
[root@localhost ~]# yum update -y
[root@localhost ~]# yum install -y net-tools vim telnet
[root@localhost ~]# yum clean all

My VM didn’t have the persistent net rules file. Look in /etc/udev/rules.d/ for a file named 70-persistent-net.rules and delete it if it exists.

Now shut down your VM and sparsify its disk image. This next step, sparsifying, is optional. When the file was created it was allocated the whole 8G; when it’s sparsified it’s reduced to the size of what’s actually being used. (If you get tmp dir warnings, one workaround is shown below; google “virt-sparsify TMPDIR” for other options related to directory paths and tmpfs.) It’s just nice to make this file small because it won’t be directly written to again and you free up lots of space doing this. You could also write a small kickstart to make this even smaller I think. I’ll post one if I get to writing one.
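
One workaround, if /tmp is a small tmpfs, is to point TMPDIR somewhere roomier before sparsifying (the same trick shown in the small-image post):

export TMPDIR=/var/lib/libvirt/images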

[root@tirreno images]# virt-sparsify --compress RDO-F20-control-node.img RDO-F20-x86_64.qcow

Create overlay file in /tmp to protect source disk …

Examine source disk …
[...snip...]
Clearing Linux swap on /dev/fedora/swap …
Fill free space in /dev/sda1 with zero …
Copy to destination and make sparse …

Sparsify operation completed with no errors. Before deleting the old
disk, carefully check that the target disk boots and works correctly.
[root@tirreno images]# ll -h
-rw-------. 1 qemu qemu 8.1G Jan 27 10:52 RDO-F20-control-node.img
-rw-r--r--. 1 root root 917M Jan 27 10:59 RDO-F20-x86_64.qcow

You can see the original file reports 8.1G; the sparsified file is only 917M. Now test that you can use your sparsified image with a qemu overlay file.

[root@tirreno images]# mv RDO-F20-control-node.img F20_orig.img
[root@tirreno images]# qemu-img create -b `pwd`/RDO-F20-x86_64.qcow -f qcow2 RDO-F20-control-node.qcow2
Formatting 'RDO-F20-control-node.qcow2', fmt=qcow2 size=8589934592 backing_file='/var/lib/libvirt/images/RDO-F20-x86_64.qcow' encryption=off cluster_size=65536 lazy_refcounts=off
[root@tirreno images]# ll -h RDO-F20-control-node.qcow2
-rw-r--r--. 1 root root 193K Jan 27 11:07 RDO-F20-control-node.qcow2

I’ve created my overlay with a qcow2 extension instead of an img extension, so the libvirt definition will have to be updated. I’ll edit /etc/libvirt/qemu/RDO-F20-control-node.xml and update the path from RDO-F20-control-node.img to RDO-F20-control-node.qcow2. Restart libvirtd, boot up the VM and make sure it’s happy. Your IP may change because we stripped the UUID and HWADDR out of ifcfg-eth0.

What’s happened here is that you’ve booted your VM off an overlay image. Any changes you make, in particular first-boot generated things like your host ssh keys, udev rules, etc., are written to the overlay disk and not the template disk that you first installed. This means that you can delete that overlay disk, recreate it and start fresh in a matter of seconds because the base template image is untouched. It also means that you can create more overlay images for more VMs without having to do more installs. So let’s create the definition for the compute node now using that same template image.

[root@tirreno images]# qemu-img create -b `pwd`/RDO-F20-x86_64.qcow -f qcow2 RDO-F20-compute-node.qcow2
Formatting 'RDO-F20-compute-node.qcow2', fmt=qcow2 size=8589934592 backing_file='/var/lib/libvirt/images/RDO-F20-x86_64.qcow' encryption=off cluster_size=65536 lazy_refcounts=off
[root@tirreno images]# ll -h RDO-F20-compute-node.qcow2
-rw-r--r--. 1 root root 193K Jan 27 11:07 RDO-F20-compute-node.qcow2

Now that the overlay image is there, import it into libvirt. I’ll use virt-manager again.

[screenshots: importing the compute node overlay in virt-manager]

At this point I have 2 VM definitions, one for the control node and one for the compute node, both backed by a qemu overlay image. The final preparation for these VMs before I move to the OpenStack installation is to add a second nic to each of them and to attach a materials iso to them. I’ll first build the iso, which will initially only have the template qcow2 image in it; more will be added to it later. Then I’ll show an example of adding the nic and iso to one of the nodes. Be sure to add the nic and iso to BOTH of your VMs.

I’m also going to move my disk images into a directory that my user owns, so the demo environment can be rebuilt without me having to be the root user. To do this, you just have to move the images that have already been created to a directory you own and update the libvirt xml files to point to their new path. I put mine in /var/lib/libvirt/dradez, which is owned by my user and lives on my SSD. Don’t forget to restart libvirt when you update the xml files. I won’t walk through this in detail, but it boils down to something like the sketch below; you’ll just see my commands start to show my user instead of root.
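
A minimal sketch of that move, assuming the paths mentioned above (the glob patterns are mine; match them to your actual file names):

sudo mv /var/lib/libvirt/images/RDO-F20-*.qcow* /var/lib/libvirt/dradez/
sudo chown -R dradez:dradez /var/lib/libvirt/dradez
# point the domain definitions at the new location
sudo sed -i 's|/var/lib/libvirt/images/|/var/lib/libvirt/dradez/|' /etc/libvirt/qemu/RDO-F20-*.xml
sudo systemctl restart libvirtd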

The next thing to do is to get the rebuild script RDO-F20-icehouse_rebuild.sh, and go ahead and grab the setup script RDO-F20-icehouse_setup.sh and the packstack answer file, RDO-F20-packstack.txt, too. I use these to rebuild and seed the demo environment so that I can log in and start installing without much interaction; take a look through each of these scripts to see what they do. Next create a directory for your iso content and build the iso. Also go ahead and run the rebuild script; it will create a cinder volumes disk image for you.

dradez@tirreno:~/OpenStackLab➤ mkdir RDO-F20-icehouse-iso
dradez@tirreno:~/OpenStackLab➤ ./RDO-F20-icehouse_rebuild.sh iso
dradez@tirreno:~/OpenStackLab➤ ./RDO-F20-icehouse_rebuild.sh all
Rebuilding Control Node
Formatting 'RDO-F20-control-node.qcow2', fmt=qcow2 size=8589934592 backing_file='/home/dradez/OpenStackLab/RDO-F20-x86_64.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off
Formatting 'RDO-F20-cinder-volumes.qcow2', fmt=qcow2 size=22548578304 encryption=off cluster_size=65536 lazy_refcounts=off
Rebuilding Compute Node
Formatting 'RDO-F20-compute-node.qcow2', fmt=qcow2 size=8589934592 backing_file='/home/dradez/OpenStackLab/RDO-F20-x86_64.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off

The second command generates RDO-F20-icehouse.iso; the third blows away your overlay images, rebuilds them and recreates a cinder-volumes disk image. Next add the iso and the second nic to both VMs, and the cinder volumes device to the control node. If you get an error message about the iso already being in use when you add it to the second VM, that’s ok; go ahead and add it anyway. It may also be worth checking what driver your existing nic is using. The image below shows that “Hypervisor Default” is selected, but this gave me a weirdly named network device. I updated the device to virtio to match my initial nics, and then my second device got named eth1 to match the eth0 device.
[screenshots: adding the second NIC, the materials ISO and the cinder-volumes disk in virt-manager]

Now that all the devices are added, run the setup script. This will upload the packstack.txt file to the control node and try to pvcreate the cinder-volumes disk. You’ll notice the cinder-volumes setup fails. I’ll try to poke at this eventually and update this post if I figure it out; it’s probably something simple. In the meantime this doesn’t hurt anything: packstack will just create a cinder-volumes volume group on the controller’s disk image instead of using the attached disk image. If you look at the setup script you’ll also see that I used to upload the ssh keys to the nodes during this process too. It’s kinda up to you if you’d rather have them already there, or keep your base image more vanilla and upload them when you do setup. Just for fun let’s use libguestfs to crack open the base image and pull the keys off of it. Again this is optional. If you do this then uncomment the ssh key setup in the “all” section of the setup script so that they get uploaded each time you run setup.

dradez@tirreno:~/OpenStackLab➤ mkdir tmp
dradez@tirreno:~/OpenStackLab➤ sudo chown dradez:dradez RDO-F20-x86_64.qcow2
[sudo] password for dradez:
dradez@tirreno:~/OpenStackLab➤ guestmount -a RDO-F20-x86_64.qcow2 -m /dev/fedora/root tmp
dradez@tirreno:~/OpenStackLab➤ rm -rf tmp/root/.ssh
dradez@tirreno:~/OpenStackLab➤ fusermount -uz tmp
dradez@tirreno:~/OpenStackLab➤ rmdir tmp
dradez@tirreno:~/OpenStackLab➤ ./RDO-F20-icehouse_rebuild.sh all
Rebuilding Control Node
Formatting 'RDO-F20-control-node.qcow2', fmt=qcow2 size=8589934592 backing_file='/home/dradez/OpenStackLab/RDO-F20-x86_64.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off
Formatting 'RDO-F20-cinder-volumes.qcow2', fmt=qcow2 size=22548578304 encryption=off cluster_size=65536 lazy_refcounts=off
Rebuilding Compute Node
Formatting 'RDO-F20-compute-node.qcow2', fmt=qcow2 size=8589934592 backing_file='/home/dradez/OpenStackLab/RDO-F20-x86_64.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off

Note the rebuild: the base image changed, so you want to recreate the overlays. Now uncomment the ssh key setup and run setup, and you’ll have to enter your passwords.

dradez@tirreno:~/OpenStackLab➤ ./RDO-F20-icehouse_setup.sh all
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '192.168.122.101' (RSA) to the list of known hosts.
root@192.168.122.101's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.122.101'"
and check to make sure that only the key(s) you wanted were added.

Warning: Permanently added '192.168.122.101' (RSA) to the list of known hosts.
rdo-demo-id_rsa 100% 1679 1.6KB/s 00:00
Warning: Permanently added '192.168.122.101' (RSA) to the list of known hosts.
rdo-demo-id_rsa.pub 100% 411 0.4KB/s 00:00
Warning: Permanently added '192.168.122.101' (RSA) to the list of known hosts.
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '192.168.122.102' (RSA) to the list of known hosts.
root@192.168.122.102's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.122.102'"
and check to make sure that only the key(s) you wanted were added.

Warning: Permanently added '192.168.122.102' (RSA) to the list of known hosts.
rdo-demo-id_rsa 100% 1679 1.6KB/s 00:00
Warning: Permanently added '192.168.122.102' (RSA) to the list of known hosts.
rdo-demo-id_rsa.pub 100% 411 0.4KB/s 00:00
Warning: Permanently added '192.168.122.102' (RSA) to the list of known hosts.
Warning: Permanently added '192.168.122.101' (RSA) to the list of known hosts.
packstack.txt 100% 12KB 11.8KB/s 00:00
Warning: Permanently added '192.168.122.101' (RSA) to the list of known hosts.
Device /dev/vdb not found (or ignored by filtering).
Warning: Permanently added '192.168.122.101' (RSA) to the list of known hosts.
Physical volume /dev/vdb not found
Device /dev/vdb not found (or ignored by filtering).
Unable to add physical volume '/dev/vdb' to volume group 'cinder-volumes'.

If you chose not to remove the keys and left the key setup commented out, your setup run will look like this:

dradez@tirreno:~/OpenStackLab➤ ./RDO-F20-icehouse_setup.sh all
Warning: Permanently added '192.168.122.101' (RSA) to the list of known hosts.
packstack.txt 100% 12KB 11.8KB/s 00:00
Warning: Permanently added '192.168.122.101' (RSA) to the list of known hosts.
Device /dev/vdb not found (or ignored by filtering).
Warning: Permanently added '192.168.122.101' (RSA) to the list of known hosts.
Physical volume /dev/vdb not found
Device /dev/vdb not found (or ignored by filtering).
Unable to add physical volume '/dev/vdb' to volume group 'cinder-volumes'.

At this point the VMs are ready to install OpenStack. To speed up the installation and remove the network dependency we’ll build a yum repository on our iso file and do the installation out of it. To do that, use this list of files, RDO-F20-iso-packages, to download the packages you need, then rebuild the iso so it includes the yum repository.

dradez@tirreno:~/OpenStackLab➤ mkdir RDO-F20-icehouse-iso/yum.repo
dradez@tirreno:~/OpenStackLab➤ cd RDO-F20-icehouse-iso/yum.repo
dradez@tirreno:~/yum.repo➤ cat RDO-F20-iso-packages | xargs yumdownloader
[...snip...]
dradez@tirreno:~/yum.repo➤ createrepo .
[...snip..]
dradez@tirreno:~/yum.repo➤ cd ..
dradez@tirreno:~/OpenStackLab➤ ./RDO-F20-icehouse_rebuild.sh iso

All that’s left to do now is reset the OpenStack nodes and install OpenStack on them. Make sure that the VMs are off, rebuild the nodes, start them up and run the setup script. Once you’ve got everything in order, fire up packstack using the packstack.txt answer file and you should end up with a freshly installed 2-node GRE-tunneled OpenStack RDO demo environment.

[root@control ~]# yum install openstack-packstack
[root@control ~]# time packstack --answer-file packstack.txt
[...snip...]

**** Installation completed successfully ******
Additional information:
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.123.101. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://192.168.123.101/dashboard.
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* Because of the kernel update the host 192.168.123.102 requires reboot.
* Because of the kernel update the host 192.168.123.101 requires reboot.
* Because of the kernel update the host 192.168.122.101 requires reboot.
* The installation log file is available at: /var/tmp/packstack/20140129-000932-ZUIa4T/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20140129-000932-ZUIa4T/manifests

real 16m13.467s
user 0m1.647s
sys 0m1.317s

From here, use the slide decks I presented last November to walk through using this environment. There’s not a lot that’s different between the two decks listed here: one was presented at the Philly, CT and NYC OpenStack Meetups, and the other at Red Hat Forum in Tokyo. They’re based on Havana, but all the core stuff in them should be the same between Havana and Icehouse.

http://www.slideshare.net/danradez/open-stackmeetup-11-2013
http://www.slideshare.net/danradez/red-hat-forum-tokyo-openstack-architecture

When you get to the glance section use your base Fedora 20 image as the glance image to upload.


OpenStack :: Logstash Elasticsearch Kibana (on apache)

January 16th, 2014

OpenStack has lots of moving parts. One of the challenges in administering a cluster is sifting through all the logs on the multiple nodes that make up the cluster. I help to administer TryStack and am always looking for tools that make managing this cluster go smoother.

I was introduced yesterday to Logstash + Elasticsearch + Kibana. I’m not really sure which of these names to call what I was introduced to; I guess all of them. The idea is that all the logs from all the nodes are sent to a central location so that you can filter through them. I think there’s much more advanced usage of this trio; I’m still figuring out what to do with it beyond basic log searching.

My understanding is that Logstash helps to gather the logs, Elasticsearch indexes them and Kibana is the web UI that queries Elasticsearch. Here’s a screenshot of what I ended up with.

[screenshot: Kibana dashboard backed by Logstash + Elasticsearch]

My co-worker Kambiz (kam-beez) pointed me to a couple links that he used to set up an instance of this; he more-or-less followed this post:

http://blog.basefarm.com/blog/how-to-install-logstash-with-kibana-interface-on-rhel/

The main modifications to what he ended up with were to pull the logstash rpm from the logstash Jenkins instance:  http://build.logstash.net/view/logstash/
Then to collect logs using this link’s method: http://cookbook.logstash.net/recipes/rsyslog-agent/
And finally there were a couple config changes to what the original post provided to get logstash running.

I already had apache running on the TryStack utility hosts and didn’t think it was necessary to add nginx to the mix, which is what that post uses, so I figured it may be helpful to document the process I went through to get this running on apache. This install has been very useful thus far and I’m glad I have it collecting logs.

First, get the rpms from http://www.elasticsearch.org/download and the logstash jenkins instance. I used these two links like this:

[root@host1 ~]# yum install https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.10.noarch.rpm
[root@host1 ~]# yum install http://build.logstash.net/view/logstash/job/logstash-daily/79/artifact/pkg/logstash-1.3.2-1_centos.201401070105.5cd5b2e.noarch.rpm

Also get a copy of the latest Kibana release. This is just HTML and JavaScript, so I don’t think there is an rpm afaict. I unpacked it and moved it to /var/www.

[root@host1 ~]# wget https://download.elasticsearch.org/kibana/kibana/kibana-latest.tar.gz
[root@host1 ~]# tar xzf kibana-latest.tar.gz
[root@host1 ~]# mv kibana-latest /var/www
[root@host1 ~]# restorecon -R /var/www/kibana-latest

Make sure that apache is installed too; I already had it installed for my Foreman and Nagios instances running on this server. Now let’s start to configure this stuff. Start with Kibana: edit /var/www/kibana-latest/config.js and update the elasticsearch: line:

- elasticsearch: "http://"+window.location.hostname+":9200",
+ elasticsearch: "http://yourhostname.com/elasticsearch",

Note that the 9200 in the config.js file is dropped and replaced with /elasticsearch. When you fire up kibana in apache it will try to connect directly to elasticsearch on 9200. To avoid having to punch extra holes in the firewall we’ll set up a proxypass in apache to pass the traffic from yourhostname.com:80/elasticsearch to localhost:9200. We’ll configure apache once we finish kibana.

Before we get to apache, back up the kibana-latest/app/dashboards/default.json file (if you want to) and replace it with my copy:

[root@host1 ~]# cd /var/www/kibana-latest/app/dashboards
[root@host1 dashboards]# cp default.json default.json.backup
[root@host1 dashboards]# wget http://www.jaddog.org/wp-content/uploads/2014/default.json

Edit that file to give it a title to your liking; my copy has “TryStack :: OpenStack LogStash Search”:

 - "title": "TryStack :: OpenStack LogStash Search",
 + "title": "Your ingenious title here",

The default.json file is a definition of what panels to put on your default view in kibana. I modified the one referenced on the other blog post; that’s why I gave you a new link instead of using the one from that post. There were a couple redundant panels that I consolidated. Also, the timespan it referenced by default was old and static, so I changed it to show the last hour by default.

So let’s add that apache config now. Add /etc/httpd/conf.d/elasticsearch.conf (you could call this whatever.conf if you wanted to). I put both my elasticsearch proxy pass and my kibana alias in this file, like this:

ProxyPass /elasticsearch http://localhost:9200
ProxyPassReverse /elasticsearch http://localhost:9200
Alias /kibana /var/www/kibana-latest

<Location /elasticsearch>
    Order allow,deny
    Allow from all
</Location>

For this to work you’ll need mod_proxy and mod_proxy_http, and if you’re using SELinux you’ll need the httpd_can_network_connect boolean set on. Google those if you’re not sure how to set them up; there are lots of docs out there about them.
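
If the SELinux boolean is the only piece you’re missing, the standard tooling sets it persistently, and httpd -M will confirm the proxy modules are loaded:

setsebool -P httpd_can_network_connect on
httpd -M | grep proxy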

Finally let’s configure logstash. First edit /etc/sysconfig/logstash and set START to true instead of false; the service won’t start if you don’t. Next create /etc/logstash/conf.d/logstash.conf with this content:

input {
  syslog {
    type => syslog
    port => 5544
  }
}

filter {
  mutate {
    add_field => [ "hostip", "%{host}" ]
  }
}

output {
  elasticsearch {
    host => "localhost"
  }
}

Last, open up your firewall to allow your hosts to send rsyslog messages to port 5544. I added an iptables rule to /etc/sysconfig/iptables and restarted the firewall. (Note: the leading minus is part of the line; it does not indicate you should remove anything.)

-A INPUT -i em1 -m state --state NEW -m tcp -p tcp --dport 5544 -j ACCEPT

This rule listens on the em1 interface, my internal network. The easiest way to do this for your host is to edit that file, copy the rule for port 22 and update the duplicated line to accept port 5544 instead of 22. Then restart iptables.

*** IMPORTANT *** there are security implications to opening this port. Please do not open this port to the world and allow anyone to pollute your logs. I’ve opened mine up only to my internal network for my cluster. You should also restrict traffic so that only the hosts you expect to get logs from can connect to this port.

Finally fire it all up:

[root@host1 ~]# service elasticsearch start
[root@host1 ~]# service logstash start
[root@host1 ~]# service httpd start

This should give you a pretty uninteresting Kibana interface; there won’t be any logs in it yet. The key here is to watch the top of the page and make sure there isn’t a message that Kibana can’t connect to elasticsearch. If there is, visit yourhostname.com/elasticsearch and make sure that you get a 200 back.
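
A quick way to check from the command line (substitute your hostname); elasticsearch should answer through the proxy with a 200 and a small JSON banner:

curl -i http://yourhostname.com/elasticsearch/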

To populate it with logs you could use the logstash client, but the logstash cookbook post referenced above suggests it’s a bit heavyweight. I’ve had really good results thus far just having rsyslog send over the logs. To do that, on each of the hosts whose logs you want to aggregate, create the file /etc/rsyslog.d/logstash.conf with this content:

*.* @@your_logstash_host:5544

This will send ALL LOGS from that host to logstash for indexing. Google rsyslog if you would like to find out how not to send all logs to logstash.
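
For what it’s worth, the double @@ tells rsyslog to forward over TCP; a single @ would send UDP. After dropping the file in place, bounce rsyslog on each host:

service rsyslog restart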

Once you start to see logs flow into the web interface you can change the time span in the upper right hand corner. You can query your logs. Try putting a hostname or ip into the query box at the top. You can use wildcards like *error* to find errors. You can layer filters. Try adding a filter for a host in the filter box just under the query box. Then add another one for *error* and you’ll get the errors for just that host within the timespan you’ve chosen.

Hope this helps you track down issues. I immediately found I had a rogue iscsi session on one of my compute nodes and was able to put it out of its misery. :)

*** Update Jan 20 ***

I noticed that not all the OpenStack logs were showing up in my logstash searches. Turns out you can toggle OpenStack’s use of syslog. My puppet configs turn it off by default, so I had to turn it on on all my hosts. This boils down to setting ‘use_syslog = true’ in each component’s conf file. Here’s a link that talks more about it:

http://docs.openstack.org/trunk/openstack-ops/content/logging_monitoring.html
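
For reference, the change is one line per service; here’s a sketch for nova (the same flag exists in the other components’ conf files, and the facility line is optional):

# /etc/nova/nova.conf
[DEFAULT]
use_syslog = True
syslog_log_facility = LOG_LOCAL0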


Adventures with Nagios and GlusterFS (monitoring and self-healing)

January 3rd, 2014

We recently added GlusterFS as the storage backend for glance and cinder on TryStack.org.

I’ve been working on setting up nagios monitoring for the cluster recently, and I finally got around to spending some time making sure that Gluster is behaving. I also knew that I’d had some sync issues with Gluster, but I hadn’t spent much time on them because all my content seemed to be ok. These sync issues were also fixed in the process, so that my nagios checks all came back happy.

First, a bit of architecture: we have 3 hosts, each with a bunch of 550G drives in them. For now the gluster setup is version 3.4 and has 3 peers with 2 bricks per peer. The one volume is configured with 3 replicas (I think I’m saying that correctly).
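
You can double-check that layout straight from gluster; volume info prints the volume type, replica count and brick list:

gluster volume info trystack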

An initial google showed there were a couple options for nagios scripts to monitor the cluster. I started with http://www.gluster.org/pipermail/gluster-users/2010-April/027316.html which pointed to a git.gluster.org host that supposedly housed something called glfs-health.sh. Git.gluster.org just gives me an apache test page. Figuring it was git, I tried the same user name at github: https://github.com/avati/glfs-health. Bingo.

Unfortunately it didn’t work so well; no matter what I tried I could only get something to the effect of:

[root@host13 ~]# sh glfs-health.sh
Host unreachable
cat: /tmp/.glusterfs.pid.11992: No such file or directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec … or kill -l [sigspec]

Turns out that this was for older versions of GlusterFS and won’t work with gluster 3.4.

Next I tried this thread: http://gluster.org/pipermail/gluster-users/2012-June/010798.html

This actually worked out of the box, and it reminded me that I had sync issues when it reported some files out of sync. I initially called it check_gluster.sh.

[root@host13 ~]# sh check_gluster.sh
check_gluster.sh CRITICAL peers: host14/ host15/ volumes: trystack/21 unsynchronized entries

I scanned through my google results to make sure there wasn’t anything else to peek at before moving forward with this, and came across an updated version of this script posted on the nagios exchange.

http://exchange.nagios.org/directory/Plugins/System-Metrics/File-System/GlusterFS-checks/details

After installing a couple dependencies (I installed the nagios-plugins rpm for the utils.sh plugin and the bc rpm) I got these results:

[root@host13 export]# /usr/lib64/nagios/plugins/check_glusterfs -v trystack -n 3
/usr/lib64/nagios/plugins/check_gluster: line 101: -2: substring expression < 0
WARNING: 15 unsynched entries; found 1 bricks, expected 3

unsynched… heh, I want to spell that unsynced, so correct that and fix line 101.
The syntax used on line 101 requires bash 4.2; I’m running 4.1 on RHEL 6.5, so I’ll update it to a 4.1-compatible syntax.

[root@host1 files]# diff -u check_glusterfs_orig check_glusterfs
--- check_glusterfs_orig	2014-01-03 13:24:12.020577771 -0800
+++ check_glusterfs	2014-01-03 11:22:04.943621593 -0800
@@ -81,7 +81,7 @@
 	fi
 done
 if [ "$heal" -gt 0 ]; then
-	errors=("${errors[@]}" "$heal unsynched entries")
+	errors=("${errors[@]}" "$heal unsynced entries")
 fi

 # get volume status
@@ -98,7 +98,8 @@
 		key=${field[@]:0:3}
 		if [ "${key}" = "Disk Space Free" ]; then
 			freeunit=${field[@]:4}
-			free=${freeunit:0:-2}
+			free=${freeunit%'GB'}
 			unit=${freeunit#$free}
 			if [ "$unit" != "GB" ]; then
 				Exit UNKNOWN "unknown disk space size $freeunit"

[root@host13 export]# /usr/lib64/nagios/plugins/check_glusterfs -v trystack -n 3
WARNING: 32 unsynced entries

That gets me a bit closer. Now to add this to nagios and figure out the unsynced entries. The nagios exchange page offers some sudo configs that give nagios privileges to run the gluster commands. So next I made the sudo updates, added a service check to the nagios_service.cfg file for each host and an nrpe entry to each host’s nrpe.cfg. I actually did this in puppet, not directly in the files, but here’s the result in the nagios files:

nagios_service.cfg:

define service {
        check_command                  check_nrpe!check_glusterfs
        service_description            Gluster Server Health Check
        host_name                      10.100.0.13
        use                            generic-service
}

nrpe.cfg:

command[check_glusterfs]=/usr/lib64/nagios/plugins/check_glusterfs -v trystack -n 3

When nagios ran the check I got the error “No Bricks Found”; running the nrpe command from my nagios host confirms this:

[root@host1 trystack]# /usr/lib64/nagios/plugins/check_nrpe -H 10.100.0.13 -c check_glusterfs
CRITICAL: no bricks found

I wasted a good bit of time trying to figure this out. End result: it turns out the note on the nagios exchange page for this plugin didn’t address nrpe; it only referenced the nagios user. I had put my sudo configs in place using the user nagios, but when nrpe runs, it runs as the user nrpe. So I updated my sudoers.d file:

[root@host13 export]# cat /etc/sudoers.d/nrpe
Defaults:nrpe !requiretty
nrpe ALL=(root) NOPASSWD:/usr/sbin/gluster volume status [[\:graph\:]]* detail,/usr/sbin/gluster volume heal [[\:graph\:]]* info

So now let’s rerun the nrpe command from the nagios host to make sure it’s happy too:

[root@host1 trystack]# /usr/lib64/nagios/plugins/check_nrpe -H 10.100.0.13 -c check_glusterfs
WARNING: 32 unsynced entries

That looks better. On to figure out the sync issues.

I can’t say that I understand exactly what’s going on under the covers with gluster. I can tell you there are two places you can work with on each brick to sort out your sync issues: the content you see on the brick and the .gluster directory. If you’re careful about it you can fix things by deleting content directly off the bricks and waiting for gluster to self-heal. Here’s what I did.

The script I just installed ran the volume heal info command to report sync issues, so I ran that by hand to see what it spits out:

[root@host13 export]# gluster volume heal trystack info
Gathering Heal info on volume trystack has been successful

Brick host13:/export/sdb1
Number of entries: 1
/

Brick host14:/export/sdb1
Number of entries: 1
/

Brick host15:/export/sdb1
Number of entries: 1
/

Brick host13:/export/sdc1
Number of entries: 4
/glance/images/5518ec29-7555-4632-88c7-76b81432c1c2
/glance/images/83d90cd4-180a-4c6d-893d-2cd0d3dd4d3b
/
/glance/images/d7f5ba96-c741-4dd0-9cf9-94fc607034f7

Brick host14:/export/sdc1
Number of entries: 4
/glance/images/83d90cd4-180a-4c6d-893d-2cd0d3dd4d3b
/glance/images/5518ec29-7555-4632-88c7-76b81432c1c2
/
/glance/images/d7f5ba96-c741-4dd0-9cf9-94fc607034f7

Brick host15:/export/sdc1
Number of entries: 4
/
/glance/images/d7f5ba96-c741-4dd0-9cf9-94fc607034f7
/glance/images/5518ec29-7555-4632-88c7-76b81432c1c2
/glance/images/83d90cd4-180a-4c6d-893d-2cd0d3dd4d3b

I googled for a bit and found a couple things that referred to the brick content and each brick’s .gluster directory, as I just mentioned. It turns out that to store the content there are a bunch of hard links connecting the content you see in the bricks with the content in each brick’s .gluster directory. The logs suggest deleting all but the version of the file you want in order to fix the sync, but say nothing about this .gluster directory. It turns out that if you delete both the content and the .gluster directory directly from the brick, gluster will rebuild them as part of its self-heal process. I treated host13 as the copy to rebuild from and hosts 14 and 15 as the ones to rebuild. So here we go:

**** BIG DISCLAIMER ****
I have no idea if this is recommended practice.

[root@host14 export]# cd sdb1
[root@host14 sdb1]# rm -rf *
[root@host14 sdb1]# rm -rf .gluster
[root@host14 sdb1]# cd ../sdc1
[root@host14 sdc1]# rm -rf *
[root@host14 sdc1]# rm -rf .gluster
[root@host15 export]# cd sdb1
[root@host15 sdb1]# rm -rf *
[root@host15 sdb1]# rm -rf .gluster
[root@host15 sdb1]# cd ../sdc1
[root@host15 sdc1]# rm -rf *
[root@host15 sdc1]# rm -rf .gluster

After a little while (It takes time to self-heal) my heal info command looked like this:

[root@host13 sdb1]# gluster volume heal trystack info

Gathering Heal info on volume trystack has been successful

Brick host13:/export/sdb1
Number of entries: 1
/

Brick host14:/export/sdb1
Number of entries: 0

Brick host15:/export/sdb1
Number of entries: 0

Brick host13:/export/sdc1
Number of entries: 1
/

Brick host14:/export/sdc1
Number of entries: 0

Brick host15:/export/sdc1
Number of entries: 0

That looks a lot better, but there are still those weird root entries that say they’re out of sync. A quick scan over each brick’s content across the 3 hosts shows that the content matches, so I went ahead and destroyed my host13 bricks’ content too:

[root@host13 export]# cd sdb1
[root@host13 sdb1]# rm -rf *
[root@host13 sdb1]# rm -rf .gluster
[root@host13 sdb1]# cd ../sdc1
[root@host13 sdc1]# rm -rf *
[root@host13 sdc1]# rm -rf .gluster

A little more time passes and eventually my nagios check starts reporting happiness:

[root@host1 trystack]# /usr/lib64/nagios/plugins/check_nrpe -H 10.100.0.13 -c check_glusterfs
OK: 6 bricks; free space 540GB

So in summary… GlusterFS is pretty cool stuff so far. I’m not sure what I did was sanctioned, but it seemed to work. Nrpe checks happen as the nrpe user. Hope this helps save someone some time in the future.

I have more work to do: there are thresholds you can add to the nagios command to alert you when you’re running out of space, and the “-n 3” is a brick count that I’m not yet sure how to fit in. I have 6 bricks, used a 3 and didn’t get any complaints.

Tomorrow is just another day :)


TryStack Havana configuration management

December 4th, 2013

To manage configuration on TryStack (http://trystack.org) we use foreman, which uses puppet under the covers.

TryStack is a public OpenStack cloud that anyone with a Facebook account can get access to and use to try out OpenStack. Last year Red Hat donated RHEL subscriptions for the x86_64 cluster and committed my team’s time to maintain it. We’re currently in the process of upgrading the cluster to RDO Havana, and we are backing glance and cinder with GlusterFS.

In our TryStack deployment Foreman mainly supplies a puppet master, configuration key value pairs (foreman global variables) and the host groups which assign a role to a node in the cluster. Right now there are two host groups; more will come as we expand monitoring and storage.

 

Havana Control Node

  • trystack
  • trystack::control
  • trystack::swift_proxy

Havana Compute node

  • trystack
  • trystack::compute

 

The two host groups are currently “Havana Control Node” and “Havana Compute node”. Each of these host groups includes just a couple puppet classes, as listed above. The number of these classes has been deliberately kept low in each host group. The complexity is wired together in the trystack puppet module, which has now been posted in its current form to github: https://github.com/trystack/puppet-trystack

This puppet module consumes two things: 1. variables (foreman global variables) 2. the puppet modules that do the OpenStack configuration.

First, the puppet modules: the RDO package set includes a package named openstack-puppet-modules. We’ve simply used the puppet modules that this package provides to populate the puppet module path, along with the trystack module.

Second, the variables. There’s a small script included in the puppet module repo called api.py. This script simply uses the foreman API to read a config file and populate these variables into foreman. You can use this script by following these steps:

  1. change the 0 to 1 on the line “if 0” by the comment “# generate”
  2. run ‘python api.py’ this will generate a trystack.cfg file with empty values
  3. switch the 1 back to a 0
  4. edit the trystack.cfg file with appropriate values
  5. edit the user and password and url around line 40 to point to foreman
  6. run ‘python api.py’

You can update the cfg file and run the script over and over, and it will update your config values for you. I’ve also been made aware of hammer, a CLI client for the foreman API; though I’ve not used it, a few of my teammates have. Do a ‘yum search hammer’ on the foreman yum repo to find the package.
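
Under the covers api.py is just hitting the foreman REST API. As a rough sketch, a single global variable can be created with curl; the hostname and credentials here are placeholders, and I’m assuming foreman’s common_parameters endpoint, which is what the API calls global variables:

curl -k -u admin:changeme -H "Content-Type: application/json" \
  -X POST -d '{"common_parameter":{"name":"some_var","value":"some_value"}}' \
  https://your-foreman-host/api/common_parameters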

To summarize this, we have 5 pieces to this puzzle: Foreman, Foreman Host Groups, Foreman Global Variables, TryStack Puppet Modules, RDO openstack-puppet-modules

Once they are all in place, we install a host, install the puppet client, have the agent check in, sign its cert, assign the host to a host group and rerun the puppet agent to configure the node.

Links:
TryStack: http://trystack.org
RDO: http://openstack.redhat.com
Foreman: http://theforeman.org
Gluster: http://gluster.org
Red Hat: http://redhat.com
Unrelated Hammer: http://www.youtube.com/watch?v=NyEE0qpfeig


FedUp!

November 26th, 2013

$ yum update -y
$ fedup --network 20

Success! I’m on Fedora 20 Beta now.
It took about an hour to pull down the packages and do the update. Thus far I’ve found no glaring issues on F20 Beta.

Nice work Fedora team!


Getting Started with RDO Havana

November 22nd, 2013

I was in the North East US this week presenting a getting started session on RDO Havana.

Posted the slides here: http://www.slideshare.net/danradez/open-stackmeetup-11-2013


Red Hat Forum Tokyo

October 23rd, 2013

Found this today

[screenshot from 2013-10-23: Google search result showing my Red Hat Forum Tokyo session]

No thanks Google, you don’t need to translate. Kinda cool to see my name among a bunch of Japanese.

Though, Google’s translation next to the little red headphones says that the Red Hat Forum will have a translator for me, I guess since I don’t speak Japanese. :)

Also found out, by having Google translate, that my name in Japanese is like this: ダン・ラデス
Even though Google thinks that the translation is Dan Rades. Come on Google, can’t you figure out how to translate Radez from Japanese to English? j/k