Nova flavors with access control!

Recently, I wanted to create a new flavor in nova in our OpenStack deployment and noticed there is a way to apply access control so that a flavor is visible and usable only by specific tenants.

To do this, first create the flavor with --is-public=False:

$ nova flavor-create 8vCPU-16GB_Mem 99 16384 0 8 --is-public=False
+----+----------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name           | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------------+-----------+------+-----------+------+-------+-------------+-----------+
| 99 | 8vCPU-16GB_Mem | 16384     | 0    | 0         |      |     8 | 1.0         |     False |
+----+----------------+-----------+------+-----------+------+-------+-------------+-----------+

Then, find the tenant_id of the tenant you want to give access to this flavor:

$ keystone tenant-list | grep arosen
| d4e4332d5f8c4a8eab9fcb1345406cb0 | arosen | True |
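If you want to script this, the tenant id can be pulled out of that table with awk. A minimal sketch, using the row above as a stand-in for the live keystone output:

```shell
# Stand-in for: keystone tenant-list | grep arosen
row='| d4e4332d5f8c4a8eab9fcb1345406cb0 | arosen | True |'

# Field 2 of the pipe-delimited table is the tenant id; strip the padding spaces.
tenant_id=$(echo "$row" | awk -F'|' '{gsub(/ /,"",$2); print $2}')
echo "$tenant_id"   # -> d4e4332d5f8c4a8eab9fcb1345406cb0
```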

Then, associate the tenant with the flavor:

$ nova flavor-access-add 99 d4e4332d5f8c4a8eab9fcb1345406cb0
+-----------+----------------------------------+
| Flavor_ID | Tenant_ID                        |
+-----------+----------------------------------+
|        99 | d4e4332d5f8c4a8eab9fcb1345406cb0 |
+-----------+----------------------------------+

Now this flavor is exposed only to that tenant:

$ nova flavor-list
+----+-----------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name            | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------------+-----------+------+-----------+------+-------+-------------+-----------+
| 0  | 1vCPU-2GB_Mem   | 2048      | 0    | 0         |      | 1     | 1.0         | True      |
| 1  | 1vCPU-4GB_Mem   | 4096      | 0    | 0         |      | 1     | 1.0         | True      |
| 10 | 4vCPU-10GB_Mem  | 10240     | 0    | 0         |      | 4     | 4.0         | True      |
| 11 | 1vCPU-512MB_Mem | 512       | 0    | 0         |      | 1     | 1.0         | True      |
| 12 | 1vCPU-1GB_Mem   | 1024      | 0    | 0         |      | 1     | 1.0         | True      |
| 2  | 2vCPU-2GB_Mem   | 2048      | 0    | 0         |      | 2     | 2.0         | True      |
| 4  | 2vCPU-8GB_Mem   | 8192      | 0    | 0         |      | 2     | 2.0         | True      |
| 5  | 4vCPU-4GB_Mem   | 4096      | 0    | 0         |      | 4     | 4.0         | True      |
| 6  | 4vCPU-8GB_Mem   | 8192      | 0    | 0         |      | 4     | 4.0         | True      |
| 7  | 4vCPU-16GB_Mem  | 16384     | 0    | 0         |      | 4     | 4.0         | True      |
| 99 | 8vCPU-16GB_Mem  | 16384     | 0    | 0         |      | 8     | 1.0         | False     |
+----+-----------------+-----------+------+-----------+------+-------+-------------+-----------+

This ended up being pretty useful and has been a feature of OpenStack since Folsom!


Bootstrapping Instances via Metadata and Public Cloud Metadata Support!

OpenStack and several other cloud management platforms (like Amazon, CloudStack, etc.) provide a metadata service that allows one to pass additional information to an instance at boot time. This can be helpful for automating configuration within an instance.

One common use case for this service is to have your ssh key automatically pulled into an instance when it boots, so you can ssh in without a hard-coded password in the image. There are several other neat things one can do with it, such as specifying a script for your instance to run at boot to bootstrap its configuration.
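The metadata service itself lives at the well-known link-local address 169.254.169.254 and exposes an EC2-style path layout. A tiny helper sketch for building those URLs (the helper name is just for illustration; from inside an instance you would pipe the result to curl):

```shell
# Hypothetical helper: build an EC2-style metadata URL for a given path.
md_url() {
  echo "http://169.254.169.254/latest/$1"
}

# From inside an instance you would fetch, e.g.:
#   curl -s "$(md_url meta-data/public-keys/0/openssh-key)"   # your ssh key
#   curl -s "$(md_url user-data)"                             # your boot script
md_url meta-data/local-ipv4
```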

In this blog post we'll use the metadata service to boot several instances at once and have them automatically install devstack for us, all from a single command. Then we'll see if we can get the same thing working on a few of the big OpenStack public cloud providers.

To start, I’m assuming you already have a working OpenStack environment (with metadata) and an ubuntu-12.04 image that has cloud-init already installed. Cloud-init is a set of scripts that needs to be present in your guest instance in order to leverage the metadata service easily. If you need an image, you can grab the one I’m using from https://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img and upload it to glance, or you can use the HP cloud (hpcloud.com) and follow along, as this works there too!

First, we’ll create an ssh key if you don’t already have one:

$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ubuntu/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/ubuntu/.ssh/id_rsa.
Your public key has been saved in /home/ubuntu/.ssh/id_rsa.pub.
The key fingerprint is:
a7:96:9b:dd:e6:a6:1e:33:a9:80:b2:13:47:ab:ea:3a ubuntu@ubuntu
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|    .            |
|   . .  S .      |
|  . o.   + .     |
|  .+. . + =      |
|E oo   o = =o    |
|=+..    +.+=o    |
+-----------------+

Upload it to nova:

$ nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey

List the uploaded key:

$ nova keypair-list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | a7:96:9b:dd:e6:a6:1e:33:a9:80:b2:13:47:ab:ea:3a |
+-------+-------------------------------------------------+

Boot an instance and specify the key. When the instance comes up, the cloud-init scripts will automatically grab the key and put it in the right place.

$ nova boot --image ubuntu-server-12.04 --flavor 4 --key-name mykey my_vm

Once the instance boots, you’ll be able to ssh into it using your ssh key:

$ ssh ubuntu@<instance_ip>

Next, we’re going to automate the installation of OpenStack via devstack. Below is a simple script that runs the commands required to set up devstack. We’re going to put this script in a file called setup_devstack.sh, which we’ll pass via nova boot.

#!/bin/bash
sudo apt-get update
sudo apt-get install -y git
git clone https://github.com/openstack-dev/devstack /home/ubuntu/devstack
cat > /home/ubuntu/devstack/localrc << "EOF"
ENABLED_SERVICES=g-api,g-reg,key,mysql,n-api,n-cond,n-cpu,n-crt,n-obj,n-sch,q-agt,q-dhcp,q-l3,q-lbaas,q-meta,q-svc,q-vpn,quantum,rabbit,horizon,n-novnc,n-xvnc
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_TOKEN=password
SERVICE_PASSWORD=password
ADMIN_PASSWORD=password
EOF
# HOST_IP is needed by the devstack script. Determine it from the metadata service for the heck of it :)
HOST_IP=`curl 169.254.169.254/latest/meta-data/local-ipv4 | awk '{split($0,array,",")} END{print array[1]}'`
echo HOST_IP=$HOST_IP >> /home/ubuntu/devstack/localrc
chown -R ubuntu:ubuntu /home/ubuntu/devstack/
su ubuntu -c /home/ubuntu/devstack/stack.sh &
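The HOST_IP line is the only subtle part of the script: the metadata service may hand back a comma-separated list of addresses, and the awk keeps just the first one. The same parsing, with a canned response standing in for the curl call:

```shell
# Stand-in for: curl 169.254.169.254/latest/meta-data/local-ipv4
response='10.0.0.9,10.0.0.12'

# Split on commas and keep only the first address, as in the script above.
HOST_IP=$(echo "$response" | awk '{split($0,array,",")} END{print array[1]}')
echo "HOST_IP=$HOST_IP"   # -> HOST_IP=10.0.0.9
```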

To pass this script to the instance, you need to give nova boot an additional parameter, --user-data, with the path to the script, as shown below:

$ nova boot --image  ubuntu-server-12.04 --flavor 5 --key-name mykey --user-data ~/setup_devstack.sh devstack_vm

When the instance boots up, cloud-init will fetch your ssh key and then start running your script. To figure out how far the script has gotten, you can use nova console-log (the console output is written to the serial port):

$ nova console-log devstack_vm
Unpacking sqlite3 (from .../sqlite3_3.7.9-2ubuntu1.1_amd64.deb) ...
Selecting previously unselected package unzip.
Unpacking unzip (from .../unzip_6.0-4ubuntu2_amd64.deb) ...
Selecting previously unselected package vbetool.
Unpacking vbetool (from .../vbetool_1.1-2ubuntu1_amd64.deb) ...
Selecting previously unselected package x11-utils.
Unpacking x11-utils (from .../x11-utils_7.6+4ubuntu0.1_amd64.deb) ...
Selecting previously unselected package xbitmaps.
Unpacking xbitmaps (from .../xbitmaps_1.1.1-1_all.deb) 

Downloading packages…..

$ nova console-log devstack_vm
+ mysql -uroot -ppassword -h127.0.0.1 -e 'DROP DATABASE IF EXISTS neutron_ml2;'
+ mysql -uroot -ppassword -h127.0.0.1 -e 'CREATE DATABASE neutron_ml2 CHARACTER SET utf8;'
+ /usr/local/bin/neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade None -> folsom, folsom initial database
INFO  [alembic.migration] Running upgrade folsom -> 2c4af419145b, l3_support
INFO  [alembic.migration] Running upgrade 2c4af419145b -> 5a875d0e5c, ryu
INFO  [alembic.migration] Running upgrade 5a875d0e5c -> 48b6f43f7471, DB support for service types
INFO  [alembic.migration] Running upgrade 48b6f43f7471 -> 3cb5d900c5de, security_groups

Setting up the neutron database…..

$ nova console-log devstack_vm
Horizon is now available at http://10.0.0.9/
Keystone is serving at http://10.0.0.9:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: password
This is your host ip: 10.0.0.9
2014-06-27 22:21:00.883 | stack.sh completed in 1089 seconds.

Done!
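If you'd rather script the wait than re-run console-log by hand, you can watch for the marker line that stack.sh prints on success. A sketch of the match logic, shown against a canned log line; in practice you could poll with something like `until nova console-log devstack_vm | grep -q 'stack.sh completed'; do sleep 30; done`:

```shell
# Canned stand-in for: nova console-log devstack_vm | tail -n 1
last_line='2014-06-27 22:21:00.883 | stack.sh completed in 1089 seconds.'

# The marker devstack prints when the install has finished.
if echo "$last_line" | grep -q 'stack.sh completed'; then
  echo "devstack is up"
fi
```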

At this point I wanted to see how easy it would be to test this out on some of the public OpenStack clouds, and whether they provide the metadata service. I started with the HP cloud and found an Ubuntu image:

$ glance --os-region-name  region-b.geo-1 image-show 75d47d10-fef8-473b-9dd1-fe2f7649cb41
No handlers could be found for logger "keystoneclient.httpclient"
+----------------------------------------------+----------------------------------------------------------------------------------+
| Property                                     | Value                                                                            |
+----------------------------------------------+----------------------------------------------------------------------------------+
| Property 'architecture'                      | x86_64                                                                           |
| Property 'com.hp__1__bootable_volume'        | True                                                                             |
| Property 'com.hp__1__image_lifecycle'        | active                                                                           |
| Property 'com.hp__1__image_type'             | disk                                                                             |
| Property 'com.hp__1__os_distro'              | com.ubuntu                                                                       |
| Property 'com.hp__1__vendor'                 | Canonical                                                                        |
| Property 'com.ubuntu.cloud__1__milestone'    | release                                                                          |
| Property 'com.ubuntu.cloud__1__official'     | True                                                                             |
| Property 'com.ubuntu.cloud__1__published_at' | 2014-06-11T19:07:04                                                              |
| Property 'com.ubuntu.cloud__1__serial'       | 20140606                                                                         |
| Property 'com.ubuntu.cloud__1__stream'       | server                                                                           |
| Property 'com.ubuntu.cloud__1__suite'        | precise                                                                          |
| Property 'com.ubuntu.cloud_images.official'  | True                                                                             |
| Property 'description'                       | Ubuntu Server 12.04 LTS (amd64 20140606) for HP Public Cloud. Ubuntu Server is   |
|                                              | the world's most popular Linux for cloud environments. Updates and patches for   |
|                                              | Ubuntu 12.04 LTS will be available until 2017-04-26. Ubuntu Server is the        |
|                                              | perfect platform for all workloads from web applications to NoSQL databases and  |
|                                              | Hadoop. More information regarding Ubuntu Cloud is available from                |
|                                              | http://www.ubuntu.com/cloud and instructions for using Juju to deploy workloads  |
|                                              | are available from http://juju.ubuntu.com EULA: http://www.ubuntu.com/about      |
|                                              | /about-ubuntu/licensing Privacy Policy: http://www.ubuntu.com/privacy-policy     |
| Property 'os_type'                           | linux-ext4                                                                       |
| Property 'os_version'                        | 12.04                                                                            |
| checksum                                     | a224dbd167ef6648c5b892b2d4b54780                                                 |
| container_format                             | bare                                                                             |
| created_at                                   | 2014-06-11T18:42:12                                                              |
| deleted                                      | False                                                                            |
| disk_format                                  | qcow2                                                                            |
| id                                           | 75d47d10-fef8-473b-9dd1-fe2f7649cb41                                             |
| is_public                                    | True                                                                             |
| min_disk                                     | 8                                                                                |
| min_ram                                      | 0                                                                                |
| name                                         | Ubuntu Server 12.04 LTS (amd64 20140606) - Partner Image                         |
| owner                                        | 10014302369510                                                                   |
| protected                                    | False                                                                            |
| size                                         | 260768256                                                                        |
| status                                       | active                                                                           |
| updated_at                                   | 2014-06-11T19:07:05                                                              |
+----------------------------------------------+----------------------------------------------------------------------------------+


Then, I tried the same command that I had run locally:

$ nova --os-region-name region-b.geo-1 boot --image 75d47d10-fef8-473b-9dd1-fe2f7649cb41 --flavor 102 --key-name mykey --user-data ~/setup_devstack.sh devstack_instance_hp

Everything worked as expected, even nova console-log :)

Next, I decided to test out the Rackspace cloud. I found an ubuntu-12.04 image and booted it the same way I had done locally and on the HP cloud. On the Rackspace cloud I needed to pass --insecure to nova :/ . In addition, it was hard to figure out the credentials, though I opened a help chat and someone was able to provide that info quickly. I recommended that they add a “Download OpenStack RC File” button to the UI to make this easier; hopefully they do.

Anyway, I booted an instance via:

$ nova --insecure --os-region-name ORD boot --flavor 4 --image ffa476b1-9b14-46bd-99a8-862d1d94eb7a --key-name mykey --user-data ~/setup_devstack.sh devstack_rax

Sadly, nova console-log didn’t work here :( , though I do understand that this is an extension to the nova-api and enabling it probably has some added support cost.

$ nova --insecure --os-region-name ORD console-log devstack_rax
ERROR (BadRequest): There is no such action: os-getConsoleOutput (HTTP 400) (Request-ID: req-80a4115d-e61a-4db8-afb1-c17b4e779f49)

Once the VM came up I was able to ssh into it, so the metadata bits did work for getting my ssh key. However, I wasn’t able to find my user-data script within the instance (/var/lib/cloud/instance/user-data.txt) and it hadn’t run. I didn’t do much digging into why, but I guess RAX doesn’t support this yet. Hopefully they will in the future, unless they already do and I’m doing something wrong. I’d be curious to know.

Hopefully you found this useful/interesting! I plan to do a follow up post on the inner workings of the metadata service within OpenStack.


Quick guide to creating an OpenStack bootable image

Every time I go to install a VM from scratch I end up having to google around for the exact commands I need… so I figured I’d do a quick post on how to install your own image that’s bootable via OpenStack.

First acquire an ISO that you want to install. For my image, I’m just using a simple ubuntu-server image.

wget http://releases.ubuntu.com/14.04/ubuntu-14.04-server-amd64.iso

Create a disk image:

qemu-img create -f qcow2 ubuntu-14.04-server.img 30G

Using KVM, launch an instance from the ISO and the disk image we just created (we’re launching the instance with 4096 MB of RAM and 2 processors, though you can choose different values if you want a larger or smaller instance).

kvm -hda ubuntu-14.04-server.img -cdrom ubuntu-14.04-server-amd64.iso -m 4096 -smp 2

A window should pop up where you can walk through the installation steps:

Screenshot from 2014-06-01 20:12:56

When it finishes installing you should be able to upload the image to glance:

glance image-create --name ubuntu-14.04-server --disk-format=qcow2 --container-format=bare --is-public=True < ubuntu-14.04-server.img

Boot it:

nova boot --image ubuntu-14.04-server --flavor 3 vm1

The first time you boot the image it will probably take a few minutes, as the disk image first needs to be copied to the compute node, but eventually it will (*hopefully*) go ACTIVE:

nova list
+--------------------------------------+------+--------+------------+-------------+------------------+
| ID                                   | Name | Status | Task State | Power State | Networks         |
+--------------------------------------+------+--------+------------+-------------+------------------+
| f24839a2-6523-438b-a4c3-b46ddba11389 | vm1  | ACTIVE | -          | Running     | private=10.0.0.4 |
+--------------------------------------+------+--------+------------+-------------+------------------+

I know there are other tools out there that are probably better for this, and Canonical even provides daily bootable images, but this method works fine for me since I rarely have to create a new image. One important thing to note: on some distros you need to delete /etc/udev/rules.d/70-persistent-net.rules (before uploading the image to glance) so that NIC interface ordering starts at eth0. Otherwise it will start at eth1, which might not automatically start a dhcp client on the interface, depending on the guest’s configuration.
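To make the udev point concrete, here’s a toy illustration of that cleanup against a scratch directory standing in for the guest’s root filesystem; inside the real guest you would simply rm the file before shutting down and uploading:

```shell
# Scratch directory standing in for the guest image's root filesystem.
root=$(mktemp -d)
mkdir -p "$root/etc/udev/rules.d"
touch "$root/etc/udev/rules.d/70-persistent-net.rules"

# The actual cleanup step (in the real guest:
#   sudo rm -f /etc/udev/rules.d/70-persistent-net.rules)
rm -f "$root/etc/udev/rules.d/70-persistent-net.rules"

ls "$root/etc/udev/rules.d"   # now empty, so NICs enumerate from eth0 on next boot
```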


Implementing High Availability Instances with Neutron using VRRP

In the Havana release we added a new extension called “allowed-address-pairs” that lets one add additional ips (or cidrs), along with a mac-address, to a port so that traffic matching those values is allowed through. This was needed because by default neutron ports only allow traffic through that matches the mac-address and fixed-ips fields on a port (which is done to enforce anti-spoofing). Because of this, there was no way to support protocols such as VRRP, which require mapping the same ip-address to multiple ports.

In this post I’m going to demo how to use the allowed-address-pairs extension, which is currently supported in the following plugins: OVS, ML2, VMware NSX, BigSwitch, and NEC. Then I’ll demo it working with VRRP using keepalived.

Before we dive in, a little background on how VRRP (Virtual Router Redundancy Protocol) works. Hosts participate in a group that shares a configured ip-address. One node is elected master and is the only node that responds on that address. The other nodes are backups that periodically monitor the current master to ensure it’s still running. If the master goes down, one of the backup nodes takes over as master and starts responding on the shared ip-address. The nice thing about this is that it avoids a single point of failure in the system without requiring a load-balancer if you’re not trying to provide scale-out (though VRRP is often used in conjunction with load-balancers to provide additional HA and scale-out at the same time).
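The election step can be illustrated with a toy sketch (not real VRRP, just the rule it follows): each node advertises a priority, and among the live nodes the highest priority wins mastership. The keepalived configs later in this post use priorities 100 and 50 for exactly this reason:

```shell
# Toy model of VRRP mastership election: pick the live node with the highest
# priority. Nodes are given as name:priority pairs.
pick_master() {
  printf '%s\n' "$@" | sort -t: -k2 -rn | head -n 1 | cut -d: -f1
}

pick_master node1:100 node2:50   # both alive -> node1 is master
pick_master node2:50             # node1 died -> node2 takes over
```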

To get started I’m assuming you already have an OpenStack deployment up and running that is Havana or newer with one of the supported plugins that implement allowed-address-pairs. If you’re unsure if you have this you can find out by querying neutron for the extensions it has loaded and looking for the allowed-address-pairs extension:

$ neutron ext-list
+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| network-gateway       | Neutron-NVP Network Gateway                   |
| security-group        | security-group                                |
| dist-router           | Distributed Router                            |
| router                | Neutron L3 Router                             |
| mac-learning          | MAC Learning                                  |
| port-security         | Port Security                                 |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| quotas                | Quota management support                      |
| agent                 | agent                                         |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| external-net          | Neutron external network                      |
| multi-provider        | Multi Provider Network                        |
| allowed-address-pairs | Allowed Address Pairs                         |
| extraroute            | Neutron Extra Route                           |
| provider              | Provider Network                              |
| nvp-qos               | nvp-qos                                       |
+-----------------------+-----------------------------------------------+

First, we’ll create the network that we’re going to put these hosts on:

$ neutron net-create vrrp-net

Next, we’re going to attach a subnet to that network with a restricted allocation-pool range. We do this because we want to reserve the addresses above 10.0.0.200 specifically for VRRP, and we don’t want neutron to allocate those automatically.

$ neutron subnet-create  --name vrrp-subnet --allocation-pool start=10.0.0.2,end=10.0.0.200 vrrp-net 10.0.0.0/24

Then, we’re going to go ahead and create a router, uplink the vrrp-subnet to it, and attach the router to an upstream network called public:

$ neutron router-create router1
$ neutron router-interface-add router1 vrrp-subnet 
$ neutron router-gateway-set router1 public

Now, we’re going to create a security group called vrrp-sec-group, with rules to allow icmp and tcp ports 80 and 22 ingress:

$ neutron security-group-create vrrp-sec-group
$ neutron security-group-rule-create  --protocol icmp vrrp-sec-group 
$ neutron security-group-rule-create  --protocol tcp  --port-range-min 80 --port-range-max 80 vrrp-sec-group 
$ neutron security-group-rule-create  --protocol tcp  --port-range-min 22 --port-range-max 22 vrrp-sec-group

Next, we’re going to boot two instances using an ubuntu-12.04 image, attached to the vrrp-net network (whose uuid here is 24e92ee1-8ae4-4c23-90af-accb3919f4d1) and the vrrp-sec-group security group:

nova boot --num-instances 2 --image ubuntu-12.04 --flavor 1 --nic net-id=24e92ee1-8ae4-4c23-90af-accb3919f4d1 vrrp-node --security_groups vrrp-sec-group

The instances:

$ nova list
+--------------------------------------+-------------------------------------------------+--------+------------+-------------+--------------------------------------------------------+
| ID                                   | Name                                            | Status | Task State | Power State | Networks                                               |
+--------------------------------------+-------------------------------------------------+--------+------------+-------------+--------------------------------------------------------+
| 15b70af7-2628-4906-a877-39753082f84f | vrrp-node-15b70af7-2628-4906-a877-39753082f84f | ACTIVE | -          | Running     | vrrp-net=10.0.0.3                                      |
| e683e9d1-7eea-48dd-9d3a-a54cf9d9b7d6 | vrrp-node-e683e9d1-7eea-48dd-9d3a-a54cf9d9b7d6 | ACTIVE | -          | Running     | vrrp-net=10.0.0.4                                      |
+--------------------------------------+-------------------------------------------------+--------+------------+-------------+--------------------------------------------------------+

Create a port with an ip in the VRRP range that we left out of the allocation pool:

$ neutron port-create --fixed-ip ip_address=10.0.0.201 --security-group vrrp-sec-group vrrp-net
Created a new port:
+-----------------------+-----------------------------------------------------------------------------------+
| Field                 | Value                                                                             |
+-----------------------+-----------------------------------------------------------------------------------+
| admin_state_up        | True                                                                              |
| allowed_address_pairs |                                                                                   |
| device_id             |                                                                                   |
| device_owner          |                                                                                   |
| fixed_ips             | {"subnet_id": "94a0c371-d37c-4796-821e-57c2a8ec65ae", "ip_address": "10.0.0.201"} |
| id                    | 6239f501-e902-4b02-8d5c-69062896a2dd                                              |
| mac_address           | fa:16:3e:20:67:9f                                                                 |
| name                  |                                                                                   |
| network_id            | 24e92ee1-8ae4-4c23-90af-accb3919f4d1                                              |
| port_security_enabled | True                                                                              |
| security_groups       | 36c8131f-d504-4bcc-b708-f330c9f6b67a                                              |
| status                | DOWN                                                                              |
| tenant_id             | d4e4332d5f8c4a8eab9fcb1345406cb0                                                  |
+-----------------------+-----------------------------------------------------------------------------------+

As you can see, we were allocated a port with the ip-address 10.0.0.201, as requested. We’ll also associate a floatingip with this port now so we can access it publicly:

$ neutron floatingip-create --port-id=6239f501-e902-4b02-8d5c-69062896a2dd public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    | 10.0.0.201                           |
| floating_ip_address | 10.36.12.139                         |
| floating_network_id | 3696c581-9474-4c57-aaa0-b6c70f2529b0 |
| id                  | a26931de-bc94-4fd8-a8b9-c5d4031667e9 |
| port_id             | 6239f501-e902-4b02-8d5c-69062896a2dd |
| router_id           | 178fde65-e9e7-4d84-a218-b1cc7c7b09c7 |
| tenant_id           | d4e4332d5f8c4a8eab9fcb1345406cb0     |
+---------------------+--------------------------------------+

We now need to update the ports attached to our VRRP instances to include this ip-address as an allowed-address-pair so that they will be able to send traffic out using this address. First, find the ports attached to these instances:

$ neutron port-list -- --network_id=24e92ee1-8ae4-4c23-90af-accb3919f4d1
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                         |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
| 12bf9ea4-4845-4e2c-b511-3b8b1ad7291d |      | fa:16:3e:7a:7b:18 | {"subnet_id": "94a0c371-d37c-4796-821e-57c2a8ec65ae", "ip_address": "10.0.0.4"}   |
| 14f57a85-35af-4edb-8bec-6f81beb9db88 |      | fa:16:3e:2f:7e:ee | {"subnet_id": "94a0c371-d37c-4796-821e-57c2a8ec65ae", "ip_address": "10.0.0.2"}   |
| 6239f501-e902-4b02-8d5c-69062896a2dd |      | fa:16:3e:20:67:9f | {"subnet_id": "94a0c371-d37c-4796-821e-57c2a8ec65ae", "ip_address": "10.0.0.201"} |
| 87094048-3832-472e-a100-7f9b45829da5 |      | fa:16:3e:b3:38:30 | {"subnet_id": "94a0c371-d37c-4796-821e-57c2a8ec65ae", "ip_address": "10.0.0.1"}   |
| c080dbeb-491e-46e2-ab7e-192e7627d050 |      | fa:16:3e:88:2e:e2 | {"subnet_id": "94a0c371-d37c-4796-821e-57c2a8ec65ae", "ip_address": "10.0.0.3"}   |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+

Add this address to ports c080dbeb-491e-46e2-ab7e-192e7627d050 and 12bf9ea4-4845-4e2c-b511-3b8b1ad7291d, which belong to our vrrp-node instances (10.0.0.3 and 10.0.0.4):

$ neutron port-update  c080dbeb-491e-46e2-ab7e-192e7627d050 --allowed_address_pairs list=true type=dict ip_address=10.0.0.201
$ neutron port-update  12bf9ea4-4845-4e2c-b511-3b8b1ad7291d --allowed_address_pairs list=true type=dict ip_address=10.0.0.201

The allowed-address-pair 10.0.0.201 now shows up on the port:

$ neutron port-show 12bf9ea4-4845-4e2c-b511-3b8b1ad7291d
+-----------------------+---------------------------------------------------------------------------------+
| Field                 | Value                                                                           |
+-----------------------+---------------------------------------------------------------------------------+
| admin_state_up        | True                                                                            |
| allowed_address_pairs | {"ip_address": "10.0.0.201", "mac_address": "fa:16:3e:7a:7b:18"}                |
| device_id             | e683e9d1-7eea-48dd-9d3a-a54cf9d9b7d6                                            |
| device_owner          | compute:None                                                                    |
| fixed_ips             | {"subnet_id": "94a0c371-d37c-4796-821e-57c2a8ec65ae", "ip_address": "10.0.0.4"} |
| id                    | 12bf9ea4-4845-4e2c-b511-3b8b1ad7291d                                            |
| mac_address           | fa:16:3e:7a:7b:18                                                               |
| name                  |                                                                                 |
| network_id            | 24e92ee1-8ae4-4c23-90af-accb3919f4d1                                            |
| port_security_enabled | True                                                                            |
| security_groups       | 36c8131f-d504-4bcc-b708-f330c9f6b67a                                            |
| status                | ACTIVE                                                                          |
| tenant_id             | d4e4332d5f8c4a8eab9fcb1345406cb0                                                |
+-----------------------+---------------------------------------------------------------------------------+


Log in to our instances and configure VRRP, first by installing keepalived:

sudo apt-get install keepalived

Now, we’ll need to configure keepalived. We’ll need to pick one of our nodes to be the master and the other to be a slave (though you can have multiple slave nodes if you want).

Master configuration:

$ cat  /etc/keepalived/keepalived.conf
vrrp_instance vrrp_group_1 {
 state MASTER
 interface eth0
 virtual_router_id 1
 priority 100
 authentication {
  auth_type PASS
  auth_pass password
 }
 virtual_ipaddress {
  10.0.0.201/24 brd 10.0.0.255 dev eth0
 }
}


Slave Configuration:

$ cat /etc/keepalived/keepalived.conf
vrrp_instance vrrp_group_1 {
 state BACKUP
 interface eth0
 virtual_router_id 1
 priority 50
 authentication {
  auth_type PASS
  auth_pass password
 }
 virtual_ipaddress {
  10.0.0.201/24 brd 10.0.0.255 dev eth0
 }
}

Restart keepalived on both nodes to load the configuration change:

$ sudo service keepalived restart
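To confirm which node currently holds the VIP after (re)starting keepalived, a quick hedged check (interface and address as in the keepalived.conf above):

```shell
# Check whether this node currently holds the keepalived VIP
# (eth0 and 10.0.0.201 match the configs above)
ip -4 addr show dev eth0 | grep -q '10\.0\.0\.201' && echo "VIP active" || echo "VIP not here"
```

On the master you should see the VIP listed as a secondary address on eth0; after a failover it moves to the backup node.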

Install a simple web server to demo this working:

sudo apt-get install apache2

Run the following command to change the server response on each respective node:

$ echo "VRRP-node1" | sudo tee /var/www/index.html
$ echo "VRRP-node2" | sudo tee /var/www/index.html

Issuing curl against the floatingip attached to the instance shows:

$ curl 10.36.12.139
VRRP-node1

Now, we’ll induce a failure by setting the port’s admin-state-up value to False, to show the failover happening automatically:

$ neutron port-update 12bf9ea4-4845-4e2c-b511-3b8b1ad7291d --admin_state_up=False

Curl shows our slave VRRP node responding:

$ curl 10.36.12.139
VRRP-node2

And failing back over:

$ neutron port-update 12bf9ea4-4845-4e2c-b511-3b8b1ad7291d --admin_state_up=True
$ neutron port-update  c080dbeb-491e-46e2-ab7e-192e7627d050 --admin_state_up=False
$ curl  10.36.12.139
VRRP-node1

In this example I created a dedicated port to use with VRRP. I did that purely for flexibility: it gives each VRRP instance its own address plus a separate address that floats between them. One could instead have reused the address of one of the VRRP instances and mapped that as an allowed-address-pair.

If you made it this far hopefully you found this useful :) . As always comments and questions welcome!

VRRP config shamelessly stolen from: http://archive09.linux.com/feature/114005


Testing cross project related patches in gate before merging!

A few months back I was working on a blueprint with Dan Smith that required changes to three different projects (neutron, nova, and python-novaclient). One of the tricks we came up with was modifying devstack so we could run all of our patches through the gate together, testing the cross-project integration before the patches merged. This turned out to be super helpful: it let us quickly shake out several bugs that we otherwise wouldn’t have found until some of the patches had merged. I’ve found it useful a number of other times since, so I figured it would be good to share this tip in case others find it helpful. Here’s the patch set we used to test the cross-project patches before merging: https://review.openstack.org/#/c/78052/ (diff below).

diff --git a/stackrc b/stackrc
index 6bb6f37..21bfb8c 100644
--- a/stackrc
+++ b/stackrc
@@ -152,11 +152,11 @@ KEYSTONECLIENT_BRANCH=${KEYSTONECLIENT_BRANCH:-master}

# compute service
NOVA_REPO=${NOVA_REPO:-${GIT_BASE}/openstack/nova.git}
-NOVA_BRANCH=${NOVA_BRANCH:-master}
+NOVA_BRANCH=${NOVA_BRANCH:-"refs/changes/32/74832/23"}

# python client library to nova that horizon (and others) use
NOVACLIENT_REPO=${NOVACLIENT_REPO:-${GIT_BASE}/openstack/python-novaclient.git}
-NOVACLIENT_BRANCH=${NOVACLIENT_BRANCH:-master}
+NOVACLIENT_BRANCH=${NOVACLIENT_BRANCH:-"refs/changes/63/74763/8"}

# consolidated openstack python client
OPENSTACKCLIENT_REPO=${OPENSTACKCLIENT_REPO:-${GIT_BASE}/openstack/python-openstackclient.git}
@@ -200,7 +200,7 @@ PBR_BRANCH=${PBR_BRANCH:-master}

# neutron service
NEUTRON_REPO=${NEUTRON_REPO:-${GIT_BASE}/openstack/neutron.git}
-NEUTRON_BRANCH=${NEUTRON_BRANCH:-master}
+NEUTRON_BRANCH=${NEUTRON_BRANCH:-"refs/changes/41/78041/6"}

# neutron client
NEUTRONCLIENT_REPO=${NEUTRONCLIENT_REPO:-${GIT_BASE}/openstack/python-neutronclient.git}

As you can see above, devstack lets you set the branch it uses for each project; pointing that value at a gerrit ref change makes devstack pull in your unmerged patch.
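Because stackrc uses the `${VAR:-default}` shell pattern, you don’t even have to edit the file: exporting the variable before running stack.sh overrides the default. A minimal sketch of the mechanism (ref value taken from the diff above):

```shell
# stackrc-style default: keep an existing environment value, else fall back to master
NOVA_BRANCH=${NOVA_BRANCH:-master}
echo "$NOVA_BRANCH"   # "master", unless NOVA_BRANCH was exported beforehand

# Overriding before devstack reads stackrc:
export NOVA_BRANCH="refs/changes/32/74832/23"
NOVA_BRANCH=${NOVA_BRANCH:-master}
echo "$NOVA_BRANCH"   # now the gerrit ref
```

devstack recognizes a refs/changes/... value and fetches that change from gerrit instead of checking out a branch.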

Another place I used this trick was here (https://review.openstack.org/#/c/94462/), which allowed me to test a tempest change in conjunction with a glance change to confirm they worked together before the tempest change merged. Hopefully someone else finds this useful as well!


OpenStack Interface Hot Plugging

Sometimes it’s useful to dynamically add interfaces to, or remove them from, already running instances without having to recreate the instance. For example, you might want to reorganize which networks your instances were originally spawned on, or create a service VM that acts as a router between different subnets.

In Grizzly, this was made possible. To demonstrate it, I’ve created two networks, net1 and net2, as shown:

$ quantum net-list
+--------------------------------------+---------+----------------------------------------------------+
| id                                   | name    | subnets                                            |
+--------------------------------------+---------+----------------------------------------------------+
| d58e9f6b-d9af-468e-a0cb-f4eea18b6065 | net1    | 4671d053-ac63-4b0f-afe8-18d9444ad8c0 10.2.0.0/24   |
| e79f5d1a-289e-44b6-8070-943535cbbeae | net2    | e73a25ca-d211-44b4-a376-64791ea1dabe 10.3.0.0/24   |
+--------------------------------------+---------+----------------------------------------------------+

Create an instance on net1 via:

$ nova boot --image cirros-0.3.1-x86_64-uec --flavor 1 --nic net-id=d58e9f6b-d9af-468e-a0cb-f4eea18b6065 vm1

Nova list shows the running instance and its id:

$ nova list
+--------------------------------------+------+--------+------------+-------------+---------------+
| ID                                   | Name | Status | Task State | Power State | Networks      |
+--------------------------------------+------+--------+------------+-------------+---------------+
| 54ca2943-46e7-4e2e-b470-a015f23797c0 | vm1  | ACTIVE | None       | Running     | net1=10.2.0.3 |
+--------------------------------------+------+--------+------------+-------------+---------------+

Running ifconfig -a in the instance returns the following:

$ ifconfig -a
eth0      Link encap:Ethernet  HWaddr FA:16:3E:59:08:AC  
          inet addr:10.2.0.3  Bcast:10.2.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe59:8ac/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:89 errors:0 dropped:0 overruns:0 frame:0
          TX packets:140 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:14652 (14.3 KiB)  TX bytes:14688 (14.3 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:46 errors:0 dropped:0 overruns:0 frame:0
          TX packets:46 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:4192 (4.0 KiB)  TX bytes:4192 (4.0 KiB)

Now, if we want to attach an additional interface on net2, the following command achieves this:

$ nova interface-attach --net-id e79f5d1a-289e-44b6-8070-943535cbbeae 54ca2943-46e7-4e2e-b470-a015f23797c0 # <-- instance id
$ ifconfig -a
eth0      Link encap:Ethernet  HWaddr FA:16:3E:59:08:AC  
          inet addr:10.2.0.3  Bcast:10.2.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe59:8ac/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:144 errors:0 dropped:0 overruns:0 frame:0
          TX packets:178 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:21919 (21.4 KiB)  TX bytes:21250 (20.7 KiB)

eth1      Link encap:Ethernet  HWaddr FA:16:3E:EA:83:F5  
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:46 errors:0 dropped:0 overruns:0 frame:0
          TX packets:46 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:4192 (4.0 KiB)  TX bytes:4192 (4.0 KiB)
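Note in the output above that the new eth1 has no IP address: hot-plugged NICs come up unconfigured inside the guest. A hedged sketch for a Debian-style image is an /etc/network/interfaces stanza (on cirros you can also just run `sudo udhcpc -i eth1`):

```
auto eth1
iface eth1 inet dhcp
```

followed by `sudo ifup eth1`; the exact tooling varies by guest image.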

As you can see eth1 was added. We can also remove the interface attached to net1 (eth0) via:

$ nova interface-detach  54ca2943-46e7-4e2e-b470-a015f23797c0 d4911c36-2c8d-4dd3-a128-2d7e411ce877 # <--port-uuid

and ifconfig -a shows that it was removed:

$ ifconfig -a
eth1      Link encap:Ethernet  HWaddr FA:16:3E:EA:83:F5  
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:46 errors:0 dropped:0 overruns:0 frame:0
          TX packets:46 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:4192 (4.0 KiB)  TX bytes:4192 (4.0 KiB)

Heads up: if you are running the first RC release of Grizzly, there was a bug that caused this not to work when using quantum, but it was fixed shortly after by https://github.com/openstack/nova/commit/bba57e9fa63b7f55d403d9f6950c4cde425d83b0.


Building a Multi-Tier Application With OpenStack

In this blog post I’m going to give you a walkthrough of how to build a simple multi-tier application on OpenStack. Before I do, I want to give a little background on the features that went into OpenStack during the Grizzly release cycle that made this possible. First I’ll talk about security groups, then about Load-balancer-as-a-Service.

During the Grizzly cycle of OpenStack there was a lot of work on security groups, in particular security groups in Quantum. If you aren’t familiar with Quantum, it is a relatively new OpenStack project that became core in Folsom; its goal is to handle the networking side of OpenStack, so it makes sense for security groups to be implemented there. In Folsom, if one wanted to use security groups, one had to use Nova’s security group implementation, which had a few limitations that we wanted to fix. The first limitation was that Nova’s security group implementation did not work with overlapping ip addresses. In addition, Nova’s security groups did not support egress filtering unless a tenant enforced it themselves within the instance. Egress filtering allows tenants to control which end hosts and protocols their instances are able to initiate communication with, which is useful if one wants to lock down who their instances can communicate with.

The last part of the security group work was to implement the ability for Nova to proxy its security group calls to Quantum. This is important because it allows one to create an instance via Nova and specify security groups in Quantum at the same time, which helps reduce orchestration requirements (i.e., first creating a port in quantum with specific security groups and then telling nova to use that port). In addition, this proxy layer allows existing scripts and tools to continue to work even when using quantum security groups, allows Nova’s EC2 security group implementation to work with quantum, and lastly allows the Nova security group api to work with overlapping ips, as the calls are proxied to Quantum, which handles this.

In addition to security groups, Load-balancer-as-a-Service (LBaaS) was another feature added to Quantum during Grizzly. LBaaS provides the ability to provision on-demand loadbalancers programmatically, which in my opinion is pretty freaking cool! This allows one to create several instances all running the same application and then distribute load across them, in order to scale out an application and provide high availability.

Multi-Tier Application Walk Through

Consider the scenario where a tenant wants a multi-tier application consisting of web servers and database servers. The tenant only wants HTTP port 80 on the web servers to be accessible from the internet, and only allows the web servers to communicate with the database servers over port 3306 to access the database. The tenant also wants a jump box, accessible from the internet, to ssh to and then ssh from there to any of the web servers or database servers if needed. This increases security by not having your web servers and database servers directly ssh-able from the internet. Lastly, the tenant wants requests loadbalanced between the two web servers. To demonstrate this using OpenStack, we’ll use a project called devstack to quickly set up OpenStack.

Get the devstack code:

$ git clone https://github.com/openstack-dev/devstack
$ cd devstack

Next, create a file called localrc, which tells devstack which components to set up and install, and put the content below in it. For this demo we’ll be using the Open vSwitch Quantum plugin.

ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-sch,n-cauth,horizon,mysql,rabbit,sysstat,cinder,c-api,c-vol,c-sch,n-cond,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-lbaas,n-novnc,n-xvnc
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_TOKEN=password
SERVICE_PASSWORD=password
ADMIN_PASSWORD=password

Next, run ./stack.sh and grab a cup of coffee; this will take a few minutes to complete as it downloads all the required packages and code.

In order to start using OpenStack, you’ll need to authenticate as a tenant. Run the following command to put the demo user’s credentials in your environment.

$ source openrc demo demo

The devstack script automatically creates two networks for you, ‘private’ and ‘public’. We’ll use the ‘public’ network later to allocate floating ips. Running quantum net-list shows this:

$ quantum net-list
+--------------------------------------+---------+--------------------------------------------------+
| id                                   | name    | subnets                                          |
+--------------------------------------+---------+--------------------------------------------------+
| 02e0a203-8349-47fc-8d61-8987b4197f1c | public  | c1787d38-fa26-48c9-9f36-d2f563b38c70             |
| 812fb7bd-3ad9-4583-b9f0-55c9ab5a7d55 | private | a00e7146-d77c-4835-93d0-5ab743f3aee6 10.0.0.0/24 |
+--------------------------------------+---------+--------------------------------------------------+

First we’ll create the three security groups we’ll need: web, database, and ssh.

$ quantum security-group-create web
$ quantum security-group-create database
$ quantum security-group-create ssh

Now we’ll add rules into these security groups for their desired functionality.

Allow all HTTP Port 80 traffic to web security group:

$ quantum security-group-rule-create --direction ingress --protocol TCP --port-range-min 80 --port-range-max 80 web

Allow database servers to be accessed from web servers:

$ quantum security-group-rule-create --direction ingress --protocol TCP --port-range-min 3306 --port-range-max 3306 --remote-group-id web database

Allow the jump host to ssh into the database servers and web servers:

$ quantum security-group-rule-create --direction ingress --protocol TCP --port-range-min 22 --port-range-max 22 --remote-group-id ssh database
$ quantum security-group-rule-create --direction ingress --protocol TCP --port-range-min 22 --port-range-max 22 --remote-group-id ssh web

Allow the outside world to access the jumpbox over port 22 for ssh:

$ quantum security-group-rule-create --direction ingress --protocol tcp  --port-range-min 22 --port-range-max 22 ssh

Let’s now boot some vms using these security groups. First, run quantum net-list to obtain the private network uuid that we are going to be using:

$ quantum net-list
+--------------------------------------+---------+--------------------------------------------------+
| id                                   | name    | subnets                                          |
+--------------------------------------+---------+--------------------------------------------------+
| 02e0a203-8349-47fc-8d61-8987b4197f1c | public  | c1787d38-fa26-48c9-9f36-d2f563b38c70             |
| 812fb7bd-3ad9-4583-b9f0-55c9ab5a7d55 | private | a00e7146-d77c-4835-93d0-5ab743f3aee6 10.0.0.0/24 |
+--------------------------------------+---------+--------------------------------------------------+

Then, we’ll run nova image-list to determine the images available to boot our instances with. Since we’re using devstack, the script automatically uploaded an image to glance for us to use.

$ nova image-list
+--------------------------------------+---------------------------------+--------+--------+
| ID                                   | Name                            | Status | Server |
+--------------------------------------+---------------------------------+--------+--------+
| fe2a01fe-202a-4b4d-b98c-2cbcc5c72d18 | cirros-0.3.1-x86_64-uec         | ACTIVE |        |
| 79b967f1-4f4e-4876-a2c0-edf647337e38 | cirros-0.3.1-x86_64-uec-kernel  | ACTIVE |        |
| 4d14dc5d-ef4a-451e-9ef8-bba3f8aaf66d | cirros-0.3.1-x86_64-uec-ramdisk | ACTIVE |        |
+--------------------------------------+---------------------------------+--------+--------+

Boot four instances: two web servers, one database server, and our ssh jump box.

Note: When running nova boot without --nic net-id=, an instance is allocated a port on every network the tenant owns. In this case the tenant owns only one network, so we are leaving off --nic net-id=812fb7bd-3ad9-4583-b9f0-55c9ab5a7d55 for simplicity, so the commands can be copied and pasted.

Boot two instances named web_server1 and web_server2 on the private network, using the cirros image and as part of the web security group:

$ nova boot --image cirros-0.3.1-x86_64-uec --security_groups web --flavor 1 web_server1
$ nova boot --image cirros-0.3.1-x86_64-uec --security_groups web --flavor 1 web_server2

Boot database server:

$ nova boot --image cirros-0.3.1-x86_64-uec --security_groups database --flavor 1 database_server1

Boot ssh jump host:

$ nova boot --image cirros-0.3.1-x86_64-uec --security_groups ssh --flavor 1 jumpbox

Boot a client instance that we’ll use to access the web servers. (Since we did not specify a security group, this instance is placed in the ‘default’ security group, which allows the instance to make outgoing connections to anyone but only accepts incoming connections from members of that same security group.)

$ nova boot --image cirros-0.3.1-x86_64-uec --flavor 1 client

Running nova list will display the status of the instances. After a few seconds all of the instances should go to an ACTIVE status.

$ nova list
+--------------------------------------+------------------+--------+------------------+
| ID                                   | Name             | Status | Networks         |
+--------------------------------------+------------------+--------+------------------+
| d4305afe-4985-45c1-9337-f6fd4f1c83f1 | client           | ACTIVE | private=10.0.0.7 |
| 5c4128a7-3133-43ee-8a98-ba8ff8fda65f | database_server1 | ACTIVE | private=10.0.0.5 |
| 50cbfad8-004a-4385-aa94-23d0a335fffa | jumpbox          | ACTIVE | private=10.0.0.6 |
| ec0ada74-c978-468e-9f5f-251c5dc0c710 | webserver1       | ACTIVE | private=10.0.0.3 |
| c66d0545-44af-476c-8d27-6d45c91e7e71 | webserver2       | ACTIVE | private=10.0.0.4 |
+--------------------------------------+------------------+--------+------------------+

To make the jumpbox publicly accessible on the internet we’ll need to assign a floating IP to it. To do this first create a floating IP via:

$ quantum floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 172.24.4.227                         |
| floating_network_id | 02e0a203-8349-47fc-8d61-8987b4197f1c |
| id                  | d67ec5ae-04e3-4e87-a6a5-7cc04a3d5608 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | d8293c068214472d8008000a79d369da     |
+---------------------+--------------------------------------+

Next, we need to determine the port id of the jumpbox:

$ quantum port-list
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| 4cfc707f-f78a-4bd9-9c27-bb6f8b01dfcc |      | fa:16:3e:03:f7:c4 | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.2"} |
| 5620c0d6-046e-49de-b4ab-5d8948a78cbf |      | fa:16:3e:12:3b:75 | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.5"} |
| 5912c994-be8e-4b9e-8fe9-6bcb1f1c89dd |      | fa:16:3e:ba:04:b7 | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.1"} |
| 5cfcf996-4a3a-4a68-84e5-5c68d06832b4 |      | fa:16:3e:47:2e:72 | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.6"} |
| a219f389-c792-4087-a3a6-d8b9c681c170 |      | fa:16:3e:71:ce:0a | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.4"} |
| edeb0b7b-89f1-4018-9eae-fbf70bcee7ac |      | fa:16:3e:59:c8:ea | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.3"} |
| faa8718a-2efd-443f-a1b0-29c06c281e72 |      | fa:16:3e:5a:9d:a9 | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.7"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+

and find the id that matches the IP address of the jump host (10.0.0.6), then associate it via:

$ quantum floatingip-associate d67ec5ae-04e3-4e87-a6a5-7cc04a3d5608 5cfcf996-4a3a-4a68-84e5-5c68d06832b4
Associated floatingip d67ec5ae-04e3-4e87-a6a5-7cc04a3d5608

Now you should be able to ssh to the jumpbox with the password cubswin:) :

$ ssh cirros@172.24.4.227

(Optional) To access your instances from horizon, point your web browser at the IP address of this box and it should take you to the horizon landing page. Log in via demo/password and then set the current project to ‘demo’ (if not already set) and click on the instances tab. This should display all of your running instances and should allow you to access them via VNC.

After logging into the jumpbox, you’ll be able to ssh into webserver1, webserver2, and the database server via:

$ ssh 10.0.0.3
$ ssh 10.0.0.4
$ ssh 10.0.0.5

but none of those instances will be able to ssh to each other. The point of the jumpbox is that you do not need to make all of your other instances publicly addressable and directly accessible via the internet.
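As a convenience, OpenSSH can do the two hops in one command. A hedged sketch of an ~/.ssh/config entry, using the jumpbox’s floating IP and webserver1’s fixed address from above (requires an ssh client with ProxyCommand/-W support):

```
Host webserver1
    HostName 10.0.0.3
    User cirros
    ProxyCommand ssh -W %h:%p cirros@172.24.4.227
```

With this in place, `ssh webserver1` tunnels through the jumpbox transparently.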

Now let’s log in to web_server1 and web_server2 (via ssh or via horizon) and set up a simple web server that handles requests and replies with who it is:

# On web_server 1 
$ while true; do echo -e 'HTTP/1.0 200 OK\r\n\r\nweb_server1' | sudo nc -l -p 80 ; done 

# on web_server 2
$ while true; do echo -e 'HTTP/1.0 200 OK\r\n\r\nweb_server2' | sudo nc -l -p 80 ; done

Now, log in to your client vm and run:

$ wget -O - http://10.0.0.3/
Connecting to 10.0.0.3 (10.0.0.3:80)
web_server1
               100% |************************************| 12 0:00:00 ETA

$ wget -O - http://10.0.0.4/
Connecting to 10.0.0.4 (10.0.0.4:80)
web_server2
               100% |************************************| 12 0:00:00 ETA

This demonstrates that our simple web server is working on our two web server instances.

We can demonstrate that the web security group is working correctly by killing our simple web server and restarting it on a different port. (Note: to kill the web server you may need to hold Ctrl+C for a second so it breaks out of the while loop before another instance of nc is created.)

# On web_server 1 
$ while true; do echo -e 'HTTP/1.0 200 OK\r\n\r\nweb_server1' | sudo nc -l -p 81 ; done

Now on the client run:

$ wget -O - http://10.0.0.3:81
Connecting to 10.0.0.3 (10.0.0.3:81)
wget: can't connect to remote host (10.0.0.3): Connection timed out

As you can see, the request is never answered, as expected, because our web security group does not allow ingress on port 81. Now let’s set the web server to run on port 80 again.

At this point we’re going to provision a loadbalancer via quantum in order to balance requests between our two web server instances.

First we determine the subnet uuid of the private network:

$ quantum subnet-list
+--------------------------------------+------+-------------+--------------------------------------------+
| id                                   | name | cidr        | allocation_pools                           |
+--------------------------------------+------+-------------+--------------------------------------------+
| a00e7146-d77c-4835-93d0-5ab743f3aee6 |      | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} |
+--------------------------------------+------+-------------+--------------------------------------------+

Create a loadbalancer pool:

$ quantum lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id a00e7146-d77c-4835-93d0-5ab743f3aee6

and then associate our two web servers with this pool.

$ quantum lb-member-create --address 10.0.0.3 --protocol-port 80 mypool
$ quantum lb-member-create --address 10.0.0.4 --protocol-port 80 mypool

Now, let’s create a health monitor, which checks to make sure our instances are still running and associate that with the pool:

$ quantum lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
Created a new health_monitor:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| delay          | 3                                    |
| expected_codes | 200                                  |
| http_method    | GET                                  |
| id             | ba9ed221-848a-4001-be0e-3a0314981396 |
| max_retries    | 3                                    |
| status         | PENDING_CREATE                       |
| tenant_id      | 53b0a7fe38b747b78e84207a34e46857     |
| timeout        | 3                                    |
| type           | HTTP                                 |
| url_path       | /                                    |
+----------------+--------------------------------------+

$ quantum lb-healthmonitor-associate ba9ed221-848a-4001-be0e-3a0314981396 mypool
Associated health monitor ba9ed221-848a-4001-be0e-3a0314981396

Let’s now create a VIP (Virtual IP Address); when it is accessed, the loadbalancer will direct the request to one of the instances, balancing the load between them.

$ quantum lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id  a00e7146-d77c-4835-93d0-5ab743f3aee6 mypool
Created a new vip:
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| address          | 10.0.0.8                             |
| admin_state_up   | True                                 |
| connection_limit | -1                                   |
| description      |                                      |
| id               | e91236dc-b920-4302-bdfa-a6ec029ea128 |
| name             | myvip                                |
| pool_id          | 6aea6769-2cdf-4258-bbe7-60fed0965be9 |
| port_id          | bd9609f3-4c26-4b85-90f0-3e3139fc5c0e |
| protocol         | HTTP                                 |
| protocol_port    | 80                                   |
| status           | PENDING_CREATE                       |
| subnet_id        | a00e7146-d77c-4835-93d0-5ab743f3aee6 |
| tenant_id        | 53b0a7fe38b747b78e84207a34e46857     |
+------------------+--------------------------------------+

Finally, let’s test out the loadbalancer. From the client instance we should be able to run wget against 10.0.0.8 and see that it loadbalances our requests.

$ wget -O - http://10.0.0.8/
Connecting to 10.0.0.8 (10.0.0.8:80)
web_server1
               100% |************************************| 12 0:00:00 ETA

$ wget -O - http://10.0.0.8/
Connecting to 10.0.0.8 (10.0.0.8:80)
web_server2
               100% |************************************| 12 0:00:00 ETA

$ wget -O - http://10.0.0.8/
Connecting to 10.0.0.8 (10.0.0.8:80)
web_server1
               100% |************************************| 12 0:00:00 ETA

$ wget -O - http://10.0.0.8/
Connecting to 10.0.0.8 (10.0.0.8:80)
web_server2
               100% |************************************| 12 0:00:00 ETA

From the output above you can see that the requests are handled by web_server1 and then web_server2 in a round-robin fashion.
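The same check can be scripted; a small sketch run from the client, assuming the VIP at 10.0.0.8 as above. The pipeline counts distinct backend replies, so with ROUND_ROBIN and two members it should print 2:

```shell
# Hit the VIP four times and count how many distinct backends answered
for i in 1 2 3 4; do wget -q -T 2 -t 1 -O - http://10.0.0.8/; done | sort -u | wc -l
```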

Now to make our VIP publicly accessible via the internet we need to create another floating IP:

$ quantum floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 172.24.4.228                         |
| floating_network_id | 02e0a203-8349-47fc-8d61-8987b4197f1c |
| id                  | c1dad50e-fd98-4863-9d42-8d383ad441e4 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | d8293c068214472d8008000a79d369da     |
+---------------------+--------------------------------------+

Determine the port_id for the VIP:

$ quantum port-list
+--------------------------------------+------------------------------------------+-------------------+---------------------------------------------------------------------------------+
| id                                   | name                                     | mac_address       | fixed_ips                                                                       |
+--------------------------------------+------------------------------------------+-------------------+---------------------------------------------------------------------------------+
| 1334c522-14ad-4420-bc1a-53031ed68df9 | vip-37a3eeff-c24b-425f-8f29-c8288e1081c6 | fa:16:3e:e6:87:de | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.8"} |
| 4cfc707f-f78a-4bd9-9c27-bb6f8b01dfcc |                                          | fa:16:3e:03:f7:c4 | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.2"} |
| 5620c0d6-046e-49de-b4ab-5d8948a78cbf |                                          | fa:16:3e:12:3b:75 | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.5"} |
| 5912c994-be8e-4b9e-8fe9-6bcb1f1c89dd |                                          | fa:16:3e:ba:04:b7 | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.1"} |
| 5cfcf996-4a3a-4a68-84e5-5c68d06832b4 |                                          | fa:16:3e:47:2e:72 | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.6"} |
| a219f389-c792-4087-a3a6-d8b9c681c170 |                                          | fa:16:3e:71:ce:0a | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.4"} |
| edeb0b7b-89f1-4018-9eae-fbf70bcee7ac |                                          | fa:16:3e:59:c8:ea | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.3"} |
| faa8718a-2efd-443f-a1b0-29c06c281e72 |                                          | fa:16:3e:5a:9d:a9 | {"subnet_id": "82294d13-5e55-4ae2-bc95-9d4e09ab4ebf", "ip_address": "10.0.0.7"} |
+--------------------------------------+------------------------------------------+-------------------+---------------------------------------------------------------------------------+

Associate VIP port with floating IP:

$ quantum floatingip-associate c1dad50e-fd98-4863-9d42-8d383ad441e4  1334c522-14ad-4420-bc1a-53031ed68df9
Associated floatingip c1dad50e-fd98-4863-9d42-8d383ad441e4

At this point the VIP port is a member of the ‘default’ security group, which does not allow ingress traffic unless the sender is also a member of that group. So we need to update the VIP port to be a member of the web security group, so that requests from the internet (not just from our client instance) are allowed through.
Get the web security group UUID:

$ quantum security-group-list
+--------------------------------------+----------+-------------+
| id                                   | name     | description |
+--------------------------------------+----------+-------------+
| 4a65c786-b7a2-4158-a8fd-8ceff9d40337 | ssh      |             |
| 51eaf14f-d873-4e6b-ac12-648ba0bf4ce0 | web      |             |
| e11e7617-2683-48f2-a474-8634721248d2 | default  | default     |
| e637afb3-831e-4298-8e54-ceddedcd33ee | database |             |
+--------------------------------------+----------+-------------+

Update VIP port to be a member of the web security group:

$ quantum port-update 1334c522-14ad-4420-bc1a-53031ed68df9 --security_groups list=true 51eaf14f-d873-4e6b-ac12-648ba0bf4ce0
Updated port: 1334c522-14ad-4420-bc1a-53031ed68df9

At this point your VIP is publicly addressable:

$ wget -O - http://172.24.4.228
Connecting to 172.24.4.228 (172.24.4.228:80)
web_server1
               100% |************************************| 12 0:00:00 ETA

$ wget -O - http://172.24.4.228
Connecting to 172.24.4.228 (172.24.4.228:80)
web_server2
               100% |************************************| 12 0:00:00 ETA

To demonstrate high availability, we’ll delete our web_server1 instance to simulate a failure.

$ nova delete web_server1

After our health monitor detects this, it will stop sending requests to web_server1, and we can see this happening here as web_server2 handles all the requests:

$ wget -O - http://172.24.4.228
Connecting to 172.24.4.228 (172.24.4.228:80)
web_server2
               100% |************************************| 12 0:00:00 ETA

$ wget -O - http://172.24.4.228
Connecting to 172.24.4.228 (172.24.4.228:80)
web_server2
               100% |************************************| 12 0:00:00 ETA
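Conceptually, the health monitor just removes failed members from the rotation. A simplified sketch of that behavior (again a simulation, not the actual LBaaS driver logic; the health table is hypothetical):

```python
from itertools import cycle

# Hypothetical member table as a health monitor might keep it after its
# periodic checks; web_server1 was just deleted, so its checks fail.
health = {"web_server1": False, "web_server2": True}

# Only members that passed the last health check stay in the rotation.
rotation = cycle([name for name, ok in health.items() if ok])
responses = [next(rotation) for _ in range(3)]
print(responses)  # every request goes to the surviving member
```

With only one healthy member left, the round-robin rotation degenerates to sending everything to web_server2, which matches the wget output above.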

I’d like to give a shout-out to the whole OpenStack community for the amazing amount of work that was done in Grizzly, and I’m excited for what’s in store for Havana!

Posted in openstack, Uncategorized | 10 Comments

Open vSwitch and Libvirt

Recently I started playing with libvirt in order to manage all of my virtual machines, since it’s able to handle most hypervisors through one API. One issue I ran into, though, is that I use Open vSwitch as my bridging mechanism. For those of you who do not know what Open vSwitch is, it’s basically a virtual switch with a ton of cool features like OpenFlow, 802.1Q, sFlow, and many other things. I use Open vSwitch mostly as a software OpenFlow switch for my thesis work. Open vSwitch has something called brcompat, which makes brctl compatible with OVS bridges. That said, it seems like libvirt is not compatible with it. For example, when I used virsh to start a VM and attach it to an OVS bridge it said, “error: Failed to add tap interface to bridge ‘br0’: Invalid argument”. Yet when I ran brctl by hand I could attach the tap interface to br0 just fine. I’m not sure if this is a bug or if I was missing something in my setup.

Anyway, libvirt now has support for interfacing with OVS bridges directly, so I was interested in trying that out. The first step was to get the latest libvirt source code with these new changes. Since the underlying host runs Gentoo Linux, which provides a live ebuild for libvirt, I started with that. Unfortunately that ebuild is not up to date with the current source in the libvirt repo. To fix this I needed to add an overlay, which required the following steps:

mkdir -p /usr/local/portage/profiles/
echo "my_local_overlay" > /usr/local/portage/profiles/repo_name
echo 'PORTDIR_OVERLAY="/usr/local/portage/"' >> /etc/make.conf
mkdir -p /usr/local/portage/app-emulation/libvirt
cp /usr/portage/app-emulation/libvirt/libvirt-9999.ebuild \
    /usr/local/portage/app-emulation/libvirt/
cp -r /usr/portage/app-emulation/libvirt/files \
   /usr/local/portage/app-emulation/libvirt/

Now libvirt-9999.ebuild needed to be corrected. Thankfully someone already figured out what needed to be changed and filed a bug report here (also thanks to hasufell from #gentoo for pointing me to this!), so I was able to easily make those changes and finally compile libvirt successfully.

ebuild libvirt-9999.ebuild manifest
emerge app-emulation/libvirt

(I also emerged virt-manager from its live ebuild, though I don’t think the newest version is required.)

Of course, I could have just cloned libvirt directly from source:

git://libvirt.org/libvirt.git

though I wanted portage to do all of the heavy lifting for me, such as handling dependencies. This also lets me use smart-live-rebuild to pull the latest source code for all of my live ebuilds so I don’t need to manage them all individually.

Now on to actually using Open vSwitch with libvirt. Open vSwitch can be downloaded from:

git clone git://openvswitch.org/openvswitch

After getting the source, follow INSTALL.Linux for instructions on how to build and configure it. First, I created a bridge br0 and attached eth0 to it.

ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0

Next, I changed the interface section of the VM’s configuration file to tell it to use my OVS bridge interface.

virsh edit <domain>

<interface type='bridge'>
    <mac address='52:54:00:43:1f:f4'/>
    <source bridge='br0'/>
    <virtualport type='openvswitch'>
    </virtualport>
</interface>

Now, when the VM is started it will be added to the OVS bridge. As I was saying before, I use Open vSwitch as an OpenFlow switch to experiment with, though I also run several other virtual machines for different tasks on the network (a Cacti/Nagios VM, etc.). In order to experiment with things on the network while keeping the other VMs unaffected, I created a separate ovsbr datapath and attached a patch port pair between ovsbr and br0.

ovs-vsctl add-br ovsbr
ovs-vsctl add-port br0 patch-to-ovsbr
ovs-vsctl set Interface patch-to-ovsbr type=patch
ovs-vsctl set Interface patch-to-ovsbr options:peer=patch-to-br0
ovs-vsctl add-port ovsbr patch-to-br0
ovs-vsctl set Interface patch-to-br0 type=patch
ovs-vsctl set Interface patch-to-br0 options:peer=patch-to-ovsbr
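The key to the commands above is that the two patch ports name each other as peers. A toy model of that relationship (just an illustration of the pairing, not anything OVS actually runs):

```python
# Toy model of the patch-port pair above: each patch interface lives on a
# bridge and names its peer; together the pair acts like a virtual cable
# connecting br0 and ovsbr.
patches = {
    "patch-to-ovsbr": {"bridge": "br0",   "peer": "patch-to-br0"},
    "patch-to-br0":   {"bridge": "ovsbr", "peer": "patch-to-ovsbr"},
}

# The pairing only works if each port's peer points back at it.
symmetric = all(patches[port["peer"]]["peer"] == name
                for name, port in patches.items())
print(symmetric)
```

If either `options:peer` setting is missing or mismatched, the “cable” has a loose end and no traffic flows between the bridges.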

After doing that I also enabled OpenFlow on the ovsbr datapath and pointed it at my OpenFlow controller:

ovs-vsctl set-controller ovsbr tcp:130.127.39.177:6633

This lets me try out crazy things on ovsbr while keeping them isolated from the rest of the VMs attached to br0.

Posted in Uncategorized | Leave a comment

Blog Configuration

I figured my first post on here would be about how I set up this blog. So here it goes… blog.aaronorosen.com runs as a virtual machine using KVM. Instead of setting up and installing the VM directly with the kvm commands, I decided to try out libvirt, since I had not played with it before. Libvirt is a virtualization API that supports KVM, Xen, LXC, OpenVZ, and several other hypervisors.

Since I only have one public IP address outside of the firewall and didn’t want to run this website on it directly, a reverse proxy was needed. For this I chose nginx and was surprised by how fully featured and easy to use it was. Below is what I needed to add to the configuration to make it work.

server {
    listen 80;
    server_name blog.aaronorosen.com;
    access_log /var/log/nginx/blog.aaronorosen.access.log;
    error_log /var/log/nginx/blog.aaronorosen.error.log;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://130.127.39.238;
    }
}

What this does: when a request comes in for blog.aaronorosen.com, nginx looks at the HTTP Host header and forwards the request to the proxy_pass location. The only gotcha I ran into is that I use a vhost entry for the domain on the VM, and unless proxy_set_header is specified the Host header from the request is not forwarded, which caused the vhost entry to not match.
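The Host-header gotcha is easy to see with a toy model of name-based virtual hosting (an illustration only, not nginx internals; the site names are hypothetical):

```python
# The backend web server chooses a site purely from the Host header,
# which is why the proxy must forward it with proxy_set_header.
vhosts = {"blog.aaronorosen.com": "wordpress site"}

def dispatch(host_header):
    # Fall back to the default site when no vhost entry matches.
    return vhosts.get(host_header, "default site")

# Without proxy_set_header, the backend sees the proxy_pass address:
without = dispatch("130.127.39.238")
# With proxy_set_header Host $host, the original name is preserved:
with_header = dispatch("blog.aaronorosen.com")
print(without, "/", with_header)
```

Without the forwarded header the backend falls through to its default site, which is exactly the broken-vhost behavior described above.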

As far as the VM goes, it runs Gentoo Linux, and installing WordPress on it was very easy following this guide. The only issue I ran into is that WordPress uses the URL you configure it with as its / location. I initially configured it using http://130.127.39.238/wordpress, so all of the links on the site were set to that address, which was definitely not what I wanted. To correct this, the following commands needed to be run against the database.

UPDATE wp_options set `option_value` = 'http://blog.aaronorosen.com' where `option_name` = 'siteurl';
UPDATE wp_options set `option_value` = 'http://blog.aaronorosen.com' where `option_name` = 'home';

After that everything was up and running!

Posted in Uncategorized | 1 Comment

Hello World!

So I figured I’d setup a blog where I could share random tidbits of things that I found interesting. More to come later. Stay tuned!

Posted in Uncategorized | 2 Comments