Open Ethernet Networking (OpEN) API Guide and Reference Manual
3.6.0.3
OpenStack VTEP is an ICOS application that provides OpenStack tenants with access to physical devices that are plugged into a properly configured ICOS switch.
OpenStack VTEP supports OpenStack clusters based on the following:
Note:
While it is possible to deploy OpenStack without the use of devstack, we based our work on devstack since it is the easiest way to get a reliable OpenStack cluster up and running for the purposes of evaluating OpenStack VTEP. Some hints on how to integrate with non-devstack deployments will be provided in the installation section.
The following sections describe the configuration of a two-node OpenStack cluster capable of demonstrating OpenStack VTEP. One of the nodes is configured to act as a controller (i.e., it runs OpenStack services such as nova, neutron, and keystone) and both nodes are configured as compute nodes capable of having tenant VMs scheduled on them by the OpenStack scheduler.
In the following, we pull a specific commit of devstack because it is known to work (pulling stable/icehouse was not as stable as the branch name implies, since check-ins can still happen on a branch that is not locked down), and the branch itself is subject to end of life (the current trend appears to be to remove stable branches every two releases).
DBUS and X11 seem to be problematic at times; for best results, ensure that you are logged in to the graphical UI on all hosts in the cluster and that your X11 DISPLAY variable is set in .bashrc or similar:
$ DISPLAY=:0; export DISPLAY
Otherwise, you may run into failures at the start of devstack involving DBUS and X11. This might cause, for example, keystone not to start. The problem was first observed in the Havana timeframe and may be fixed in Icehouse or later, but we recommend setting your DISPLAY variable just in case.
The following is known to work. The key requirement is that the CPU be x86_64, since OpenStack VTEP consists of binaries compiled for that architecture.
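If you are unsure of a host's architecture, one quick check (assuming a standard Linux userland) is the following; the output should be x86_64:
$ uname -m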
127.0.0.1    compute
192.168.3.3  controller
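Assuming these entries live in /etc/hosts on each node, you can verify that the controller name resolves with, for example:
$ getent hosts controller
192.168.3.3     controller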
$ sudo vi /etc/default/grub
Change the line:
GRUB_CMDLINE_LINUX_DEFAULT=""
to
GRUB_CMDLINE_LINUX_DEFAULT="biosdevname=0"
Then:
$ sudo update-grub (sudo update-grub2 is also known to work)
$ sudo reboot
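After the reboot, one way to confirm that the option took effect is to inspect the kernel command line; the output should include biosdevname=0:
$ cat /proc/cmdline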
$ sudo apt-get update
$ sudo vi /etc/network/interfaces

auto eth0 eth1
iface eth0 inet dhcp
iface eth1 inet static
    address 192.168.3.3
    network 192.168.3.0
    netmask 255.255.255.0
$ sudo ifup eth1
$ sudo apt-get install git
$ sudo easy_install --upgrade pip
$ sudo ovs-vsctl show
The last line of the output will display the version number.
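For example, on the cluster described here the final line looked like the following (your version may differ):
ovs_version: "1.10.2"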
$ git clone git://github.com/openstack-dev/devstack.git
$ cd devstack/
$ git checkout 1921be9226315b175ad135f07aeb0a715aaffb24
The above commit is from the icehouse period.
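If you want to double-check what you have checked out, standard git commands can show the commit and its date, for example:
$ git log -1 --format='%H %ad'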
Create a local.conf file in the devstack directory with the following contents:

[[local|localrc]]
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-meta
enable_service neutron
enable_service vtepd
disable_service q-l3

HOST_IP=192.168.3.3
FLAT_INTERFACE=eth1
FIXED_RANGE=10.4.128.0/24
NETWORK_GATEWAY=10.4.128.1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=xyzpdqlazydog

Q_ML2_TENANT_NETWORK_TYPE=vxlan
SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
IMAGE_URLS="http://cloud-images.ubuntu.com/quantal/current/quantal-server-cloudimg-amd64-disk1.img"
Q_PLUGIN=ml2
ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=False
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=False
SCREEN_LOGDIR=/opt/stack/logs

[[post-config|$NOVA_CONF]]
[DEFAULT]
metadata_host=192.168.3.3

[[post-config|$Q_PLUGIN_CONF_FILE]]
[DEFAULT]
tunnel_types=vxlan

[[post-config|$Q_DHCP_CONF_FILE]]
[DEFAULT]
enable_metadata_network=True
enable_isolated_metadata=True
$ ./stack.sh
Important! After the first install, update local.conf to set OFFLINE to True. This will speed up subsequent starts of the controller.
OFFLINE=True
$ ./unstack.sh
$ sudo vi /etc/default/grub
Change the line:
GRUB_CMDLINE_LINUX_DEFAULT="biosdevname=0"
to
GRUB_CMDLINE_LINUX_DEFAULT="biosdevname=0"
Then:
$ sudo update-grub2
$ sudo reboot
$ sudo apt-get update
$ sudo vi /etc/network/interfaces

auto eth0 eth1
iface eth0 inet dhcp
iface eth1 inet static
    address 192.168.3.4
    network 192.168.3.0
    netmask 255.255.255.0
$ sudo reboot
$ sudo apt-get install git
$ sudo easy_install --upgrade pip
$ sudo ovs-vsctl show
The last line of the output will display the version number.
Clone devstack:
$ git clone git://github.com/openstack-dev/devstack.git
$ cd devstack/
$ git checkout 1921be9226315b175ad135f07aeb0a715aaffb24
Create a local.conf file in the devstack directory with the following contents:

[[local|localrc]]
HOST_IP=192.168.3.4
FLAT_INTERFACE=eth1
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
NETWORK_GATEWAY=10.4.128.1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=xyzpdqlazydog
DATABASE_TYPE=mysql
SERVICE_HOST=192.168.3.3
MYSQL_HOST=192.168.3.3
RABBIT_HOST=192.168.3.3
GLANCE_HOSTPORT=192.168.3.3:9292

Q_PLUGIN=ml2

# Timeouts
ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=False

ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=False
SCREEN_LOGDIR=/opt/stack/logs

Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLED_SERVICES=n-cpu,rabbit,n-api,q-agt,vtepd

[[post-config|$NOVA_CONF]]
[DEFAULT]
metadata_host=192.168.3.3

[[post-config|$Q_PLUGIN_CONF_FILE]]
[DEFAULT]
tunnel_types=vxlan
$ ./stack.sh
Important! After the first install, update local.conf to set OFFLINE to True. This will speed up subsequent starts of the compute node.
OFFLINE=True
$ ./unstack.sh
On the controller:
$ cd devstack; source openrc
$ nova-manage service list 2> /dev/null
Binary           Host         Zone  Status   State  Updated_At
nova-compute     openstack1   nova  enabled  :-)    2012-11-12 20:21:00
nova-scheduler   openstack1   nova  enabled  :-)    2012-11-12 20:21:00
nova-compute     openstack3   nova  enabled  :-)    2012-11-12 20:21:32
$ sudo ovs-vsctl show
5ee4d85c-f0c9-4ccc-be1a-a4ea685c1c8e
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap44890bb6-48"
            tag: 1
            Interface "tap44890bb6-48"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-192.168.3.2"
            Interface "vxlan-192.168.3.2"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.3.3", out_key=flow, remote_ip="192.168.3.2"}
    ovs_version: "1.10.2"
VMs can be launched only from the console on the controller. The UUIDs of images and networks change each time the OpenStack cluster is launched, so it is better to automate the process by scripting it.
The following script illustrates multi-tenancy. It creates two tenants, demo1 and demo2, and launches two VMs for each tenant. The demo1 tenant is assigned a network subnet of 10.4.128.0/20, and the demo2 tenant is assigned a network subnet of 10.5.128.0/20.
Type the following bash code into a file (e.g., launch.sh) in the devstack directory, then execute it:
#!/bin/bash
source openrc demo demo
keystone --os-username admin --os-password password tenant-create --name demo1 --description demo1
DEMO1_TENANT=`keystone --os-username admin --os-password password \
    tenant-list | grep " demo1 " | cut -c3-34`
echo "DEMO1_TENANT tenant is $DEMO1_TENANT"
keystone --os-username admin --os-password password tenant-create --name demo2 --description demo2
DEMO2_TENANT=`keystone --os-username admin --os-password password \
    tenant-list | grep " demo2 " | cut -c3-34`
echo "DEMO2_TENANT tenant is $DEMO2_TENANT"
source openrc admin admin
echo "Setting rules" nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 2> /dev/null nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 2> /dev/null nova secgroup-add-rule default udp 1 65535 0.0.0.0/0
function get_field {
    while read data; do
        if [ "$1" -lt 0 ]; then
            field="(\$(NF$1))"
        else
            field="\$$(($1 + 1))"
        fi
        echo "$data" | awk -F'[ ]*\|[ ]*' "{print $field}"
    done
}
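# Illustration only (not part of the original script): get_field extracts the
# Nth column from the pipe-delimited tables printed by the OpenStack CLI
# clients. For example, the following would print the placeholder value abc123:
#     echo "| id | abc123 |" | get_field 2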
demo=`keystone --os-username admin --os-password password \
    tenant-list | grep " demo " | cut -c3-34`
echo "demo tenant is $demo"
nova --os-username demo --os-password password --os-tenant-id $demo \
    keypair-add mykey1 > oskey.priv
echo "demo keypairs"
nova --os-username demo --os-password password --os-tenant-id $demo \
    keypair-list
chmod 600 oskey.priv
source openrc admin admin
ubuntuimage=`nova image-list 2> /dev/null | grep "quantal" | cut -c3-39`
#ubuntuimage=`nova image-list 2> /dev/null | grep " cirros-0.3.2-x86_64-uec " | cut -c3-39`
echo "image is $ubuntuimage"
echo "Detecting the network" net=`neutron net-list --os-username admin --os-password password \ --tenant_id $demo 2> /dev/null | grep private | cut -c3-39`
echo "private network is $net"
# Create users
keystone --os-username admin --os-password password user-create --name demo1 --pass password --email demo1@example.com
DEMO1_USER=`keystone --os-username admin --os-password password \
    user-list | grep " demo1 " | cut -c3-34`
keystone --os-username admin --os-password password user-create --name demo2 --pass password --email demo2@example.com
DEMO2_USER=`keystone --os-username admin --os-password password \
    user-list | grep " demo2 " | cut -c3-34`
# Find UUID for Member role
MEMBER_UID=`keystone --os-username admin --os-password password \
    role-list | grep " Member " | cut -c3-34`
ADMIN_UID=`keystone --os-username admin --os-password password \
    role-list | grep " admin " | cut -c3-34`
# Add users to tenants
keystone --os-username admin --os-password password user-role-add --user $DEMO1_USER --role $MEMBER_UID --tenant $DEMO1_TENANT
# keystone --os-username admin --os-password password user-role-add --user $DEMO1_USER --role $ADMIN_UID --tenant $DEMO1_TENANT
keystone --os-username admin --os-password password user-role-add --user $DEMO2_USER --role $MEMBER_UID --tenant $DEMO2_TENANT
# keystone --os-username admin --os-password password user-role-add --user $DEMO2_USER --role $ADMIN_UID --tenant $DEMO2_TENANT
# Create networks for tenants
source openrc demo1 demo1
nova --os-tenant-id $DEMO1_TENANT \
    keypair-add mykey2 > oskey2.priv
echo "demo1 keypairs"
nova --os-tenant-id $DEMO1_TENANT \
    keypair-list
chmod 600 oskey2.priv
echo "Setting rules" nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 2> /dev/null nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 2> /dev/null nova secgroup-add-rule default udp 1 65535 0.0.0.0/0
source openrc demo2 demo2
nova --os-tenant-id $DEMO2_TENANT \
    keypair-add mykey3 > oskey3.priv
echo "demo2 keypairs"
nova --os-tenant-id $DEMO2_TENANT \
    keypair-list
chmod 600 oskey3.priv
echo "Setting rules" nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 2> /dev/null nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 2> /dev/null nova secgroup-add-rule default udp 1 65535 0.0.0.0/0
source openrc admin admin
NETWORK_GATEWAY=10.4.128.1
FIXED_RANGE=10.4.128.0/20
DEMO1_NET_ID=$(neutron net-create --os-username admin --os-password password \
    --tenant-id $DEMO1_TENANT demo1net | \
    grep ' id ' | get_field 2)
DEMO1_SUBNET_ID=$(neutron subnet-create --os-username admin --os-password password \
    --tenant-id $DEMO1_TENANT --ip_version 4 \
    --gateway $NETWORK_GATEWAY --name demo1subnet \
    $DEMO1_NET_ID $FIXED_RANGE | grep ' id ' | \
    get_field 2)
NETWORK_GATEWAY=10.5.128.1
FIXED_RANGE=10.5.128.0/20
DEMO2_NET_ID=$(neutron net-create --os-username admin --os-password password \
    --tenant-id $DEMO2_TENANT demo2net | \
    grep ' id ' | get_field 2)
DEMO2_SUBNET_ID=$(neutron subnet-create --os-username admin --os-password password \
    --tenant-id $DEMO2_TENANT --ip_version 4 \
    --gateway $NETWORK_GATEWAY --name demo2subnet \
    $DEMO2_NET_ID $FIXED_RANGE | grep ' id ' | \
    get_field 2)
echo "Creating VMs"
source openrc admin admin
nova --os-username demo2 --os-password password --os-tenant-id $DEMO2_TENANT \
    boot --image $ubuntuimage --key_name mykey3 --flavor 2 --nic net-id=$DEMO2_NET_ID test1
nova --os-username demo2 --os-password password --os-tenant-id $DEMO2_TENANT \
    boot --image $ubuntuimage --key_name mykey3 --flavor 2 --nic net-id=$DEMO2_NET_ID test2
source openrc admin admin
nova --os-username demo1 --os-password password --os-tenant-id $DEMO1_TENANT \
    boot --image $ubuntuimage --key_name mykey2 --flavor 2 --nic net-id=$DEMO1_NET_ID test3
nova --os-username demo1 --os-password password --os-tenant-id $DEMO1_TENANT \
    boot --image $ubuntuimage --key_name mykey2 --flavor 2 --nic net-id=$DEMO1_NET_ID test4
echo "Delaying to allow VMs to spin up" sleep 60
The above script does the following:
Some notes on the VMs used:
The cirros image is the standard image most people first use when they try OpenStack. One of the problems with the image is its simplicity; in particular, it does not have the arp command enabled in busybox, so we cannot easily update the arp table as we will do later when trying to ping devices from within a VM. For that reason, we selected a somewhat older version of Ubuntu to run as a VM. We experimented with various images, and this was the first one that we were able to successfully log into via ssh.
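For reference, a typical way to add a static ARP entry from inside a VM that has the arp command is shown below; the IP and MAC address here are placeholders, not values from this setup:
# arp -s 10.4.128.20 fa:16:3e:00:00:01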
The next step is to inventory the VMs and try to ping them from a network namespace. On the controller:
$ source openrc demo1 demo1
$ nova list
(output will show VMs and their IP address assignments)
$ neutron net-list
$ ip netns list
qdhcp-cdc60eee-8cc7-426a-ab8a-ffced6b57ae8
qdhcp-eb2367bd-6e43-4de7-a0ab-d58ebf6e7dc0
(One of the namespaces is for the demo1 tenant; the other is for the demo2 tenant. The UUID following qdhcp- is the ID of the network assigned to the tenant. Use the output of neutron net-list to determine the UUIDs of the networks for the demo1 and demo2 tenants.)
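One way to match a namespace to its tenant (assuming the network names demo1net and demo2net created by the script above) is to grep the net-list output for the network name and compare its ID with the namespace suffix:
$ neutron net-list | grep demo1net
$ neutron net-list | grep demo2net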
$ sudo ip netns exec qdhcp-cdc60eee-8cc7-426a-ab8a-ffced6b57ae8 bash
# ping 10.4.128.9    (use one of the IP addresses displayed in the output of nova list)
PING 10.4.128.9 (10.4.128.9) 56(84) bytes of data.
64 bytes from 10.4.128.9: icmp_req=1 ttl=64 time=10.6 ms
64 bytes from 10.4.128.9: icmp_req=2 ttl=64 time=3.14 ms
To ping VMs for the other tenant, repeat the above steps, but you must authenticate as the demo2 user, use the qdhcp namespace for the demo2 tenant, and ping IP addresses assigned to the VMs of the demo2 tenant. For example:
$ source openrc demo2 demo2
$ nova list
(output will show VMs and their IP address assignments, which may be different than for the demo1 tenant)
$ neutron net-list
$ ip netns list
qdhcp-cdc60eee-8cc7-426a-ab8a-ffced6b57ae8
qdhcp-eb2367bd-6e43-4de7-a0ab-d58ebf6e7dc0
$ sudo ip netns exec qdhcp-eb2367bd-6e43-4de7-a0ab-d58ebf6e7dc0 bash
# ping 10.5.128.9    (use one of the IP addresses displayed in the output of nova list)
PING 10.5.128.9 (10.5.128.9) 56(84) bytes of data.
64 bytes from 10.5.128.9: icmp_req=1 ttl=64 time=10.6 ms
64 bytes from 10.5.128.9: icmp_req=2 ttl=64 time=3.14 ms
This step involves logging in to one of the Ubuntu VMs via SSH and pinging one of the other VMs. To ping from within a VM, you must first ssh into it. While within the appropriate namespace (use oskey2.priv for the demo1 tenant and oskey3.priv for the demo2 tenant):
$ ssh -i oskey2.priv root@10.5.128.5
...
$ ping 10.5.128.6