Open Ethernet Networking (OpEN) API Guide and Reference Manual
3.6.0.3
OpenStack VTEP is an ICOS application that provides OpenStack tenants access to physical devices that are plugged into a properly configured ICOS switch.
OpenStack VTEP consists of vtepc, which runs on a controller node in the OpenStack cluster, and vtepd, which runs on all compute nodes in the cluster as well as on OpenStack VTEP switches.
This section assumes that an OpenStack cluster compatible with OpenStack VTEP has been configured, that VMs can be created, and that you can ping and ssh into VMs from a namespace.
Furthermore, it is assumed that devstack was used to install OpenStack on all cluster nodes.
The instructions below detail how to install vtepd and vtepc as a side cluster to the OpenStack cluster. Separate instructions are provided for compute nodes, controller, and ICOS switches.
vtepd is a Python application. For this reason, Python must be installed on the switch via RPMs that are included with the OpEN ADK. For instructions, please refer to RPM Installation. You can verify the availability of Python from the switch console as follows:
User:admin
Password:
(Routing) >enable
(Routing) #linuxsh
Trying 127.0.0.1...
Connected to 127.0.0.1
Linux System Login
# python
Python 2.6.6 (r266:84292, Aug 2 2013, 14:16:39)
[GCC 4.6.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> quit()
#
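As an alternative to the interactive check above, a short script can confirm that the switch's interpreter is usable. This is an illustrative sketch, not part of the OpEN ADK; the name check_python.py is hypothetical. The 2.6 minimum reflects the Python 2.6.6 interpreter shown in the transcript above.

```python
# check_python.py -- illustrative check that the switch's Python interpreter
# works and is at least version 2.6 (the version shown on the switch above).
# Run from the linuxsh prompt with: python check_python.py
import sys

if sys.version_info < (2, 6):
    # Write the error to stderr and exit nonzero so scripts can detect failure.
    sys.stderr.write("Python 2.6 or later required, found %s\n"
                     % sys.version.split()[0])
    sys.exit(1)

# Print the interpreter version on success.
print("Python %s OK" % sys.version.split()[0])
```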
OpenStack VTEP is distributed as a tarball named openstack_vtep.tar.gz, which is part of the OpEN ADK. This tarball can be used to install OpenStack VTEP components on controller and compute nodes, as well as on the switches. The following sections describe the installation process for each type of install. Each install will prompt for paths; when asked, please provide absolute pathnames (i.e., paths must start with "/").
$ tar -zxvf openstack_vtep.tar.gz
$ cd install/controller
$ sudo sh install.sh
$ tar -zxvf openstack_vtep.tar.gz
$ cd install/compute
$ sudo sh install.sh
Note: In the sample cluster configuration, our physical controller node is configured as an OpenStack controller and compute node, so both vtepd and vtepc must be installed on it.
The following steps are for switches that boot into ICOS and provide access to the Linux shell via the #linuxsh command.
User:admin
Password:
(Routing) >enable
(Routing) #linuxsh
Trying 127.0.0.1...
Connected to 127.0.0.1
Linux System Login
#
$ tar -zxvf openstack_vtep.tar.gz
$ cd install/switch
$ sudo sh install.sh
The install script applies a patch to the devstack "stack.sh" script that causes vtepd to launch as part of running the stack.sh command. vtepd is shut down when the unstack.sh command is run.
Note that vtepd is assigned its own screen window. After stack.sh completes, execute "rejoin-stack.sh" and press Ctrl-a n to navigate to the vtepd window and view any output that vtepd generates.
On an OpenStack VTEP switch, vtepd can be started from the linuxsh prompt:
User:admin
Password:
(Routing) >enable
(Routing) #linuxsh
Trying 127.0.0.1...
Connected to 127.0.0.1
Linux System Login
# /etc/init.d/openstack_vtep.rc start
...
2014-10-21T11:25:59Z|71|ovs-vtep|INFO|adding 33 to br-int
2014-10-21T11:25:59Z|72|ovs-vtep|INFO|adding 55 to br-int
2014-10-21T11:25:59Z|73|ovs-vtep|INFO|adding 74 to br-int
2014-10-21T11:25:59Z|74|ovs-vtep|INFO|adding 37 to br-int
2014-10-21T11:25:59Z|75|ovs-vtep|INFO|adding 47 to br-int
2014-10-21T11:25:59Z|76|ovs-vtep|INFO|adding 17 to br-int
2014-10-21T11:25:59Z|77|ovs-vtep|INFO|adding 57 to br-int
2014-10-21T11:25:59Z|78|ovs-vtep|INFO|adding 50 to br-int
...
To stop vtepd on a switch:
# /etc/init.d/openstack_vtep.rc stop
Configure /etc/vtepc/vtepc_config.py:
#*********************************************************************
#
# (C) Copyright Broadcom 2016
#
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#***********************************************************************
gateways = [{"ip": "192.168.3.100"}]

tenants = [{"name": "demo1", "password": "password"},
           {"name": "demo2", "password": "password"}]

devices = [
    {"name": "Ubuntu 12.04.1", "description": "ecosystem wiki host",
     "mac": "90-e2-ba-19-b1-ed", "port": "29", "gateway": "192.168.3.100"},
    {"name": "Ubuntu 12.04.2", "description": "ecosystem wiki host",
     "mac": "90-e2-ba-19-b1-ec", "port": "30", "gateway": "192.168.3.100"},
]

tenantdevmap = [
    {"tenant": "demo1", "devices": ["Ubuntu 12.04.1"]},
    {"tenant": "demo2", "devices": ["Ubuntu 12.04.2"]},
]
local_interface = "eth4"
The above configures the tenants (demo1 and demo2) and the devices that these tenants have access to. At a minimum, you must change the MAC addresses of the devices plugged into ports 29 and 30 (you might need to change the port numbers as well if your cabling does not match what is documented here).
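Because mistakes in vtepc_config.py (a mistyped MAC address, or a tenantdevmap entry naming a device that does not exist) only surface at runtime, it can help to sanity-check the file after editing it. The following is a minimal sketch, not part of OpenStack VTEP; it only assumes the field names shown in the sample configuration above (gateways, tenants, devices, tenantdevmap) and the hyphen-separated MAC format used there.

```python
# validate_vtep_config.py -- hypothetical sanity check for the data in
# /etc/vtepc/vtepc_config.py. Not part of OpenStack VTEP; field names are
# taken from the sample configuration shown above.
import re

# MAC format as used in the sample config: six hex pairs separated by hyphens.
MAC_RE = re.compile(r"^([0-9a-fA-F]{2}-){5}[0-9a-fA-F]{2}$")

def validate(gateways, tenants, devices, tenantdevmap):
    """Return a list of problem descriptions (empty if the config looks sane)."""
    problems = []
    gateway_ips = set(g["ip"] for g in gateways)
    tenant_names = set(t["name"] for t in tenants)
    device_names = set(d["name"] for d in devices)

    for d in devices:
        if not MAC_RE.match(d["mac"]):
            problems.append("bad MAC for device %s: %s" % (d["name"], d["mac"]))
        if d["gateway"] not in gateway_ips:
            problems.append("unknown gateway for device %s: %s"
                            % (d["name"], d["gateway"]))

    for m in tenantdevmap:
        if m["tenant"] not in tenant_names:
            problems.append("unknown tenant in tenantdevmap: %s" % m["tenant"])
        for name in m["devices"]:
            if name not in device_names:
                problems.append("tenantdevmap names unknown device: %s" % name)

    return problems
```

For example, running validate() over the sample values above returns an empty list, while a device entry with a malformed MAC or a gateway that is not listed in gateways produces a description of each problem.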
On controller nodes, vtepc can be run from the command line as follows:
$ sudo vtepc.py
To stop vtepc, press Ctrl-C, or issue "sudo killall vtepc" from a separate terminal.