Open Ethernet Networking (OpEN) API Guide and Reference Manual
3.6.0.3
OpenStack VTEP is an ICOS application that provides OpenStack tenants access to physical devices that are plugged into a properly configured ICOS switch.
OpenStack VTEP supports OpenStack clusters based on the following:
Note:
While it is possible to deploy OpenStack without the use of devstack, we based our work on devstack since it is the easiest way to get a reliable OpenStack cluster up and running for the purposes of evaluating OpenStack VTEP. Some hints on how to integrate with non-devstack deployments will be provided in the installation section.
OpenStack VTEP is software installed in the OpenStack cluster to integrate physical switches into OpenStack so that configured tenant VMs can reach physical devices plugged into the switches. In this way, OpenStack VTEP provides cloud administrators with the means for integrating legacy physical devices into OpenStack tenant networks.
OpenStack VTEP is a side cluster that runs alongside OpenStack. On nodes that run OpenStack Nova to provide compute services, a companion process named vtepd runs. Its job is to ensure that packets issued by VMs and intended for a device on a gateway switch are properly delivered to that device. It is also responsible for ensuring that packets arriving from a switch and destined for a tenant VM reach that VM. It does this by creating tunnels between the OVS virtual switch that tenant VMs are plugged into and physical switches running ICOS that support VXLAN overlay networks. The vtepd daemon also runs on the switch that hosts the physical device. On the switch, vtepd creates tunnels in the opposite direction, originating at the switch and terminating on the OpenStack compute nodes where tenant VMs are executing.
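To make the compute-node side concrete, the following is a minimal sketch of how a daemon such as vtepd could create one of these tunnels by shelling out to ovs-vsctl, as described above. The bridge name, port name, remote VTEP address, and VNID are illustrative values only, and the way vtepd actually patches the tunnel into OpenStack's bridges may differ in detail.

import subprocess

def create_vxlan_tunnel(bridge, port_name, remote_ip, vnid):
    # Create (or keep, if it already exists) a VXLAN tunnel port on an OVS
    # bridge. The tunnel key carries the VNID so that traffic stays within
    # the tenant's overlay network.
    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", bridge, port_name,
        "--", "set", "interface", port_name,
        "type=vxlan",
        "options:remote_ip=%s" % remote_ip,
        "options:key=%s" % vnid,
    ])

# Illustrative call: a tunnel from this compute node toward a VTEP switch at
# 192.0.2.10, carrying traffic for VNID 1001 on the integration bridge.
create_vxlan_tunnel("br-int", "vxlan-gw0", "192.0.2.10", 1001)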
On an OpenStack controller node, a second process, vtepc, runs alongside OpenStack. Using the APIs exposed by OpenStack, it is aware of the OpenStack cluster and its topology, tenants, and networking configuration. It uses this information, along with an administrator-defined configuration, to determine which tunnels must exist between compute nodes in the OpenStack cluster and OpenStack VTEP-enabled switches. vtepc commands the vtepd instances running on compute nodes and VTEP switches to create and destroy tunnels to each other. It does this by writing data into a database running on the affected compute nodes and VTEP switches. vtepd on a compute node reacts to changes in the database by creating and destroying tunnels patched to the Open vSwitch integration bridges created by OpenStack. On the VTEP switch, vtepd interacts with the OpEN overlay API to create and destroy tunnels.
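The schema of that database is not shown in this section; the snippet below only illustrates the kind of per-tunnel record vtepc might push to a node, and every field name here is an assumption made for illustration rather than the actual OpenStack VTEP format.

# Hypothetical shape of a tunnel record written by vtepc to a compute node or
# VTEP switch; the real database schema used by vtepd may differ entirely.
tunnel_record = {
    "tenant_id": "demo-tenant-uuid",  # OpenStack tenant whose VMs use the tunnel
    "vnid": 1001,                     # VXLAN network identifier derived from the tenant ID
    "local_vtep_ip": "192.0.2.21",    # tunnel source on this node
    "remote_vtep_ip": "192.0.2.10",   # tunnel destination (gateway switch or compute node)
}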
Suppose, for example, that the OpenStack tenant "demo" has been configured to access a physical host that is plugged into a port on a VTEP switch in the cluster. The administrator would express this in configuration with a small amount of Python code that maps the "demo" tenant to the switch and port where the device is located, along the lines of the sketch below.
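The exact configuration format is not reproduced here; the following only suggests what such a mapping could look like, with the variable name, switch hostname, and port identifier all assumed for illustration.

# Hypothetical mapping from tenant name to the VTEP switch and port where the
# tenant's physical device is attached. Structure and names are illustrative.
TENANT_DEVICE_MAP = {
    "demo": [
        {"switch": "icos-vtep-1",  # VTEP switch hosting the device
         "port": "0/12"},          # switch port the physical host is plugged into
    ],
}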
vtepc uses this configuration to determine where tunnels need to exist in the OpenStack cluster. It does this by determining which nodes running Nova are currently running VMs that belong to the tenant. If a node has a tenant VM instance, vtepc connects to the database on that node and populates it with the data necessary for vtepd to create a tunnel in the direction of the configured VTEP switch. Part of the data written to the database is the tenant ID; this tenant ID is unique to the "demo" tenant and is used as the VNID in the tunnel packets that are sent between the compute node and the gateway. vtepd calls Open vSwitch via the command line to create the tunnel and patch it into the OpenStack integration bridge so that the tunnel is reachable by "demo" tenant VMs. vtepc also connects to the OpenStack VTEP switch and writes the data needed by vtepd on the switch to create a "demo" tunnel to each of the hosts running "demo" VMs. On the switch, vtepd uses the OpEN overlay API, not Open vSwitch, to create these tunnels.
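For illustration, the sketch below shows one way the compute hosts currently running a tenant's VMs could be discovered with python-novaclient and admin credentials. This is an assumption about the approach, not OpenStack VTEP's actual implementation; the function name, endpoint, and credentials are placeholders.

from novaclient import client as nova_client

def hosts_running_tenant_vms(auth_url, username, password, project, tenant_id):
    # List all servers owned by the tenant (requires admin rights), then
    # collect the compute hosts they are scheduled on.
    nova = nova_client.Client("2", username, password, project, auth_url)
    servers = nova.servers.list(
        search_opts={"all_tenants": 1, "tenant_id": tenant_id})
    return {getattr(s, "OS-EXT-SRV-ATTR:host") for s in servers}

# vtepc-style usage (illustrative values): each returned host would receive a
# database entry telling its vtepd to build a tunnel toward the VTEP switch.
hosts = hosts_running_tenant_vms(
    "http://controller:5000/v2.0", "admin", "secret", "admin",
    "demo-tenant-uuid")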