1.6.4. OpenStack Stein, one compute with two VMs, Open vSwitch / VXLAN, iperf traffic¶
Two VMs are started on the compute node. Open vSwitch is used as the mechanism driver and VXLAN as the tenant network type.
Virtual Accelerator is launched using the default configuration. In this case, 1 core is used (2 logical cores).
OpenStack Stein is installed following the official online documentation: http://docs.openstack.org/stein/
OpenStack is an open source cloud computing platform; some familiarity with its vocabulary is needed to follow this guide.
See also
The overview section of the OpenStack documentation.
Two links are used: one for the OpenStack control traffic (10.100.0.0/24) and one for the VXLAN traffic (192.168.0.0/24). No public interface is configured. In this case, the controller and network nodes are the same machine.
We use the following prompts in this section:
root@controller:~# #=> controller node
root@compute:~# #=> compute node
root@vm1:~# #=> nova virtual machine 1 (on compute)
root@vm2:~# #=> nova virtual machine 2 (on compute)
OpenStack and Virtual Accelerator installation¶
Networking configuration¶
On controller node, set the persistent networking configuration (the management address is also exported as MGMT_IP, which is reused later in the nova configuration):
root@controller:~# echo >> /etc/network/interfaces
root@controller:~# echo "source /etc/network/interfaces.d/*" >> /etc/network/interfaces
root@controller:~# echo 'auto eth0' >> /etc/network/interfaces.d/eth0
root@controller:~# echo 'iface eth0 inet static' >> /etc/network/interfaces.d/eth0
root@controller:~# echo ' address 10.100.0.11' >> /etc/network/interfaces.d/eth0
root@controller:~# echo ' netmask 255.255.255.0' >> /etc/network/interfaces.d/eth0
root@controller:~# MGMT_IP=10.100.0.11
root@controller:~# CTRL_TUN_IP=192.168.0.1
root@controller:~# echo 'auto eth2' >> /etc/network/interfaces.d/eth2
root@controller:~# echo 'iface eth2 inet static' >> /etc/network/interfaces.d/eth2
root@controller:~# echo " address $CTRL_TUN_IP" >> /etc/network/interfaces.d/eth2
root@controller:~# echo ' netmask 255.255.255.0' >> /etc/network/interfaces.d/eth2
root@controller:~# service networking restart
root@controller:~# echo "10.100.0.11 controller" >> /etc/hosts
root@controller:~# echo "10.100.0.31 compute" >> /etc/hosts
On compute node, set the persistent networking configuration (again exporting the management address as MGMT_IP for later use):
root@compute:~# echo >> /etc/network/interfaces
root@compute:~# echo "source /etc/network/interfaces.d/*" >> /etc/network/interfaces
root@compute:~# echo 'auto eth0' >> /etc/network/interfaces.d/eth0
root@compute:~# echo 'iface eth0 inet static' >> /etc/network/interfaces.d/eth0
root@compute:~# echo ' address 10.100.0.31' >> /etc/network/interfaces.d/eth0
root@compute:~# echo ' netmask 255.255.255.0' >> /etc/network/interfaces.d/eth0
root@compute:~# MGMT_IP=10.100.0.31
root@compute:~# CMPT_TUN_IP=192.168.0.2
root@compute:~# echo 'auto eth2' >> /etc/network/interfaces.d/eth2
root@compute:~# echo 'iface eth2 inet static' >> /etc/network/interfaces.d/eth2
root@compute:~# echo " address $CMPT_TUN_IP" >> /etc/network/interfaces.d/eth2
root@compute:~# echo ' netmask 255.255.255.0' >> /etc/network/interfaces.d/eth2
root@compute:~# service networking restart
root@compute:~# echo "10.100.0.11 controller" >> /etc/hosts
root@compute:~# echo "10.100.0.31 compute" >> /etc/hosts
At this point, check connectivity between controller and compute:
root@controller:~# ping compute -c 1
root@controller:~# ping 192.168.0.2 -c 1
root@compute:~# ping controller -c 1
root@compute:~# ping 192.168.0.1 -c 1
On compute node, activate hugepages:
root@compute:~# echo 16 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
root@compute:~# mkdir -p /dev/hugepages
root@compute:~# mount -t hugetlbfs none /dev/hugepages
root@compute:~# cat /proc/meminfo
...
HugePages_Total:      16
HugePages_Free:       16
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
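Note
Hugepages allocated at runtime this way do not survive a reboot. A minimal sketch to make them persistent, assuming a GRUB2-based system whose CPU supports 1 GB pages (pdpe1gb flag):
root@compute:~# # append to GRUB_CMDLINE_LINUX in /etc/default/grub:
root@compute:~# #   default_hugepagesz=1G hugepagesz=1G hugepages=16
root@compute:~# grub2-mkconfig -o /boot/grub2/grub.cfg
root@compute:~# echo "hugetlbfs /dev/hugepages hugetlbfs pagesize=1G 0 0" >> /etc/fstab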
Security setups¶
On all machines, create /root/passwords.env:
root@all:~# echo 'export DB_PASS=admin' > /root/passwords.env
root@all:~# echo 'export RABBIT_PASS=guest' >> /root/passwords.env
root@all:~# echo 'export KEYSTONE_DBPASS=keystone' >> /root/passwords.env
root@all:~# echo 'export ADMIN_PASS=admin' >> /root/passwords.env
root@all:~# echo 'export DEMO_PASS=admin' >> /root/passwords.env
root@all:~# echo 'export GLANCE_DBPASS=glance' >> /root/passwords.env
root@all:~# echo 'export GLANCE_PASS=glance' >> /root/passwords.env
root@all:~# echo 'export NOVA_DBPASS=nova' >> /root/passwords.env
root@all:~# echo 'export NOVA_PASS=nova' >> /root/passwords.env
root@all:~# echo 'export PLACEMENT_PASS=placement' >> /root/passwords.env
root@all:~# echo 'export PLACEMENT_DBPASS=placement' >> /root/passwords.env
root@all:~# echo 'export DASH_DBPASS=dash' >> /root/passwords.env
root@all:~# echo 'export NEUTRON_DBPASS=neutron' >> /root/passwords.env
root@all:~# echo 'export NEUTRON_PASS=neutron' >> /root/passwords.env
root@all:~# echo 'export METADATA_SECRET=neutron' >> /root/passwords.env
Source the created file:
See also
The environment security section of the OpenStack documentation.
root@all:~# source /root/passwords.env
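The exported variables only live in the current shell. If you open new sessions later, source the file again, or, as a convenience (not required by this guide), have root's shell do it automatically:
root@all:~# echo "source /root/passwords.env" >> /root/.bashrc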
Preliminaries and environment packages¶
Install crudini and make it available as openstack-config:
root@all:~# yum install -y crudini
root@all:~# ln -s /usr/bin/crudini /usr/bin/openstack-config
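crudini edits INI files by section, parameter and value, which is how openstack-config is used throughout this guide. A quick illustration on a throwaway file (hypothetical path and values, shown only for the syntax):
root@all:~# openstack-config --set /tmp/example.conf DEFAULT verbose true
root@all:~# cat /tmp/example.conf
[DEFAULT]
verbose = true
root@all:~# openstack-config --get /tmp/example.conf DEFAULT verbose
true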
On all machines, update the distribution and install the environment packages:
See also
The environment packages section of the OpenStack documentation.
root@all:~# yum install -y centos-release-openstack-stein
root@all:~# yum update -y
root@all:~# yum install -y python-openstackclient
root@all:~# yum install -y openstack-selinux
On all machines, install qemu and libvirt:
root@all:~# yum install -y qemu-kvm libvirt libvirt-python
On the controller, install and configure the SQL database packages:
See also
The SQL database section of the OpenStack documentation.
root@controller:~# yum install -y mariadb mariadb-server python2-PyMySQL
root@controller:~# MYSQL_FILE=/etc/my.cnf.d/openstack.cnf
root@controller:~# touch $MYSQL_FILE
root@controller:~# crudini --set $MYSQL_FILE mysqld bind-address 0.0.0.0
root@controller:~# crudini --set $MYSQL_FILE mysqld default-storage-engine innodb
root@controller:~# crudini --set $MYSQL_FILE mysqld innodb_file_per_table on
root@controller:~# crudini --set $MYSQL_FILE mysqld collation-server utf8_general_ci
root@controller:~# crudini --set $MYSQL_FILE mysqld character-set-server utf8
root@controller:~# crudini --set $MYSQL_FILE mysqld max_connections 4096
root@controller:~# systemctl enable mariadb
root@controller:~# systemctl start mariadb
root@controller:~# mysqladmin -u root password $DB_PASS
On the controller, install and configure the message queue service:
See also
The message queue service section of the OpenStack documentation.
root@controller:~# yum install -y rabbitmq-server
root@controller:~# systemctl enable rabbitmq-server
root@controller:~# systemctl start rabbitmq-server
root@controller:~# rabbitmqctl add_user openstack $RABBIT_PASS
root@controller:~# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
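Optionally, verify the broker setup by listing the created user and its permissions:
root@controller:~# rabbitmqctl list_users
root@controller:~# rabbitmqctl list_permissions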
Add the Identity service¶
This section describes how to install and configure the keystone OpenStack Identity service on the controller node.
See also
The identity service section of the OpenStack documentation.
Install and configure the OpenStack Identity service:
Create the keystone database, and grant access to the keystone user:
root@controller:~# mysql -uroot -padmin -e "CREATE DATABASE keystone;"
root@controller:~# mysql -uroot -padmin -e \
"GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '${KEYSTONE_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e \
"GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '${KEYSTONE_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e \
"GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY '${KEYSTONE_DBPASS}';"
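Optionally, check that the keystone user can reach its database before going further (a quick sanity test using the grants just created):
root@controller:~# mysql -ukeystone -p$KEYSTONE_DBPASS -h controller -e "USE keystone;"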
Install and configure keystone, Apache and the other required packages:
root@controller:~# yum install -y openstack-keystone httpd mod_wsgi
root@controller:~# alias ks-conf=\
'openstack-config --set /etc/keystone/keystone.conf'
root@controller:~# ks-conf database connection \
mysql+pymysql://keystone:$KEYSTONE_DBPASS@controller/keystone
root@controller:~# ks-conf token provider fernet
root@controller:~# su -s /bin/sh -c "keystone-manage db_sync" keystone
root@controller:~# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
root@controller:~# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
root@controller:~# keystone-manage bootstrap --bootstrap-password $ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
Configure the HTTP server:
Configure the ServerName option to reference the controller node:
root@controller:~# echo "ServerName controller" >> /etc/httpd/conf/httpd.conf
Enable the Identity service virtual host:
root@controller:~# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Restart the HTTP server:
root@controller:~# systemctl enable httpd.service
root@controller:~# systemctl start httpd.service
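Optionally, check that the Identity endpoint answers over HTTP; the version document should be returned as JSON:
root@controller:~# curl http://controller:5000/v3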
Create projects, users, and roles:
root@controller:~# openstack project create --domain default \
  --description "Admin Project" admin
root@controller:~# openstack user create --domain default \
  --password $ADMIN_PASS admin
root@controller:~# openstack role create admin
root@controller:~# openstack role add --project admin --user admin admin
root@controller:~# openstack project create --domain default \
  --description "Service Project" service
Check that the keystone service is correctly installed:
root@controller:~# openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name default \
  --os-user-domain-name default \
  --os-project-name admin \
  --os-username admin \
  --os-auth-type password \
  --os-password $ADMIN_PASS token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-04-06T09:14:54.963320Z      |
| id         | fbc6f7f0163f407c9b84cd067bc4acae |
| project_id | 3e3b085aa3ea445f8beabc369963d63b |
| user_id    | 79aff2d16d54402c80b6594b8507b0d4 |
+------------+----------------------------------+
Create OpenStack client environment scripts:
Export the OpenStack credentials:
root@controller:~# echo "export OS_USERNAME=admin" > /root/admin-openrc.sh root@controller:~# echo "export OS_PASSWORD=$ADMIN_PASS" >> /root/admin-openrc.sh root@controller:~# echo "export OS_PROJECT_NAME=admin" >> /root/admin-openrc.sh root@controller:~# echo "export OS_USER_DOMAIN_NAME=default" >> /root/admin-openrc.sh root@controller:~# echo "export OS_PROJECT_DOMAIN_NAME=default" >> /root/admin-openrc.sh root@controller:~# echo "export OS_AUTH_URL=http://controller:5000/v3" >> /root/admin-openrc.sh root@controller:~# echo "export OS_IDENTITY_API_VERSION=3" >> /root/admin-openrc.sh
Check that the credentials file can be used to issue tokens:
root@controller:~# source /root/admin-openrc.sh
root@controller:~# openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-04-06T09:15:01.912362Z      |
| id         | 5db746258a6945c2a73b8f361c208874 |
| project_id | 3e3b085aa3ea445f8beabc369963d63b |
| user_id    | 79aff2d16d54402c80b6594b8507b0d4 |
+------------+----------------------------------+
Add the Image service¶
This section describes how to install and configure the glance OpenStack Image service on the controller node.
See also
The image service section of the OpenStack documentation.
Install and configure the OpenStack Image service:
Create the glance database, and grant access to the glance user:
root@controller:~# mysql -uroot -padmin -e "CREATE DATABASE glance;"
root@controller:~# mysql -uroot -padmin -e \
"GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '${GLANCE_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e \
"GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '${GLANCE_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e \
"GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY '${GLANCE_DBPASS}';"
Add the glance service to OpenStack:
root@controller:~# source /root/admin-openrc.sh
root@controller:~# openstack user create --domain default \
  --password $GLANCE_PASS glance
root@controller:~# openstack role add --project service \
  --user glance admin
root@controller:~# openstack service create --name glance \
  --description "OpenStack Image service" image
root@controller:~# openstack endpoint create --region RegionOne \
  image public http://controller:9292
root@controller:~# openstack endpoint create --region RegionOne \
  image internal http://controller:9292
root@controller:~# openstack endpoint create --region RegionOne \
  image admin http://controller:9292
Install and configure glance packages:
root@controller:~# yum install -y openstack-glance
root@controller:~# alias glanceapi-conf=\
'openstack-config --set /etc/glance/glance-api.conf'
root@controller:~# glanceapi-conf database connection \
mysql+pymysql://glance:$GLANCE_DBPASS@controller/glance
root@controller:~# glanceapi-conf keystone_authtoken auth_uri http://controller:5000
root@controller:~# glanceapi-conf keystone_authtoken auth_url http://controller:5000
root@controller:~# glanceapi-conf keystone_authtoken auth_type password
root@controller:~# glanceapi-conf keystone_authtoken project_domain_name default
root@controller:~# glanceapi-conf keystone_authtoken user_domain_name default
root@controller:~# glanceapi-conf keystone_authtoken project_name service
root@controller:~# glanceapi-conf keystone_authtoken username glance
root@controller:~# glanceapi-conf keystone_authtoken password $GLANCE_PASS
root@controller:~# glanceapi-conf paste_deploy flavor keystone
root@controller:~# glanceapi-conf glance_store stores file,http
root@controller:~# glanceapi-conf glance_store default_store file
root@controller:~# glanceapi-conf glance_store filesystem_store_datadir \
/var/lib/glance/images
root@controller:~# alias glancereg-conf=\
'openstack-config --set /etc/glance/glance-registry.conf'
root@controller:~# glancereg-conf database connection \
mysql+pymysql://glance:$GLANCE_DBPASS@controller/glance
root@controller:~# glancereg-conf keystone_authtoken auth_uri http://controller:5000
root@controller:~# glancereg-conf keystone_authtoken auth_url http://controller:5000
root@controller:~# glancereg-conf keystone_authtoken auth_type password
root@controller:~# glancereg-conf keystone_authtoken project_domain_name default
root@controller:~# glancereg-conf keystone_authtoken user_domain_name default
root@controller:~# glancereg-conf keystone_authtoken project_name service
root@controller:~# glancereg-conf keystone_authtoken username glance
root@controller:~# glancereg-conf keystone_authtoken password $GLANCE_PASS
root@controller:~# glancereg-conf paste_deploy flavor keystone
root@controller:~# su -s /bin/sh -c "glance-manage db_sync" glance
Restart the Image services:
root@controller:~# systemctl enable openstack-glance-api openstack-glance-registry
root@controller:~# systemctl start openstack-glance-api openstack-glance-registry
Check that glance is correctly installed by adding the CirrOS image:
Note
CirrOS is a small Linux image that helps you test your OpenStack deployment.
root@controller:~# source /root/admin-openrc.sh
root@controller:~# url=http://download.cirros-cloud.net/0.3.4
root@controller:~# curl -O $url/cirros-0.3.4-x86_64-disk.img
root@controller:~# openstack image create "cirros-0.3.4-x86_64" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2016-04-06T08:16:33Z                 |
| disk_format      | qcow2                                |
| id               | 074eb443-5eab-45bb-9c6e-16124154e671 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.4-x86_64                  |
| owner            | 3e3b085aa3ea445f8beabc369963d63b     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2016-04-06T08:16:33Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+
root@controller:~# openstack image list
+--------------------------------------+---------------------+
| ID                                   | Name                |
+--------------------------------------+---------------------+
| 074eb443-5eab-45bb-9c6e-16124154e671 | cirros-0.3.4-x86_64 |
+--------------------------------------+---------------------+
Add the Placement service¶
This section describes how to install and configure the OpenStack Placement service on controller node.
See also
The Placement service section of the OpenStack documentation.
Install and configure the OpenStack Placement service:
Create the placement database, and grant access to the placement user:
root@controller:~# mysql -uroot -padmin -e "CREATE DATABASE placement;"
root@controller:~# mysql -uroot -padmin -e \
"GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '${PLACEMENT_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e \
"GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '${PLACEMENT_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e \
"GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'controller' IDENTIFIED BY '${PLACEMENT_DBPASS}';"
Add the placement service to OpenStack:
root@controller:~# openstack user create --domain default \
  --password $PLACEMENT_PASS placement
root@controller:~# openstack role add --project service \
  --user placement admin
root@controller:~# openstack service create --name placement \
  --description "OpenStack Placement service" placement
root@controller:~# openstack endpoint create --region RegionOne \
  placement public http://controller:8778
root@controller:~# openstack endpoint create --region RegionOne \
  placement internal http://controller:8778
root@controller:~# openstack endpoint create --region RegionOne \
  placement admin http://controller:8778
Install and configure placement packages:
root@controller:~# yum install -y openstack-placement-api
root@controller:~# alias placement-conf=\
'openstack-config --set /etc/placement/placement-api.conf'
root@controller:~# placement-conf placement_database connection \
mysql+pymysql://placement:$PLACEMENT_DBPASS@controller/placement
root@controller:~# placement-conf api auth_strategy keystone
root@controller:~# placement-conf keystone_authtoken auth_uri http://controller:5000/v3
root@controller:~# placement-conf keystone_authtoken auth_url http://controller:5000/v3
root@controller:~# placement-conf keystone_authtoken memcached_servers controller:11211
root@controller:~# placement-conf keystone_authtoken auth_type password
root@controller:~# placement-conf keystone_authtoken project_domain_name default
root@controller:~# placement-conf keystone_authtoken user_domain_name default
root@controller:~# placement-conf keystone_authtoken project_name service
root@controller:~# placement-conf keystone_authtoken username placement
root@controller:~# placement-conf keystone_authtoken password $PLACEMENT_PASS
root@controller:~# su -s /bin/sh -c "placement-manage db sync" placement
Restart the HTTP server:
Warning
A known bug in Stein requires adding the configuration below inside the <VirtualHost> section of the Placement API HTTP configuration file (/etc/httpd/conf.d/00-placement-api.conf):
root@controller:~# cat /etc/httpd/conf.d/00-placement-api.conf
...
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
...
root@controller:~# systemctl restart httpd
Check that Placement service is correctly installed:
root@controller:~# placement-status upgrade check
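Optionally, you can also confirm that the Placement API answers over HTTP; the versions document should be returned as JSON:
root@controller:~# curl http://controller:8778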
Add the Compute service¶
This section describes how to install and configure the nova OpenStack Compute service on controller and compute nodes.
See also
The Compute service section of the OpenStack documentation.
On controller node, install and configure the Compute service:
Create the nova databases, and grant access to the nova user:
root@controller:~# mysql -uroot -padmin -e "CREATE DATABASE nova;"
root@controller:~# mysql -uroot -padmin -e "CREATE DATABASE nova_api;"
root@controller:~# mysql -uroot -padmin -e "CREATE DATABASE nova_cell0;"
root@controller:~# mysql -uroot -padmin -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '${NOVA_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '${NOVA_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller' IDENTIFIED BY '${NOVA_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '${NOVA_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '${NOVA_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY '${NOVA_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '${NOVA_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '${NOVA_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller' IDENTIFIED BY '${NOVA_DBPASS}';"
Add the nova service to OpenStack:
root@controller:~# openstack user create --domain default --password $NOVA_PASS nova
root@controller:~# openstack role add --project service --user nova admin
root@controller:~# openstack service create --name nova --description "OpenStack Compute" compute
root@controller:~# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
root@controller:~# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
root@controller:~# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
Install and configure all the needed nova packages for controller node:
root@controller:~# yum install -y openstack-nova-api openstack-nova-conductor \
openstack-nova-novncproxy openstack-nova-scheduler
root@controller:~# alias nova-conf=\
'openstack-config --set /etc/nova/nova.conf'
root@controller:~# nova-conf api_database connection mysql+pymysql://nova:$NOVA_DBPASS@controller/nova_api
root@controller:~# nova-conf database connection mysql+pymysql://nova:$NOVA_DBPASS@controller/nova
root@controller:~# nova-conf DEFAULT transport_url rabbit://openstack:$RABBIT_PASS@controller
root@controller:~# nova-conf DEFAULT enabled_apis osapi_compute,metadata
root@controller:~# nova-conf api auth_strategy keystone
root@controller:~# nova-conf keystone_authtoken auth_uri http://controller:5000/v3
root@controller:~# nova-conf keystone_authtoken auth_url http://controller:5000/v3
root@controller:~# nova-conf keystone_authtoken memcached_servers controller:11211
root@controller:~# nova-conf keystone_authtoken auth_type password
root@controller:~# nova-conf keystone_authtoken project_domain_name default
root@controller:~# nova-conf keystone_authtoken user_domain_name default
root@controller:~# nova-conf keystone_authtoken project_name service
root@controller:~# nova-conf keystone_authtoken username nova
root@controller:~# nova-conf keystone_authtoken password $NOVA_PASS
root@controller:~# nova-conf DEFAULT my_ip $MGMT_IP
root@controller:~# nova-conf DEFAULT use_neutron True
root@controller:~# nova-conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
root@controller:~# nova-conf vnc enabled true
root@controller:~# nova-conf vnc server_listen $MGMT_IP
root@controller:~# nova-conf vnc server_proxyclient_address $MGMT_IP
root@controller:~# nova-conf glance api_servers http://controller:9292
root@controller:~# nova-conf oslo_concurrency lock_path /var/lib/nova/tmp
root@controller:~# nova-conf placement os_region_name RegionOne
root@controller:~# nova-conf placement project_domain_name default
root@controller:~# nova-conf placement user_domain_name default
root@controller:~# nova-conf placement project_name service
root@controller:~# nova-conf placement auth_type password
root@controller:~# nova-conf placement auth_url http://controller:5000/v3
root@controller:~# nova-conf placement username placement
root@controller:~# nova-conf placement password $PLACEMENT_PASS
root@controller:~# su -s /bin/sh -c "nova-manage api_db sync" nova
root@controller:~# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
root@controller:~# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
root@controller:~# su -s /bin/sh -c "nova-manage db sync" nova
root@controller:~# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
Restart the nova services:
root@controller:~# systemctl enable openstack-nova-api openstack-nova-scheduler \
openstack-nova-conductor openstack-nova-novncproxy
root@controller:~# systemctl start openstack-nova-api openstack-nova-scheduler \
openstack-nova-conductor openstack-nova-novncproxy
On compute node, install and configure the Compute service:
Install and configure all the needed nova packages for compute node:
root@compute:~# yum install -y openstack-nova-compute
root@compute:~# alias nova-conf=\
'openstack-config --set /etc/nova/nova.conf'
root@compute:~# nova-conf DEFAULT transport_url rabbit://openstack:$RABBIT_PASS@controller
root@compute:~# nova-conf DEFAULT enabled_apis osapi_compute,metadata
root@compute:~# nova-conf api auth_strategy keystone
root@compute:~# nova-conf keystone_authtoken auth_uri http://controller:5000/v3
root@compute:~# nova-conf keystone_authtoken auth_url http://controller:5000/v3
root@compute:~# nova-conf keystone_authtoken memcached_servers controller:11211
root@compute:~# nova-conf keystone_authtoken auth_type password
root@compute:~# nova-conf keystone_authtoken project_domain_name default
root@compute:~# nova-conf keystone_authtoken user_domain_name default
root@compute:~# nova-conf keystone_authtoken project_name service
root@compute:~# nova-conf keystone_authtoken username nova
root@compute:~# nova-conf keystone_authtoken password $NOVA_PASS
root@compute:~# nova-conf DEFAULT my_ip $MGMT_IP
root@compute:~# nova-conf DEFAULT use_neutron True
root@compute:~# nova-conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
root@compute:~# nova-conf vnc enabled true
root@compute:~# nova-conf vnc vncserver_listen 0.0.0.0
root@compute:~# nova-conf vnc vncserver_proxyclient_address $MGMT_IP
root@compute:~# nova-conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
root@compute:~# nova-conf glance api_servers http://controller:9292
root@compute:~# nova-conf oslo_concurrency lock_path /var/lib/nova/tmp
root@compute:~# nova-conf placement os_region_name RegionOne
root@compute:~# nova-conf placement project_domain_name default
root@compute:~# nova-conf placement user_domain_name default
root@compute:~# nova-conf placement project_name service
root@compute:~# nova-conf placement auth_type password
root@compute:~# nova-conf placement auth_url http://controller:5000/v3
root@compute:~# nova-conf placement username placement
root@compute:~# nova-conf placement password $PLACEMENT_PASS
Restart libvirtd and the nova-compute service:
root@compute:~# systemctl enable libvirtd openstack-nova-compute
root@compute:~# systemctl start libvirtd openstack-nova-compute
From the controller, check that nova is correctly installed on all nodes:
Discover compute node:
root@controller:~# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
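Instead of running host discovery by hand each time a compute node is added, nova can do it periodically. This is optional; the interval below (in seconds) is an example value:
root@controller:~# openstack-config --set /etc/nova/nova.conf \
scheduler discover_hosts_in_cells_interval 300
root@controller:~# systemctl restart openstack-nova-scheduler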
Verify that all components are successfully started:
root@controller:~# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | controller | internal | enabled | up    | 2016-04-06T08:20:44.000000 | -               |
| 2  | nova-consoleauth | controller | internal | enabled | up    | 2016-04-06T08:20:49.000000 | -               |
| 3  | nova-scheduler   | controller | internal | enabled | up    | 2016-04-06T08:20:44.000000 | -               |
| 4  | nova-conductor   | controller | internal | enabled | up    | 2016-04-06T08:20:50.000000 | -               |
| 5  | nova-compute     | compute    | nova     | enabled | up    | 2016-04-06T08:20:48.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
root@controller:~# openstack catalog list
+-----------+-----------+------------------------------------------+
| Name      | Type      | Endpoints                                |
+-----------+-----------+------------------------------------------+
| glance    | image     | RegionOne                                |
|           |           |   admin: http://controller:9292          |
|           |           | RegionOne                                |
|           |           |   internal: http://controller:9292       |
|           |           | RegionOne                                |
|           |           |   public: http://controller:9292         |
|           |           |                                          |
| placement | placement | RegionOne                                |
|           |           |   internal: http://controller:8778      |
|           |           | RegionOne                                |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                                |
|           |           |   admin: http://controller:8778         |
|           |           |                                          |
| keystone  | identity  | RegionOne                                |
|           |           |   public: http://controller:5000/v3/    |
|           |           | RegionOne                                |
|           |           |   internal: http://controller:5000/v3/  |
|           |           | RegionOne                                |
|           |           |   admin: http://controller:5000/v3/     |
|           |           |                                          |
| neutron   | network   | RegionOne                                |
|           |           |   internal: http://controller:9696      |
|           |           | RegionOne                                |
|           |           |   public: http://controller:9696        |
|           |           | RegionOne                                |
|           |           |   admin: http://controller:9696         |
|           |           |                                          |
| nova      | compute   | RegionOne                                |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                                |
|           |           |   public: http://controller:8774/v2.1   |
|           |           | RegionOne                                |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           |                                          |
+-----------+-----------+------------------------------------------+
Verify connectivity with the OpenStack Image service:
root@controller:~# openstack image list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 074eb443-5eab-45bb-9c6e-16124154e671 | cirros-0.3.4-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
Verify connectivity with the OpenStack Placement service:
root@controller:~# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Request Spec Migration  |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Console Auths           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
Add the Networking service¶
This section describes how to install and configure the neutron OpenStack Networking service on controller and compute nodes.
See also
The networking service section of the OpenStack documentation.
On controller node, install and configure the neutron service:
Create the neutron database, and grant access to the neutron user:
root@controller:~# mysql -uroot -padmin -e "CREATE DATABASE neutron;"
root@controller:~# mysql -uroot -padmin -e \
"GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '${NEUTRON_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e \
"GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '${NEUTRON_DBPASS}';"
root@controller:~# mysql -uroot -padmin -e \
"GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' IDENTIFIED BY '${NEUTRON_DBPASS}';"
Add the neutron service to OpenStack:
root@controller:~# source /root/admin-openrc.sh
root@controller:~# openstack user create --domain default \
  --password $NEUTRON_PASS neutron
root@controller:~# openstack role add --project service \
  --user neutron admin
root@controller:~# openstack service create --name neutron \
  --description "OpenStack Networking" network
root@controller:~# openstack endpoint create --region RegionOne \
  network public http://controller:9696
root@controller:~# openstack endpoint create --region RegionOne \
  network internal http://controller:9696
root@controller:~# openstack endpoint create --region RegionOne \
  network admin http://controller:9696
Configure the networking options, i.e. the Open vSwitch mechanism driver and the VXLAN tenant network type:
Install all the needed neutron packages for controller node:
root@controller:~# yum install -y openstack-neutron openstack-neutron-ml2
Configure the neutron server component:
root@controller:~# alias neutron-conf=\
'openstack-config --set /etc/neutron/neutron.conf'
root@controller:~# neutron-conf database connection mysql+pymysql://neutron:$NEUTRON_DBPASS@controller/neutron
root@controller:~# neutron-conf DEFAULT core_plugin ml2
root@controller:~# neutron-conf DEFAULT service_plugins router
root@controller:~# neutron-conf DEFAULT allow_overlapping_ips True
root@controller:~# neutron-conf DEFAULT transport_url rabbit://openstack:$RABBIT_PASS@controller
root@controller:~# neutron-conf DEFAULT auth_strategy keystone
root@controller:~# neutron-conf keystone_authtoken auth_uri http://controller:5000
root@controller:~# neutron-conf keystone_authtoken auth_url http://controller:5000
root@controller:~# neutron-conf keystone_authtoken memcached_servers controller:11211
root@controller:~# neutron-conf keystone_authtoken auth_type password
root@controller:~# neutron-conf keystone_authtoken project_domain_name default
root@controller:~# neutron-conf keystone_authtoken user_domain_name default
root@controller:~# neutron-conf keystone_authtoken project_name service
root@controller:~# neutron-conf keystone_authtoken username neutron
root@controller:~# neutron-conf keystone_authtoken password $NEUTRON_PASS
root@controller:~# neutron-conf DEFAULT notify_nova_on_port_status_changes True
root@controller:~# neutron-conf DEFAULT notify_nova_on_port_data_changes True
root@controller:~# neutron-conf nova auth_url http://controller:5000
root@controller:~# neutron-conf nova auth_type password
root@controller:~# neutron-conf nova project_domain_name default
root@controller:~# neutron-conf nova user_domain_name default
root@controller:~# neutron-conf nova region_name RegionOne
root@controller:~# neutron-conf nova project_name service
root@controller:~# neutron-conf nova username nova
root@controller:~# neutron-conf nova password $NOVA_PASS
root@controller:~# neutron-conf oslo_concurrency lock_path /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in:
Note
The Modular Layer 2 plug-in is used to build layer-2 (bridging and switching) virtual networking infrastructure.
root@controller:~# alias ml2-conf=\
'openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini'
root@controller:~# ml2-conf ml2 type_drivers flat,vxlan
root@controller:~# ml2-conf ml2 tenant_network_types vxlan
root@controller:~# ml2-conf ml2 mechanism_drivers openvswitch,l2population
root@controller:~# ml2-conf ml2 extension_drivers port_security
root@controller:~# ml2-conf ml2_type_flat flat_networks '*'
root@controller:~# ml2-conf ml2_type_vxlan vni_ranges 1:1000
root@controller:~# ml2-conf securitygroup enable_ipset True
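For reference, the commands above should leave /etc/neutron/plugins/ml2/ml2_conf.ini with contents equivalent to the following (section order may differ):
[ml2]
type_drivers = flat,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = *

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True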
Configure the Open vSwitch agent:
root@controller:~# ovs-vsctl add-br br-ex
root@controller:~# alias ml2ovs-conf=\
'openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini'
root@controller:~# ml2ovs-conf ovs local_ip $CTRL_TUN_IP
root@controller:~# ml2ovs-conf agent tunnel_types vxlan
root@controller:~# ml2ovs-conf agent l2_population True
root@controller:~# ml2ovs-conf securitygroup enable_security_group False
root@controller:~# ml2ovs-conf securitygroup firewall_driver openvswitch
root@controller:~# ml2ovs-conf ovs bridge_mappings public:br-ex
Configure the Layer-3 (L3) agent:
Note
The layer-3 agent provides routing and NAT services for virtual networks.
root@controller:~# alias l3-conf=\
'openstack-config --set /etc/neutron/l3_agent.ini'
root@controller:~# l3-conf DEFAULT interface_driver \
neutron.agent.linux.interface.OVSInterfaceDriver
root@controller:~# l3-conf DEFAULT external_network_bridge
Configure the DHCP agent:
Note
The DHCP agent provides DHCP services for virtual networks.
root@controller:~# alias dhcp-conf=\
'openstack-config --set /etc/neutron/dhcp_agent.ini'
root@controller:~# dhcp-conf DEFAULT interface_driver \
neutron.agent.linux.interface.OVSInterfaceDriver
root@controller:~# dhcp-conf DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
Configure the metadata agent:
Note
The metadata agent provides configuration information such as credentials to instances.
root@controller:~# alias metadata-conf=\
'openstack-config --set /etc/neutron/metadata_agent.ini'
root@controller:~# metadata-conf DEFAULT nova_metadata_ip controller
root@controller:~# metadata-conf DEFAULT metadata_proxy_shared_secret $METADATA_SECRET
Configure nova to use the neutron Networking service:
root@controller:~# alias nova-conf=\
'openstack-config --set /etc/nova/nova.conf'
root@controller:~# nova-conf neutron url http://controller:9696
root@controller:~# nova-conf neutron auth_url http://controller:5000
root@controller:~# nova-conf neutron auth_type password
root@controller:~# nova-conf neutron project_domain_name default
root@controller:~# nova-conf neutron user_domain_name default
root@controller:~# nova-conf neutron region_name RegionOne
root@controller:~# nova-conf neutron project_name service
root@controller:~# nova-conf neutron username neutron
root@controller:~# nova-conf neutron password $NEUTRON_PASS
root@controller:~# nova-conf neutron service_metadata_proxy True
root@controller:~# nova-conf neutron metadata_proxy_shared_secret $METADATA_SECRET
Populate the database and start the Networking services on controller node:
root@controller:~# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
root@controller:~# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
root@controller:~# systemctl restart openstack-nova-api
root@controller:~# systemctl enable neutron-server neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent
root@controller:~# systemctl start neutron-server neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent
On compute node, install and configure the neutron service:
Install the neutron service and the neutron openvswitch agent:
root@compute:~# yum install -y openstack-neutron openstack-neutron-openvswitch
Configure the Networking common component:
root@compute:~# alias neutron-conf=\
'openstack-config --set /etc/neutron/neutron.conf'
root@compute:~# neutron-conf DEFAULT transport_url rabbit://openstack:$RABBIT_PASS@controller
root@compute:~# neutron-conf DEFAULT auth_strategy keystone
root@compute:~# neutron-conf keystone_authtoken auth_uri http://controller:5000
root@compute:~# neutron-conf keystone_authtoken auth_url http://controller:5000
root@compute:~# neutron-conf keystone_authtoken memcached_servers controller:11211
root@compute:~# neutron-conf keystone_authtoken auth_type password
root@compute:~# neutron-conf keystone_authtoken project_domain_name default
root@compute:~# neutron-conf keystone_authtoken user_domain_name default
root@compute:~# neutron-conf keystone_authtoken project_name service
root@compute:~# neutron-conf keystone_authtoken username neutron
root@compute:~# neutron-conf keystone_authtoken password $NEUTRON_PASS
root@compute:~# neutron-conf oslo_concurrency lock_path /var/lib/neutron/tmp
root@compute:~# alias ml2ovs-conf=\
'openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini'
root@compute:~# ml2ovs-conf ovs local_ip $CMPT_TUN_IP
root@compute:~# ml2ovs-conf agent tunnel_types vxlan
root@compute:~# ml2ovs-conf agent l2_population True
Configure nova to use the neutron Networking service:
root@compute:~# alias nova-conf=\
'openstack-config --set /etc/nova/nova.conf'
root@compute:~# nova-conf neutron url http://controller:9696
root@compute:~# nova-conf neutron auth_url http://controller:5000
root@compute:~# nova-conf neutron auth_type password
root@compute:~# nova-conf neutron project_domain_name default
root@compute:~# nova-conf neutron user_domain_name default
root@compute:~# nova-conf neutron region_name RegionOne
root@compute:~# nova-conf neutron project_name service
root@compute:~# nova-conf neutron username neutron
root@compute:~# nova-conf neutron password $NEUTRON_PASS
root@compute:~# nova-conf serial_console enabled true
root@compute:~# nova-conf serial_console base_url ws://127.0.0.1:6083/
root@compute:~# nova-conf serial_console listen 0.0.0.0
root@compute:~# nova-conf serial_console proxyclient_address 0.0.0.0
Restart the Networking services on the compute node:
root@compute:~# systemctl start openvswitch
root@compute:~# systemctl restart openstack-nova-compute
root@compute:~# systemctl enable neutron-openvswitch-agent
root@compute:~# systemctl start neutron-openvswitch-agent
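At this point, the Open vSwitch agent should have created its bridges on the compute node. As a quick sanity check (br-int and br-tun are the Neutron default bridge names):
root@compute:~# ovs-vsctl show
...
    Bridge br-int
...
    Bridge br-tun
...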
On controller node, check that neutron is correctly installed:
Verify successful launch of the neutron agents:
root@controller:~# openstack network agent list
+--------------------------------------+-----------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type            | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+-----------------------+------------+-------------------+-------+-------+---------------------------+
| 483632ee-0fca-4b22-9bc8-fe4cacb9a1e1 | Metadata agent        | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| 4caa6ca8-12f5-46f0-8d11-49f7f34db73b | 6WIND Fast Path agent | compute    | None              | :-)   | UP    | neutron-fastpath-agent    |
| 77c2603d-0e06-4e86-a3f7-9fa2a2561fae | L3 agent              | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| a094744a-e924-40da-b3be-ddcf590013cb | DHCP agent            | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| afd593c3-1c1e-4b37-af02-5022a3138177 | Metadata agent        | compute    | None              | XXX   | UP    | neutron-metadata-agent    |
| b4a0f145-9077-4e27-966d-6598d17fa661 | 6WIND Fast Path agent | controller | None              | XXX   | UP    | neutron-fastpath-agent    |
| e5106ed1-0d26-4d36-85ed-74740af01781 | Open vSwitch agent    | compute    | None              | :-)   | UP    | neutron-openvswitch-agent |
| ea01a095-7d39-487b-a203-4faa96852c31 | Open vSwitch agent    | controller | None              | :-)   | UP    | neutron-openvswitch-agent |
+--------------------------------------+-----------------------+------------+-------------------+-------+-------+---------------------------+
OpenStack extra setups¶
Create a VM image without multiqueue:
See also
The OpenStack section of this guide
We assume that the VM image is configured in monoqueue (a single queue per vNIC).
On controller node, create a flavor enabling hugepages. This is required for Virtual Accelerator to work correctly with OpenStack VMs:
See also
The Hugepages section
root@controller:~# openstack flavor create m1.vm_huge --ram 8192 --disk 32 --vcpus 4
root@controller:~# openstack flavor set --property hw:mem_page_size=large m1.vm_huge
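Optionally, verify the flavor and its hugepages property:
root@controller:~# openstack flavor show m1.vm_huge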
OpenStack installation test¶
Note
This step is not mandatory. It is just to ensure that the OpenStack installation is working before installing and starting Virtual Accelerator.
On controller node, create the private network:
See also
The launch instance section of the OpenStack documentation.
root@controller:~# openstack network create private
root@controller:~# openstack subnet create --network private --subnet-range 11.0.0.0/24 private_subnet
root@controller:~# openstack router create router
root@controller:~# openstack router add subnet router private_subnet
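Optionally, confirm that the network, subnet and router exist:
root@controller:~# openstack network list
root@controller:~# openstack subnet list
root@controller:~# openstack router list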
Then, deploy VMs from controller node:
Note
The --image must be created beforehand using openstack image create.
Note
In the following command, you can provide your own cloud.cfg. See an example in the OpenStack section of this guide.
root@controller:~# openstack server create --flavor m1.vm_huge \
  --image ubuntu-virtio \
  --nic net-id=$(openstack network list --name private -c ID -f value) \
  --user-data cloud.cfg \
  vm1
root@controller:~# openstack server create --flavor m1.vm_huge \
  --image ubuntu-virtio \
  --nic net-id=$(openstack network list --name private -c ID -f value) \
  --user-data cloud.cfg \
  vm2
On controller node, check that the VMs are ACTIVE:
root@controller:~# openstack server list
+--------------------------------------+------+--------+------------------+---------------+------------+
| ID                                   | Name | Status | Networks         | Image         | Flavor     |
+--------------------------------------+------+--------+------------------+---------------+------------+
| f490981d-db92-4305-8563-be5a4f0dab29 | vm1  | ACTIVE | private=11.0.0.3 | ubuntu-virtio | m1.vm_huge |
| c2d3dd46-d7ba-412e-a755-1ee1b406d0fe | vm2  | ACTIVE | private=11.0.0.4 | ubuntu-virtio | m1.vm_huge |
+--------------------------------------+------+--------+------------------+---------------+------------+
Note
If your VMs are not ACTIVE, please check the Troubleshooting Guide for help.
Log in the VMs from the compute node using telnet:
See also
The Connection to the VM section of this guide
root@compute:~# telnet 127.0.0.1 10000
Note
To get the VM serial console port, use the following command:
root@compute:~# ps ax | grep -e qemu -e port=
Note
Login credentials are defined in the cloud.cfg file. In our case, root is used as both login and password.
Ping between the two launched VMs:
root@vm1:~# ping 11.0.0.4
root@vm2:~# ping 11.0.0.3
Virtual Accelerator installation¶
Install Virtual Accelerator:
Mandatory steps for libvirt usage:
root@compute:~# virsh net-destroy default
root@compute:~# virsh net-undefine default
Note
This use case does not implement it, but if you wish to use multiqueue, you should install qemu/libvirt as described in the QEMU and libvirt section of this guide.
Virtual Accelerator installation:
root@compute:~# rpm -i 6wind-authentication-credentials-${COMPANY}-1.0-1.noarch.rpm
root@compute:~# PRODUCT=virtual-accelerator
root@compute:~# VERSION=2.0
root@compute:~# DISTRIB=redhat-8
root@compute:~# ARCH=$(uname -m)
root@compute:~# PKG=6wind-${PRODUCT}-${DISTRIB}-repository-${VERSION}-1.${ARCH}.rpm
root@compute:~# SUBDIR=${PRODUCT}/${DISTRIB}/${ARCH}/${VERSION}
root@compute:~# curl --cacert /etc/certs/6wind_ca.crt \
  --key /etc/certs/6wind_client.key \
  --cert /etc/certs/6wind_client.crt \
  -O https://repo.6wind.com/${SUBDIR}/${PKG}
root@compute:~# rpm -i ${PKG}
root@compute:~# yum update -y
root@compute:~# rpm -e openvswitch dpdk --nodeps
root@compute:~# yum install -y --nogpgcheck virtual-accelerator
Install 6WIND OpenStack plugins:
root@compute:~# yum install -y fp-vdev-remote-stein-last.rpm
root@compute:~# yum install -y networking-6wind-stein-last.rpm
root@compute:~# yum install -y os-vif-6wind-plugin-stein-last.rpm
Configure and start Virtual Accelerator. Mind the CPU isolation for performance:
root@compute:~# FP_PORTS='0000:0b:00.1' fast-path.sh config --update
root@compute:~# FP_MASK='6-13' fast-path.sh config --update
root@compute:~# systemctl start virtual-accelerator.target
Restart the Open vSwitch control plane.
root@compute:~# systemctl restart openvswitch
Restart libvirtd.
root@compute:~# systemctl restart libvirtd
Warning
If you restart Virtual Accelerator, you must restart openvswitch and libvirt (and its VMs) as well.
On compute node, do the CPU isolation operations:
See also
The CPU isolation section of this guide
root@compute:~# openstack-config --set /etc/nova/nova.conf \
DEFAULT vcpu_pin_set 0,1
root@compute:~# systemctl restart openstack-nova-compute
On compute node, configure neutron:
root@compute:~# openstack-config --set /etc/neutron/neutron.conf \
agent root_helper 'sudo neutron-rootwrap /etc/neutron/rootwrap.conf'
root@compute:~# openstack-config --set /etc/neutron/neutron.conf \
agent root_helper_daemon 'sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf'
root@compute:~# systemctl restart openvswitch neutron-openvswitch-agent
On compute node, start Neutron fast-path agent:
root@compute:~# systemctl enable neutron-fastpath-agent
root@compute:~# systemctl start neutron-fastpath-agent
Traffic test¶
On controller node, create the private network (if not already done in the OpenStack installation test section of this guide):
See also
The launch instance section of the OpenStack documentation.
root@controller:~# openstack network create private
root@controller:~# openstack subnet create --network private --subnet-range 11.0.0.0/24 private_subnet
root@controller:~# openstack router create router
root@controller:~# openstack router add subnet router private_subnet
Then, deploy VMs from controller node:
Note
The --image must be created beforehand using openstack image create.
Note
In the following command, you can provide your own cloud.cfg. See an example in the OpenStack section of this guide.
root@controller:~# openstack server create --flavor m1.vm_huge \
  --image ubuntu-virtio \
  --nic net-id=$(openstack network list --name private -c ID -f value) \
  --user-data cloud.cfg \
  vm1
root@controller:~# openstack server create --flavor m1.vm_huge \
  --image ubuntu-virtio \
  --nic net-id=$(openstack network list --name private -c ID -f value) \
  --user-data cloud.cfg \
  vm2
On controller node, check that the VMs are ACTIVE:
root@controller:~# openstack server list
+--------------------------------------+------+--------+------------------+---------------+------------+
| ID                                   | Name | Status | Networks         | Image         | Flavor     |
+--------------------------------------+------+--------+------------------+---------------+------------+
| f490981d-db92-4305-8563-be5a4f0dab29 | vm1  | ACTIVE | private=11.0.0.3 | ubuntu-virtio | m1.vm_huge |
| c2d3dd46-d7ba-412e-a755-1ee1b406d0fe | vm2  | ACTIVE | private=11.0.0.4 | ubuntu-virtio | m1.vm_huge |
+--------------------------------------+------+--------+------------------+---------------+------------+
Note
If your VMs are not ACTIVE, please check the Troubleshooting Guide for help.
On compute node, check that the vNICs created by nova are accelerated:
root@compute:~# fp-shmem-ports -d
[...]
port 3: tap258e7c1d-44-vrf0 fpvi 4 numa no numa
  bus_info dev_port 0
  mac 02:09:c0:ad:a7:ca mtu 1450 driver net_6wind_vhost GRO timeout 10us
  args: profile=endpoint,sockmode=client,txhash=l3l4,verbose=0,sockname=/tmp/vhost-socket-258e7c1d-445c-4ee1-94b4-591a603983c0,macaddr=02:09:c0:ad:a7:ca
  rx_cores: all
  RX queues: 1 (max: 1)
  TX queues: 8 (max: 64)
  RX TCP checksum on
  RX UDP checksum on
  RX GRO on
  RX LRO on
  RX MPLS IP off
  TX TCP checksum on
  TX UDP checksum on
  TX TSO on
[...]
port 5: tap09295875-92-vrf0 fpvi 6 numa no numa
  bus_info dev_port 0
  mac 02:09:c0:e4:ab:92 mtu 1450 driver net_6wind_vhost GRO timeout 10us
  args: profile=endpoint,sockmode=client,txhash=l3l4,verbose=0,sockname=/tmp/vhost-socket-09295875-9218-4aa9-9da9-d71d7c495626,macaddr=02:09:c0:e4:ab:92
  rx_cores: all
  RX queues: 1 (max: 1)
  TX queues: 8 (max: 64)
  RX TCP checksum on
  RX UDP checksum on
  RX GRO on
  RX LRO on
  RX MPLS IP off
  TX TCP checksum on
  TX UDP checksum on
  TX TSO on
Log in the VMs from the controller using VNC:
See also
The Connection to the VM section of this guide
root@controller:~# openstack console url show vm1
+-------+-------------------------------------------------------------------------------------------+
| Field | Value                                                                                     |
+-------+-------------------------------------------------------------------------------------------+
| type  | novnc                                                                                     |
| url   | http://controller:6080/vnc_auto.html?path=%3Ftoken%3D7eee4b37-d4b2-46e2-9e13-58c071bfadcd |
+-------+-------------------------------------------------------------------------------------------+
root@controller:~# openstack console url show vm2
+-------+-------------------------------------------------------------------------------------------+
| Field | Value                                                                                     |
+-------+-------------------------------------------------------------------------------------------+
| type  | novnc                                                                                     |
| url   | http://controller:6080/vnc_auto.html?path=%3Ftoken%3D73a972d7-8057-442a-9000-6d60eb4a06d3 |
+-------+-------------------------------------------------------------------------------------------+
Note
Login credentials are defined in the cloud.cfg file. In our case, root is used as both login and password.
An iperf3 test is launched between the two VMs:
root@vm2:~# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 11.0.0.3, port 36673
[  5] local 11.0.0.4 port 5201 connected to 11.0.0.3 port 36674
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  1017 MBytes  8.53 Gbits/sec
[  5]   1.00-2.00   sec  1.06 GBytes  9.08 Gbits/sec
[  5]   2.00-3.00   sec  1.06 GBytes  9.08 Gbits/sec
[  5]   3.00-4.00   sec  1.06 GBytes  9.08 Gbits/sec
[  5]   4.00-5.00   sec  1.06 GBytes  9.08 Gbits/sec
[  5]   5.00-6.00   sec  1.06 GBytes  9.08 Gbits/sec
[  5]   6.00-7.00   sec  1.06 GBytes  9.08 Gbits/sec
[  5]   7.00-8.00   sec  1.06 GBytes  9.08 Gbits/sec
[  5]   8.00-9.00   sec  1.06 GBytes  9.08 Gbits/sec
[  5]   9.00-10.00  sec  1.06 GBytes  9.08 Gbits/sec
[  5]  10.00-10.05  sec  59.1 MBytes  9.09 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  5]   0.00-10.05  sec  10.6 GBytes  9.03 Gbits/sec    5             sender
[  5]   0.00-10.05  sec  10.6 GBytes  9.03 Gbits/sec                  receiver
root@vm1:~# iperf3 -c 11.0.0.4
Connecting to host 11.0.0.4, port 5201
[  4] local 11.0.0.3 port 36674 connected to 11.0.0.4 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.05 GBytes  9.03 Gbits/sec    2    429 KBytes
[  4]   1.00-2.00   sec  1.06 GBytes  9.08 Gbits/sec    0    479 KBytes
[  4]   2.00-3.00   sec  1.06 GBytes  9.08 Gbits/sec    1    388 KBytes
[  4]   3.00-4.00   sec  1.06 GBytes  9.08 Gbits/sec    0    446 KBytes
[  4]   4.00-5.00   sec  1.06 GBytes  9.09 Gbits/sec    0    497 KBytes
[  4]   5.00-6.00   sec  1.06 GBytes  9.09 Gbits/sec    1    408 KBytes
[  4]   6.00-7.00   sec  1.06 GBytes  9.08 Gbits/sec    0    461 KBytes
[  4]   7.00-8.00   sec  1.06 GBytes  9.08 Gbits/sec    1    365 KBytes
[  4]   8.00-9.00   sec  1.06 GBytes  9.08 Gbits/sec    0    425 KBytes
[  4]   9.00-10.00  sec  1.06 GBytes  9.08 Gbits/sec    0    476 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  10.6 GBytes  9.08 Gbits/sec    5             sender
[  4]   0.00-10.00  sec  10.6 GBytes  9.08 Gbits/sec                  receiver
iperf Done.
See below the fp-cpu-usage output during the test:
root@compute:~# fp-cpu-usage
Fast path CPU usage:
cpu: %busy     cycles  cycles/packet
 11:   87%  453106399           2709
 23:   <1%    4072666              0
average cycles/packets received from NIC: 2733 (457179065/167238)