Obsolete
Control Node
- http://docs.openstack.org/icehouse/install-guide/install/yum/content/keystone-install.html
yum install bridge-utils ntp mariadb mariadb-server
MySQL
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/my.cnf
vim /etc/my.cnf.d/server.cnf
Under the [mysqld] section, set the following keys to enable InnoDB, UTF-8 character set, and UTF-8 collation by default:
[mysqld]
#skip-networking
#bind-address = 127.0.0.1
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
systemctl start mariadb.service
systemctl enable mariadb.service
Create database:
mysqladmin -u root password mysqlroot   ## maybe run mysql_install_db first
mysql_secure_installation
yum install MySQL-python
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL ON neutron.* TO 'neutronUser'@'%' IDENTIFIED BY 'neutronPass';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL ON nova.* TO 'novaUser'@'%' IDENTIFIED BY 'novaPass';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL ON cinder.* TO 'cinderUser'@'%' IDENTIFIED BY 'cinderPass';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> quit
Bye
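To sanity-check the grants before moving on, you can log in as each service user and list visible databases (a quick optional check, assuming the passwords above and that the controller hostname resolves):

mysql -u neutronUser -pneutronPass -h controller -e "SHOW DATABASES;"
mysql -u novaUser -pnovaPass -h controller -e "SHOW DATABASES;"
mysql -u cinderUser -pcinderPass -h controller -e "SHOW DATABASES;"

Each command should list the matching service database; an access-denied error means the corresponding GRANT did not take effect.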
Installing packages
yum install yum-plugin-priorities
yum install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm
yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install openstack-utils
yum upgrade
RabbitMQ
yum install -y rabbitmq-server
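The notes above do not show starting the broker, so presumably it still needs to be started and enabled at boot (a minimal sketch; rabbitmqctl is part of the rabbitmq-server package):

service rabbitmq-server start
chkconfig rabbitmq-server on
rabbitmqctl status   ## quick health check: should print a running-node report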
Installing Keystone:
yum install openstack-keystone python-keystoneclient
openstack-config --set /etc/keystone/keystone.conf \
  database connection mysql://keystoneUser:keystonePass@controller/keystone

MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL ON keystone.* TO 'keystoneUser'@'%' IDENTIFIED BY 'keystonePass'; Query OK, 0 rows affected (0.00 sec)
Create the database tables for the Identity Service (this first attempt failed with "ERROR: version should be an integer"; the workaround follows below):
su -s /bin/sh -c "keystone-manage db_sync" keystone
The fix goes like this:
sudo chmod 777 /var/log/keystone/keystone.log
sudo /usr/bin/openstack-db --drop --service keystone
sudo openstack-db --init --service keystone --password keystonePass
Define an authorization token to use as a shared secret between the Identity Service and other OpenStack services. Use openssl to generate a random token and store it in the configuration file:
ADMIN_TOKEN=$(openssl rand -hex 10)
echo $ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf DEFAULT \
  admin_token $ADMIN_TOKEN
By default, Keystone uses PKI tokens. Create the signing keys and certificates and restrict access to the generated data:
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl
service openstack-keystone start
chkconfig openstack-keystone on
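As an optional check (not part of the upstream guide), confirm Keystone is listening on both its public and admin ports:

netstat -lnpt | grep -E ':(5000|35357)'   ## both ports should show a LISTEN entry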
Define users, tenants, and roles
After you install the Identity Service, set up users, tenants, and roles to authenticate against. These are used to allow access to services and endpoints, described in the next section.
Typically, you would indicate a user and password to authenticate with the Identity Service. At this point, however, you have not created any users, so you have to use the authorization token created in an earlier step; see the section called “Install the Identity Service” for further details. You can pass this with the --os-token option to the keystone command or set the OS_SERVICE_TOKEN environment variable. Set OS_SERVICE_TOKEN, as well as OS_SERVICE_ENDPOINT to specify where the Identity Service is running. Replace ADMIN_TOKEN with your authorization token.
$ export OS_SERVICE_TOKEN=ADMIN_TOKEN
$ export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
Create an administrative user:
Create the admin user:
$ keystone user-create --name=admin --pass=ADMIN_PASS --email=ADMIN_EMAIL
Replace ADMIN_PASS with a secure password and replace ADMIN_EMAIL with an email address to associate with the account.
Create the admin role:
$ keystone role-create --name=admin
Create the admin tenant:
$ keystone tenant-create --name=admin --description="Admin Tenant"
You must now link the admin user, admin role, and admin tenant together using the user-role-add option:
$ keystone user-role-add --user=admin --tenant=admin --role=admin
Link the admin user, _member_ role, and admin tenant:
$ keystone user-role-add --user=admin --role=_member_ --tenant=admin
Create a normal user
Follow these steps to create a normal user and tenant, and link them to the special member role. You will use this account for daily non-administrative interaction with the OpenStack cloud. You can also repeat this procedure to create additional cloud users with different usernames and passwords. Skip the tenant creation step when creating these users.
Create the demo user:
$ keystone user-create --name=demo --pass=DEMO_PASS --email=DEMO_EMAIL
Replace DEMO_PASS with a secure password and replace DEMO_EMAIL with an email address to associate with the account.
Create the demo tenant (do not repeat this step when adding additional users):
$ keystone tenant-create --name=demo --description="Demo Tenant"
Link the demo user, _member_ role, and demo tenant:
$ keystone user-role-add --user=demo --role=_member_ --tenant=demo
Create a service tenant
OpenStack services also require a username, tenant, and role to access other OpenStack services. In a basic installation, OpenStack services typically share a single tenant named service.
You will create additional usernames and roles under this tenant as you install and configure each service.
Create the service tenant:
$ keystone tenant-create --name=service --description="Service Tenant"
Define services and API endpoints
So that the Identity Service can track which OpenStack services are installed and where they are located on the network, you must register each service in your OpenStack installation. To register a service, run these commands:
- keystone service-create: describes the service.
- keystone endpoint-create: associates API endpoints with the service.
You must also register the Identity Service itself. Use the OS_SERVICE_TOKEN environment variable, as set previously, for authentication.
- Create a service entry for the Identity Service:
keystone service-create --name=keystone --type=identity \
  --description="OpenStack Identity"
- Specify an API endpoint for the Identity Service by using the returned service ID. When you specify an endpoint, you provide URLs for the public API, internal API, and admin API. In this guide, the controller host name is used. Note that the Identity Service uses a different port for the admin API.
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://controller:5000/v2.0 \
--internalurl=http://controller:5000/v2.0 \
--adminurl=http://controller:35357/v2.0
Verify the Identity Service installation
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
You can now use regular user name-based authentication.
Request an authentication token by using the admin user and the password you chose for that user:
keystone --os-username=admin --os-password=ADMIN_PASS \
  --os-auth-url=http://controller:35357/v2.0 token-get
In response, you receive a token paired with your user ID. This verifies that the Identity Service is running on the expected endpoint and that your user account is established with the expected credentials.
Verify that authorization behaves as expected. To do so, request authorization on a tenant:
keystone --os-username=admin --os-password=ADMIN_PASS \
  --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 \
  token-get
In response, you receive a token that includes the ID of the tenant that you specified. This verifies that your user account has an explicitly defined role on the specified tenant and the tenant exists as expected.
You can also set your --os-* variables in your environment to simplify command-line usage. Set up an admin-openrc.sh file with the admin credentials and admin endpoint:
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
Source this file to read in the environment variables:
source admin-openrc.sh
Verify that your admin-openrc.sh file is configured correctly. Run the same command without the --os-* arguments:
keystone token-get
The command returns a token and the ID of the specified tenant. This verifies that you have configured your environment variables correctly.
Verify that your admin account has authorization to perform administrative commands:
keystone user-list
keystone user-role-list --user admin --tenant admin
OpenStack Python clients
Service            Client      Package                   Description
Block Storage      cinder      python-cinderclient       Create and manage volumes.
Compute            nova        python-novaclient         Create and manage images, instances, and flavors.
Database Service   trove       python-troveclient        Create and manage databases.
Identity           keystone    python-keystoneclient     Create and manage users, tenants, roles, endpoints, and credentials.
Image Service      glance      python-glanceclient       Create and manage images.
Networking         neutron     python-neutronclient      Configure networks for guest servers. This client was previously called quantum.
Object Storage     swift       python-swiftclient        Gather statistics, list items, update metadata, and upload, download, and delete files stored by the Object Storage service. Gain access to an Object Storage installation for ad hoc processing.
Orchestration      heat        python-heatclient         Launch stacks from templates, view details of running stacks including events and resources, and update and delete stacks.
Telemetry          ceilometer  python-ceilometerclient   Create and collect measurements across OpenStack.
Install with pip
pip install python-PROJECTclient
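For example, to install the two clients used most in this guide:

pip install python-keystoneclient
pip install python-neutronclient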
Source default env for user, password:
vim mystack-openrc.sh
cat admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=adminPass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
source admin-openrc.sh
Create demo-openrc.sh
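The file contents are not shown above; presumably demo-openrc.sh mirrors admin-openrc.sh with the demo credentials against the public Identity endpoint (a sketch, DEMO_PASS being the password chosen for the demo user):

cat <<EOF > demo-openrc.sh
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://controller:5000/v2.0
EOF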
Image Service Glance:
Installation:
The Image Service provides the glance-api and glance-registry services, each with its own configuration file.
yum install openstack-glance python-glanceclient
openstack-config --set /etc/glance/glance-api.conf database \
  connection mysql://glanceUser:glancePass@controller/glance
openstack-config --set /etc/glance/glance-registry.conf database \
  connection mysql://glanceUser:glancePass@controller/glance
Create Database user:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL ON glance.* TO 'glanceUser'@'%' IDENTIFIED BY 'glancePass';
Query OK, 0 rows affected (0.00 sec)
Create Database tables:
su -s /bin/sh -c "glance-manage db_sync" glance
BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1090648
Workaround:
Set db_enforce_mysql_charset=False in /etc/glance/glance-api.conf, then run the sync again:
su -s /bin/sh -c "glance-manage db_sync" glance
Glance user
Create a glance user that the Image Service can use to authenticate with the Identity service. Choose a password and specify an email address for the glance user. Use the service tenant and give the user the admin role:
Note: source admin-openrc.sh
keystone user-create --name=glance --pass=glancePass \
  --email=glance@example.com
keystone user-role-add --user=glance --tenant=service --role=admin
Configure the Image Service to use the Identity Service for authentication.
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
  auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
  auth_host controller
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
  auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
  auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
  admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
  admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
  admin_password glancePass
openstack-config --set /etc/glance/glance-api.conf paste_deploy \
  flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \
  auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \
  auth_host controller
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \
  auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \
  auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \
  admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \
  admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \
  admin_password glancePass
openstack-config --set /etc/glance/glance-registry.conf paste_deploy \
  flavor keystone
Register the Image Service with the Identity service so that other OpenStack services can locate it. Register the service and create the endpoint:
$ keystone service-create --name=glance --type=image \
--description="OpenStack Image Service"
$ keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}') \
--publicurl=http://controller:9292 \
--internalurl=http://controller:9292 \
--adminurl=http://controller:9292
Start the glance-api and glance-registry services and configure them to start when the system boots:
# service openstack-glance-api start
# service openstack-glance-registry start
# chkconfig openstack-glance-api on
# chkconfig openstack-glance-registry on
Verify the Image Service installation
If errors occur, check /var/log/glance/{api,registry}.log for details.
Download the image into a dedicated directory using wget or curl:
$ mkdir /tmp/images
$ cd /tmp/images/
$ wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
Upload the image to the Image Service:
glance image-create --name=IMAGE_LABEL --disk-format=FILE_FORMAT \
  --container-format=CONTAINER_FORMAT --is-public=ACCESS_VALUE < IMAGE_FILE

$ source admin-openrc.sh
$ glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \
  --container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img
Alternative:
$ glance image-create --name="cirros-0.3.2-x86_64" --disk-format=qcow2 \ --container-format=bare --is-public=true \ --copy-from http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
Confirm image upload:
glance image-list
rm -r /tmp/images
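As a further optional check that the upload completed (image-show works by name or ID):

glance image-show cirros-0.3.2-x86_64   ## status should be 'active'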
Nova: Install Compute controller services
This section installs the Compute controller services. One or more compute nodes (hypervisors) will be configured separately later and added to scale horizontally.
Install the Compute packages necessary for the controller node.
yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor \ openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \ python-novaclient
Compute stores information in a database. In this guide, we use a MySQL database on the controller node. Configure Compute with the database location and credentials. Replace NOVA_DBPASS (here: novaPass) with the password for the database that you will create in a later step.
openstack-config --set /etc/nova/nova.conf \
  database connection mysql://novaUser:novaPass@controller/nova
Set these configuration keys to configure Compute to use the Qpid message broker:
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
Set these configuration keys to configure Compute to use the RabbitMQ message broker:
openstack-config --set /etc/nova/nova.conf \
  DEFAULT rpc_backend nova.openstack.common.rpc.impl_kombu
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
Set the my_ip, vncserver_listen, and vncserver_proxyclient_address configuration options to the management interface IP address of the controller node:
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.10.10.207
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 10.10.10.207
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.10.10.207
Use the password you created previously to log in as root. Create a nova database user:
mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'novaUser'@'localhost' \
  IDENTIFIED BY 'novaPass';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'novaUser'@'%' \
  IDENTIFIED BY 'novaPass';
Create the Compute service tables:
su -s /bin/sh -c "nova-manage db sync" nova
ERROR again. Debug with:
nova-manage --debug db sync
cat /var/log/nova/nova-manage.log
Old trick:
sudo /usr/bin/openstack-db --drop --service nova
sudo openstack-db --init --service nova --password novaPass
nova-manage db sync
Create a nova user that Compute uses to authenticate with the Identity Service. Use the service tenant and give the user the admin role:
source admin-openrc.sh
$ keystone user-create --name=nova --pass=novaPass --email=nova@example.com
$ keystone user-role-add --user=nova --tenant=service --role=admin
Configure Compute to use these credentials with the Identity Service running on the controller. Replace NOVA_PASS with your Compute password.
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password novaPass
You must register Compute with the Identity Service so that other OpenStack services can locate it. Register the service and specify the endpoint:
keystone service-create --name=nova --type=compute \
--description="OpenStack Compute"
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ compute / {print $2}') \
--publicurl=http://controller:8774/v2/%\(tenant_id\)s \
--internalurl=http://controller:8774/v2/%\(tenant_id\)s \
--adminurl=http://controller:8774/v2/%\(tenant_id\)s
Start Compute services and configure them to start when the system boots:
service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on
To verify your configuration, list available images:
nova --debug image-list
ERROR: the management IPs above were wrong. Fix them, then restart all nova services!
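Once the addresses are fixed and the services restarted, an optional sanity check on the controller (assuming the Icehouse nova client):

nova service-list   ## nova-cert, nova-consoleauth, nova-scheduler and nova-conductor should be enabled and up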
Configure compute node
After you configure the Compute service on the controller node, you must configure another system as a compute node. The compute node receives requests from the controller node and hosts virtual machine instances. You can run all services on a single node, but the examples in this guide use separate systems. This makes it easy to scale horizontally by adding additional Compute nodes following the instructions in this section.
The Compute service relies on a hypervisor to run virtual machine instances. OpenStack can use various hypervisors, but this guide uses KVM.
Install the Compute packages:
yum install openstack-nova-compute
Edit the /etc/nova/nova.conf configuration file:
openstack-config --set /etc/nova/nova.conf database connection mysql://novaUser:novaPass@controller/nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password novaPass
Configure the Compute service to use the RabbitMQ message broker (instead of Qpid) by setting these configuration keys:
# openstack-config --set /etc/nova/nova.conf \
  DEFAULT rpc_backend nova.openstack.common.rpc.impl_kombu
# openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
Configure Compute to provide remote console access to instances.
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.10.10.210
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.10.10.210
openstack-config --set /etc/nova/nova.conf \
  DEFAULT novncproxy_base_url http://controller:6080/vnc_auto.html
Specify the host that runs the Image Service.
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller
You must determine whether your system's processor and/or hypervisor support hardware acceleration for virtual machines.
Run the following command:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your system supports hardware acceleration, which typically requires no additional configuration.
If this command returns a value of zero, your system does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Run the following command:
# openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
Start the Compute service and its dependencies. Configure them to start automatically when the system boots.
For RHEL or CentOS:
# service libvirtd start
# service messagebus start
# service openstack-nova-compute start
# chkconfig libvirtd on
# chkconfig messagebus on
# chkconfig openstack-nova-compute on
For Fedora:
service libvirtd start
service dbus start
service openstack-nova-compute start
chkconfig libvirtd on
chkconfig dbus on
chkconfig openstack-nova-compute on
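After the compute node's service starts, it should register with the controller. An optional check from the controller:

nova service-list   ## a nova-compute entry for this host should appear with state 'up'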
Neutron: Network on controller node
Before you configure OpenStack Networking (neutron), you must create a database and Identity service credentials including a user and service.
Connect to the database as the root user, create the neutron database, and grant the proper access to it. Replace NEUTRON_DBPASS with a suitable password.
$ mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutronUser'@'localhost' \
  IDENTIFIED BY 'neutronPass';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutronUser'@'%' \
  IDENTIFIED BY 'neutronPass';
Create Identity service credentials for Networking:
Create the neutron user:
Replace NEUTRON_PASS with a suitable password and neutron@example.com with a suitable e-mail address.
keystone user-create --name neutron --pass neutronPass --email thuydang.de@gmail.com
Link the neutron user to the service tenant and admin role:
keystone user-role-add --user neutron --tenant service --role admin
Create the neutron service:
keystone service-create --name neutron --type network --description "OpenStack Networking"
Create the service endpoint:
$ keystone endpoint-create \
--service-id $(keystone service-list | awk '/ network / {print $2}') \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696
To install the Networking components
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient
To configure the Networking server component
The Networking server component configuration includes the database, authentication mechanism, message broker, topology change notifier, and plug-in.
Configure Networking to use the database:
Replace NEUTRON_DBPASS with a suitable password.
# openstack-config --set /etc/neutron/neutron.conf database connection \
  mysql://neutronUser:neutronPass@controller/neutron
Configure Networking to use the Identity service for authentication:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_host controller
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_password neutronPass
Configure Networking to use the message broker:
RabbitMQ
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
rpc_backend neutron.openstack.common.rpc.impl_kombu
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
rabbit_host controller
QPID
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_hostname controller
Configure Networking to notify Compute about network topology changes:
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
nova_url http://controller:8774/v2
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
nova_admin_username nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
nova_admin_password novaPass
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
nova_admin_auth_url http://controller:35357/v2.0
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
service_plugins router
[Note] Note
We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
verbose True
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances. However, the controller node does not need the OVS agent or service because it does not handle instance network traffic.
Run the following commands:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \
  tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
  firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
  enable_security_group True
To configure Compute to use Networking
By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
Run the following commands. Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/nova/nova.conf DEFAULT \
  network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_url http://controller:9696
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_tenant_name service
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_password neutronPass
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_auth_url http://controller:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT \
  linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT \
  firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT \
  security_group_api neutron
[Note] Note
By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
To finalize installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using ML2, for example, the symbolic link must point to /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Restart the Compute services:
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart
Start the Networking service and configure it to start when the system boots:
service neutron-server start
chkconfig neutron-server on
[Note] Note
Unlike other services, Networking typically does not require a separate step to populate the database because the neutron-server service populates it automatically. However, the packages for these distributions sometimes require running the neutron-db-manage command prior to starting the neutron-server service. We recommend attempting to start the service before manually populating the database. If the service returns database errors, perform the following operations:
Configure Networking to use long plug-in names:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
core_plugin neutron.plugins.ml2.plugin.Ml2Plugin
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
service_plugins neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
Populate the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugin.ini upgrade head" neutron
Attempt to start the neutron-server service again. You can return the core_plugin and service_plugins configuration keys to short plug-in names.
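Once neutron-server starts cleanly, a quick optional check that it answers API requests (run on the controller with admin credentials sourced):

source admin-openrc.sh
neutron ext-list   ## should print the table of loaded API extensions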
Configure Network Node
Prerequisites
Before you configure OpenStack Networking, you must enable certain kernel networking functions.
Edit /etc/sysctl.conf to contain the following:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_rfc1337 = 1
Implement the changes:
# sysctl -p
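Optionally confirm the new values took effect:

sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter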
To install the Networking components
# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-openvswitch
To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.
Configure Networking to use the Identity service for authentication:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_host controller
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_password neutronPass
Configure Networking to use the message broker:
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_kombu
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_host controller
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  service_plugins router
[Note] Note
We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
To configure the Layer-3 (L3) agent
The Layer-3 (L3) agent provides routing services for instance virtual networks.
Run the following commands:
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \
  interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \
  use_namespaces True
[Note] Note
We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.
To configure the DHCP agent
The DHCP agent provides DHCP services for instance virtual networks.
Run the following commands:
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
  interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
  dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
  use_namespaces True
[Note] Note
We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/dhcp_agent.ini to assist with troubleshooting.
Tunneling protocols such as generic routing encapsulation (GRE) include additional packet headers that increase overhead and decrease space available for the payload or user data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using the default Ethernet maximum transmission unit (MTU) of 1500 bytes. Internet protocol (IP) networks contain the path MTU discovery (PMTUD) mechanism to detect end-to-end MTU and adjust packet size accordingly. However, some operating systems and networks block or otherwise lack support for PMTUD causing performance degradation or connectivity failure.
Ideally, you can prevent these problems by enabling jumbo frames on the physical network that contains your tenant virtual networks. Jumbo frames support MTUs up to approximately 9000 bytes which negates the impact of GRE overhead on virtual networks. However, many network devices lack support for jumbo frames and OpenStack administrators often lack control of network infrastructure. Given the latter complications, you can also prevent MTU problems by reducing the instance MTU to account for GRE overhead. Determining the proper MTU value often takes experimentation, but 1454 bytes works in most environments. You can configure the DHCP server that assigns IP addresses to your instances to also adjust the MTU.
[Note] Note
Some cloud images such as CirrOS ignore the DHCP MTU option.
Run the following command:
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
Create and edit the /etc/neutron/dnsmasq-neutron.conf file and add the following keys:
dhcp-option-force=26,1454
Kill any existing dnsmasq processes:
# killall dnsmasq
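Once an instance boots on a tenant network, the reduced MTU can be verified from inside it, assuming the image honors DHCP option 26 (as noted above, CirrOS ignores it):

ip link show eth0   ## should report 'mtu 1454'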
To configure the metadata agent
The metadata agent provides configuration information such as credentials for remote access to instances.
Run the following commands:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. Replace METADATA_SECRET with a suitable secret for the metadata proxy.
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  auth_url http://controller:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  admin_tenant_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  admin_password neutronPass
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  metadata_proxy_shared_secret metadataSecret
[Note] Note
We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.
[Note] Note
Perform the next two steps on the controller node.
On the controller node, configure Compute to use the metadata service. Replace METADATA_SECRET with the secret you chose for the metadata proxy.
openstack-config --set /etc/nova/nova.conf DEFAULT \
  service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_metadata_proxy_shared_secret metadataSecret
On the controller node, restart the Compute API service:
service openstack-nova-api restart
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
Run the following commands. Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your network node. This guide uses 10.0.1.21 for the IP address of the instance tunnels network interface on the network node.
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \
  tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
  local_ip 10.0.1.21
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
  tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
  enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
  firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
  enable_security_group True
To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS. The external bridge br-ex handles external instance network traffic within OVS. The external bridge requires a port on the physical external network interface to provide instances with external network access. In essence, this port bridges the virtual and physical external networks in your environment.
Start the OVS service and configure it to start when the system boots:
service openvswitch start
chkconfig openvswitch on
Configure Interfaces
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-ens5
DEVICE=ens5
BOOTPROTO=static
#NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
EOF

cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.200.215
NETMASK=255.255.255.0
GATEWAY=192.168.200.1
EOF
Add the integration bridge:
ovs-vsctl add-br br-int
Add the external bridge:
ovs-vsctl add-br br-ex
ovs-vsctl show
c993ff93-7d03-42e2-8566-331d10442686
Bridge br-int
Port br-int
Interface br-int
type: internal
Bridge br-ex
Port "ens2"
Interface "ens2"
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.0.1"
Add a port to the external bridge that connects to the physical external network interface:
Replace INTERFACE_NAME with the actual interface name, for example eth2 or ens256 (here, the interface on the API network, 192.168.x.x).
# ovs-vsctl add-port br-ex INTERFACE_NAME
# ifup br-ex
Hardcode bridge with ifcfg-xxx: http://acidborg.wordpress.com/2010/01/20/how-to-configure-a-network-bridge-in-red-hat-fedora/
[Note] Note
Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.
To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K INTERFACE_NAME gro off
Network settings on Neutron Node
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.200.215
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no
EOF
ifdown br-ex
ifup br-ex
# http://blog.oddbit.com/2014/05/20/fedora-and-ovs-bridge-interfac/
ovs-vsctl add-port br-ex ens5
ovs-vsctl show
e5283c3d-fae9-4030-bb03-b242ab77a1be
Bridge br-int
Port br-int
Interface br-int
type: internal
Bridge br-ex
Port "ens5"
Interface "ens5"
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.3.0"
ping 192.168.200.1 should work
Problems TODOs
br-ex is not started automatically after boot; bring it up manually:
ifup br-ex
To finalize the installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using the ML2 plug-in, for example, the symbolic link must point to /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following commands: # ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Fix start script to use ml2_conf.ini
Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:
On Fedora:
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service

# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
Start the Networking services and configure them to start when the system boots:
# service neutron-openvswitch-agent start
# service neutron-l3-agent start
# service neutron-dhcp-agent start
# service neutron-metadata-agent start
# chkconfig neutron-openvswitch-agent on
# chkconfig neutron-l3-agent on
# chkconfig neutron-dhcp-agent on
# chkconfig neutron-metadata-agent on
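With all four agents running, verify that they registered with neutron-server. An optional check from the controller:

source admin-openrc.sh
neutron agent-list   ## the openvswitch, L3, DHCP, and metadata agents should show alive ':-)'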
Configure Compute Node
Prerequisites
Before you configure OpenStack Networking, you must enable certain kernel networking functions.
Edit /etc/sysctl.conf to contain the following:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Implement the changes:
# sysctl -p
To install the Networking components
# yum install openstack-neutron-ml2 openstack-neutron-openvswitch
To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.
Configure Networking to use the Identity service for authentication:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_host controller
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_password neutronPass
Configure Networking to use the message broker:
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_kombu
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_host controller
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  service_plugins router
[Note] Note
We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
Run the following commands. Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your compute node. The upstream guide uses 10.0.1.31 for the first compute node; this environment uses 10.20.20.211.
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \
  tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
  local_ip 10.20.20.211
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
  tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
  enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
  firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
  enable_security_group True
To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS.
Start the OVS service and configure it to start when the system boots:
# service openvswitch start
# chkconfig openvswitch on
Add the integration bridge:
# ovs-vsctl add-br br-int
To configure Compute to use Networking
By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
Run the following commands. Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/nova/nova.conf DEFAULT \
  network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_url http://controller:9696
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_tenant_name service
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_password neutronPass
openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_auth_url http://controller:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT \
  linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT \
  firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT \
  security_group_api neutron
[Note] Note
By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
To finalize the installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using the ML2 plug-in, for example, the symbolic link must point to /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Fix start script to use /etc/neutron/plugin.ini
Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:
On Fedora
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
Restart the Compute service:
# service openstack-nova-compute restart
Start the Open vSwitch (OVS) agent and configure it to start when the system boots:
# service neutron-openvswitch-agent start
# chkconfig neutron-openvswitch-agent on
Create initial Network
Before launching your first instance, you must create the necessary virtual network infrastructure to which the instance will connect, including the external network and tenant network. See Figure 7.1, “Initial networks”. After creating this infrastructure, we recommend that you verify connectivity and resolve any issues before proceeding further.
External Network
The external network typically provides internet access for your instances. By default, this network only allows internet access from instances using Network Address Translation (NAT). You can enable internet access to individual instances using a floating IP address and suitable security group rules. The admin tenant owns this network because it provides external network access for multiple tenants. You must also enable sharing to allow access by those tenants.
Perform these commands on the controller node.
To create the external network
Source the admin tenant credentials:
$ source admin-openrc.sh
Create the network:
$ neutron net-create ext-net --shared --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 893aebb9-1c1e-48be-8908-6b947f3237b3 |
| name                      | ext-net                              |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 54cd044c64d5408b83f843d63624e0d8     |
+---------------------------+--------------------------------------+
Error
Connection to neutron failed: [Errno 111] Connection refused
Get other debugging info:
lsof -i :9696
neutron agent-list            ## => connection to neutron failed: maximum attempts reached
netstat -lnpt | grep 9696     ## => no output
ps -ef | grep 5249            ## => root 20407 3185 0 10:22 tty1 00:00:00 grep --color=auto 5249
service neutron-server status ## => neutron-server start/pre-start, process 4865
iptables-save | grep 9696     ## => no output
Solution
Revert the long l3 plug-in name set during the earlier database fix:
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  service_plugins neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
back to the short router plug-in name:
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
Error: rabbitMQ
Debug:
systemctl | grep failed
rabbitmq-server.service   loaded failed failed   RabbitMQ broker
To create a subnet on the external network
Like a physical network, a virtual network requires a subnet assigned to it. The external network shares the same subnet and gateway associated with the physical network connected to the external interface on the network node. You should specify an exclusive slice of this subnet for router and floating IP addresses to prevent interference with other devices on the external network.
Replace FLOATING_IP_START and FLOATING_IP_END with the first and last IP addresses of the range that you want to allocate for floating IP addresses. Replace EXTERNAL_NETWORK_CIDR with the subnet associated with the physical network. Replace EXTERNAL_NETWORK_GATEWAY with the gateway associated with the physical network, typically the “.1” IP address. You should disable DHCP on this subnet because instances do not connect directly to the external network and floating IP addresses require manual assignment.
Create the subnet:
$ neutron subnet-create ext-net --name ext-subnet \
  --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \
  --disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR
For example, using 203.0.113.0/24 with floating IP address range 203.0.113.101 to 203.0.113.200:
$ neutron subnet-create ext-net --name ext-subnet \
  --allocation-pool start=203.0.113.101,end=203.0.113.200 \
  --disable-dhcp --gateway 203.0.113.1 203.0.113.0/24
Our subnet:
neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=192.168.200.101,end=192.168.200.200 \
--disable-dhcp --gateway 192.168.200.1 192.168.200.0/24
Created a new subnet:
+------------------+--------------------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------------------+
| allocation_pools | {"start": "192.168.200.101", "end": "192.168.200.200"} |
| cidr | 192.168.200.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.200.1 |
| host_routes | |
| id | d7657925-00db-4beb-be07-f6e71338c3f1 |
| ip_version | 4 |
| name | ext-subnet |
| network_id | 3e466eec-a9e9-49e7-84e3-f3b13d9daada |
| tenant_id | 1131460a56374ec1b1cd542689c6e95c |
+------------------+--------------------------------------------------------+
Create Tenant Network
The tenant network provides internal network access for instances. The architecture isolates this type of network from other tenants. The demo tenant owns this network because it only provides network access for instances within it.
Perform these commands on the controller node.
To create the tenant network
Source the demo tenant credentials:
$ source demo-openrc.sh
Create the network:
$ neutron net-create demo-net
Created a new network:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| id             | ac108952-6096-4243-adf4-bb6615b3de28 |
| name           | demo-net                             |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | cdef0071a0194d19ac6bb63802dc9bae     |
+----------------+--------------------------------------+
Like the external network, your tenant network also requires a subnet attached to it. You can specify any valid subnet because the architecture isolates tenant networks. Replace TENANT_NETWORK_CIDR with the subnet you want to associate with the tenant network. Replace TENANT_NETWORK_GATEWAY with the gateway you want to associate with this network, typically the “.1” IP address. By default, this subnet will use DHCP so your instances can obtain IP addresses.
To create a subnet on the tenant network
Create the subnet:
$ neutron subnet-create demo-net --name demo-subnet \
--gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR
Example using 192.168.1.0/24:
$ neutron subnet-create demo-net --name demo-subnet \
--gateway 192.168.1.1 192.168.1.0/24
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} |
| cidr | 192.168.1.0/24 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.1.1 |
| host_routes | |
| id | 69d38773-794a-4e49-b887-6de6734e792d |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | demo-subnet |
| network_id | ac108952-6096-4243-adf4-bb6615b3de28 |
| tenant_id | cdef0071a0194d19ac6bb63802dc9bae |
+-------------------+------------------------------------------------------+
To create a router on the tenant network and attach the external and tenant networks to it
A virtual router passes network traffic between two or more virtual networks. Each router requires one or more interfaces and/or gateways that provide access to specific networks. In this case, you will create a router and attach your tenant and external networks to it.
Create the router:
$ neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 635660ae-a254-4feb-8993-295aa9ec6418 |
| name                  | demo-router                          |
| status                | ACTIVE                               |
| tenant_id             | cdef0071a0194d19ac6bb63802dc9bae     |
+-----------------------+--------------------------------------+
Attach the router to the demo tenant subnet:
$ neutron router-interface-add demo-router demo-subnet
Added interface b1a894fd-aee8-475c-9262-4342afdc1b58 to router demo-router.
Attach the router to the external network by setting it as the gateway:
$ neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router
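The router gateway normally claims the lowest address in the floating IP allocation pool, 192.168.200.101 in this setup. As a hedged connectivity check, list the router ports and ping that address from a host on the external 192.168.200.0/24 network:
neutron router-port-list demo-router
ping -c 4 192.168.200.101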
Horizon
Install the dashboard
Before you can install and configure the dashboard, meet the requirements in the section called “System requirements”.
[Note] Note
When you install only Object Storage and the Identity Service, even if you install the dashboard, it does not pull up projects and is unusable.
For more information about how to deploy the dashboard, see deployment topics in the developer documentation.
Install the dashboard on the node that can contact the Identity Service as root:
yum install memcached python-memcached mod_wsgi openstack-dashboard
Modify the value of CACHES['default']['LOCATION'] in /etc/openstack-dashboard/local_settings to match the ones set in /etc/sysconfig/memcached.
Open /etc/openstack-dashboard/local_settings and look for this line:
CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211'
}
}
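To see what memcached is actually configured with, you can check the service defaults (a quick check; typical RHEL packaging puts these in /etc/sysconfig/memcached, but your values may differ):
grep -E '^(PORT|OPTIONS)' /etc/sysconfig/memcached   # e.g. PORT="11211"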
[Note] Notes
The address and port must match the ones set in /etc/sysconfig/memcached.
If you change the memcached settings, you must restart the Apache web server for the changes to take effect.
You can use back ends other than memcached for session storage. Set the session back end through the SESSION_ENGINE option.
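For example, to keep sessions in the memcached cache configured above, the standard Django setting would look like this in local_settings (a sketch; signed-cookie and database back ends are also possible):
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'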
To change the timezone, use the dashboard or edit the /etc/openstack-dashboard/local_settings file.
Change the following parameter: TIME_ZONE = "UTC"
Update ALLOWED_HOSTS in local_settings to include the addresses you wish to access the dashboard from.
Edit /etc/openstack-dashboard/local_settings:
ALLOWED_HOSTS = ['localhost', 'my-desktop']
This guide assumes that you are running the dashboard on the controller node. You can easily run the dashboard on a separate server by changing the appropriate settings in local_settings.
Edit /etc/openstack-dashboard/local_settings and change OPENSTACK_HOST to the hostname of your Identity Service:
OPENSTACK_HOST = "controller"
Ensure that the SELinux policy of the system is configured to allow network connections to the HTTP server:
# setsebool -P httpd_can_network_connect on
Start the Apache web server and memcached:
service httpd start
service memcached start
chkconfig httpd on
chkconfig memcached on
You can now access the dashboard at http://controller/dashboard. Log in with credentials for any user that you created with the OpenStack Identity Service.
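As a quick smoke test (an optional check), confirm the dashboard answers over HTTP from the controller:
curl -sI http://controller/dashboard | head -n 1   # expect a 200 or a redirect to the login page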
Cinder
Swift
Heat
Install the Orchestration service
Install the Orchestration module on the controller node:
yum install openstack-heat-api openstack-heat-engine \
  openstack-heat-api-cfn
In the configuration file, specify the location of the database where the Orchestration service stores data. These examples use a MySQL database on the controller node with a heatUser account and the password heatPass; substitute your own credentials as appropriate:
openstack-config --set /etc/heat/heat.conf \
  database connection mysql://heatUser:heatPass@controller/heat
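You can read the value back to confirm it was written (openstack-config in recent openstack-utils also supports --get):
openstack-config --get /etc/heat/heat.conf database connection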
Use the password that you set previously to log in as root and create a heat database user:
$ mysql -u root -p
mysql> CREATE DATABASE heat;
mysql> GRANT ALL PRIVILEGES ON heat.* TO 'heatUser'@'localhost' \
    IDENTIFIED BY 'heatPass';
mysql> GRANT ALL PRIVILEGES ON heat.* TO 'heatUser'@'%' \
    IDENTIFIED BY 'heatPass';
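Optionally, verify that the grants work before running db_sync (assumes the controller host name resolves from where you test):
mysql -u heatUser -pheatPass -h controller -e 'SHOW DATABASES;' | grep heat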
Create the heat service tables:
su -s /bin/sh -c "heat-manage db_sync" heat
[Note] Note
Ignore DeprecationWarning errors.
The upstream guide configures the Qpid message broker at this point (openstack-config --set /etc/heat/heat.conf DEFAULT qpid_hostname controller), but this setup uses RabbitMQ. Configure the Orchestration Service to use the Rabbit message broker:
openstack-config --set /etc/heat/heat.conf DEFAULT \
rpc_backend heat.openstack.common.rpc.impl_kombu
openstack-config --set /etc/heat/heat.conf DEFAULT \
rabbit_host controller
Create a heat user that the Orchestration service can use to authenticate with the Identity Service. Use the service tenant and give the user the admin role:
keystone user-create --name=heat --pass=heatPass \
  --email=thuydang.de@gmail.com
keystone user-role-add --user=heat --tenant=service --role=admin
Run the following commands to configure the Orchestration service to authenticate with the Identity service:
openstack-config --set /etc/heat/heat.conf keystone_authtoken \
  auth_uri http://controller:5000/v2.0
openstack-config --set /etc/heat/heat.conf keystone_authtoken \
  auth_port 35357
openstack-config --set /etc/heat/heat.conf keystone_authtoken \
  auth_protocol http
openstack-config --set /etc/heat/heat.conf keystone_authtoken \
  admin_tenant_name service
openstack-config --set /etc/heat/heat.conf keystone_authtoken \
  admin_user heat
openstack-config --set /etc/heat/heat.conf keystone_authtoken \
  admin_password heatPass
openstack-config --set /etc/heat/heat.conf ec2authtoken \
  auth_uri http://controller:5000/v2.0
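To sanity-check the heat service credentials against Keystone (a hedged check using the v2.0 CLI):
keystone --os-username heat --os-password heatPass \
  --os-tenant-name service --os-auth-url http://controller:5000/v2.0 \
  token-get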
Register the Heat and CloudFormation APIs with the Identity Service so that other OpenStack services can locate these APIs. Register the services and specify the endpoints:
keystone service-create --name=heat --type=orchestration \
--description="Orchestration"
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ orchestration / {print $2}') \
--publicurl=http://controller:8004/v1/%\(tenant_id\)s \
--internalurl=http://controller:8004/v1/%\(tenant_id\)s \
--adminurl=http://controller:8004/v1/%\(tenant_id\)s
keystone service-create --name=heat-cfn --type=cloudformation \
--description="Orchestration CloudFormation"
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ cloudformation / {print $2}') \
--publicurl=http://controller:8000/v1 \
--internalurl=http://controller:8000/v1 \
--adminurl=http://controller:8000/v1
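Verify that both services and their endpoints were registered (optional check):
keystone service-list | grep -E 'orchestration|cloudformation'
keystone endpoint-list | grep -E ':8004|:8000'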
Create the heat_stack_user role.
This role is used as the default role for users created by the Orchestration module. Run the following command to create the heat_stack_user role:
keystone role-create --name heat_stack_user
Configure the metadata and waitcondition servers' URLs.
Run the following commands to modify the [DEFAULT] section of the /etc/heat/heat.conf file:
openstack-config --set /etc/heat/heat.conf \
DEFAULT heat_metadata_server_url http://controller:8000
openstack-config --set /etc/heat/heat.conf \
DEFAULT heat_waitcondition_server_url http://controller:8000/v1/waitcondition
[Note] Note
If your architecture does not include a DNS setup, use the IP address of the controller (for example, 10.0.0.11) instead of the controller host name in these URLs. If you keep the host name, make sure that the instances can resolve it.
Start the heat-api, heat-api-cfn and heat-engine services and configure them to start when the system boots:
service openstack-heat-api start
service openstack-heat-api-cfn start
service openstack-heat-engine start
chkconfig openstack-heat-api on
chkconfig openstack-heat-api-cfn on
chkconfig openstack-heat-engine on
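To confirm the services came up (a hedged smoke test; the API root normally returns a JSON list of supported versions):
service openstack-heat-engine status
curl -s http://controller:8004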
Error
If heat-api fails to start:
chmod 777 /var/log/heat/heat-manage.log
su -s /bin/sh -c 'heat-manage --debug db_sync' heat
Verify the Orchestration service installation
To verify that the Orchestration service is installed and configured correctly, make sure that your credentials are set up correctly in the demo-openrc.sh file. Source the file, as follows:
source demo-openrc.sh
The Orchestration Module uses templates to describe stacks. To learn about the template languages, see the Template Guide in the Heat developer documentation.
Create a test template in the test-stack.yml file with the following content:
heat_template_version: 2013-05-23
description: Test Template
parameters:
  ImageID:
    type: string
    description: Image used to boot a server
  NetID:
    type: string
    description: Network ID for the server
resources:
  server1:
    type: OS::Nova::Server
    properties:
      name: "Test server"
      image: { get_param: ImageID }
      flavor: "m1.tiny"
      networks:
      - network: { get_param: NetID }
outputs:
  server1_private_ip:
    description: IP address of the server in the private network
    value: { get_attr: [ server1, first_address ] }
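Before creating the stack, you can ask Heat to check the template syntax (an optional step; template-validate is part of python-heatclient):
$ heat template-validate -f test-stack.yml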
Use the heat stack-create command to create a stack from this template:
$ NET_ID=$(nova net-list | awk '/ demo-net / { print $2 }')
$ heat stack-create -f test-stack.yml \
-P "ImageID=cirros-0.3.2-x86_64;NetID=$NET_ID" testStack
+--------------------------------------+------------+--------------------+----------------------+
| id | stack_name | stack_status | creation_time |
+--------------------------------------+------------+--------------------+----------------------+
| 477d96b4-d547-4069-938d-32ee990834af | testStack | CREATE_IN_PROGRESS | 2014-04-06T15:11:01Z |
+--------------------------------------+------------+--------------------+----------------------+
Verify that the stack was created successfully with the heat stack-list command:
$ heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| 477d96b4-d547-4069-938d-32ee990834af | testStack  | CREATE_COMPLETE | 2014-04-06T15:11:01Z |
+--------------------------------------+------------+-----------------+----------------------+
Troubleshooting stack-create
heat stack-create -f test-stack.yml -P "ImageID=cirros-0.3.2-x86_64;NetID=$NET_ID" testStack
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| e39e7cff-f9d9-4854-87bf-78e809cb2303 | testStack  | CREATE_IN_PROGRESS | 2014-10-13T14:14:50Z |
+--------------------------------------+------------+--------------------+----------------------+
less /var/log/heat/engine.log
heat resource-list testStack
+---------------+--------------------------------------+------------------+-----------------+----------------------+
| resource_name | physical_resource_id                 | resource_type    | resource_status | updated_time         |
+---------------+--------------------------------------+------------------+-----------------+----------------------+
| server1       | d18714ca-8ad2-41ca-9d65-b9b8b21a4f70 | OS::Nova::Server | CREATE_FAILED   | 2014-10-13T14:14:51Z |
+---------------+--------------------------------------+------------------+-----------------+----------------------+
heat resource-show testStack server1
+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| Property               | Value                                                                                                                                      |
+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| description            |                                                                                                                                            |
| links                  | http://controller:8004/v1/53a50dea852e4714bc51c4946cf7ff71/stacks/testStack/e39e7cff-f9d9-4854-87bf-78e809cb2303/resources/server1 (self)  |
|                        | http://controller:8004/v1/53a50dea852e4714bc51c4946cf7ff71/stacks/testStack/e39e7cff-f9d9-4854-87bf-78e809cb2303 (stack)                   |
| logical_resource_id    | server1                                                                                                                                    |
| physical_resource_id   | d18714ca-8ad2-41ca-9d65-b9b8b21a4f70                                                                                                       |
| required_by            |                                                                                                                                            |
| resource_name          | server1                                                                                                                                    |
| resource_status        | CREATE_FAILED                                                                                                                              |
| resource_status_reason | Error: Creation of server Test server failed.                                                                                              |
| resource_type          | OS::Nova::Server                                                                                                                           |
| updated_time           | 2014-10-13T14:14:51Z                                                                                                                       |
+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
heat event-list testStack
+---------------+--------------------------------------+-----------------------------------------------+--------------------+----------------------+
| resource_name | id                                   | resource_status_reason                        | resource_status    | event_time           |
+---------------+--------------------------------------+-----------------------------------------------+--------------------+----------------------+
| server1       | 4cbad49a-7a8e-41d9-9ea1-c7b9bd9de2db | state changed                                 | CREATE_IN_PROGRESS | 2014-10-13T14:14:51Z |
| server1       | bb0bfdf5-1ac4-4fdf-a1c7-f5b0eae5c5fb | Error: Creation of server Test server failed. | CREATE_FAILED      | 2014-10-13T14:14:52Z |
+---------------+--------------------------------------+-----------------------------------------------+--------------------+----------------------+
heat event-show server1 bb0bfdf5-1ac4-4fdf-a1c7-f5b0eae5c5fb
usage: heat event-show <NAME or ID> <RESOURCE> <EVENT>
heat event-show: error: too few arguments
heat event-show testStack server1 bb0bfdf5-1ac4-4fdf-a1c7-f5b0eae5c5fb
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property | Value |
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| event_time | 2014-10-13T14:14:52Z |
| id | bb0bfdf5-1ac4-4fdf-a1c7-f5b0eae5c5fb |
| links | http://controller:8004/v1/53a50dea852e4714bc51c4946cf7ff71/stacks/testStack/e39e7cff-f9d9-4854-87bf-78e809cb2303/resources/server1/events/bb0bfdf5-1ac4-4fdf-a1c7-f5b0eae5c5fb (self) |
| | http://controller:8004/v1/53a50dea852e4714bc51c4946cf7ff71/stacks/testStack/e39e7cff-f9d9-4854-87bf-78e809cb2303/resources/server1 (resource) |
| | http://controller:8004/v1/53a50dea852e4714bc51c4946cf7ff71/stacks/testStack/e39e7cff-f9d9-4854-87bf-78e809cb2303 (stack) |
| logical_resource_id | server1 |
| physical_resource_id | d18714ca-8ad2-41ca-9d65-b9b8b21a4f70 |
| resource_name | server1 |
| resource_properties | { |
| | "admin_pass": null, |
| | "user_data_format": "HEAT_CFNTOOLS", |
| | "admin_user": null, |
| | "name": "Test server", |
| | "block_device_mapping": null, |
| | "key_name": null, |
| | "image": "cirros-0.3.2-x86_64", |
| | "availability_zone": null, |
| | "image_update_policy": "REPLACE", |
| | "software_config_transport": "POLL_SERVER_CFN", |
| | "diskConfig": null, |
| | "metadata": null, |
| | "personality": {}, |
| | "user_data": "", |
| | "flavor_update_policy": "RESIZE", |
| | "flavor": "m1.tiny", |
| | "config_drive": null, |
| | "reservation_id": null, |
| | "networks": [ |
| | { |
| | "uuid": null, |
| | "fixed_ip": null, |
| | "network": "ec3b34ac-8890-4a2e-b223-29e914888d7b", |
| | "port": null |
| | } |
| | ], |
| | "security_groups": [], |
| | "scheduler_hints": null |
| | } |
| resource_status | CREATE_FAILED |
| resource_status_reason | Error: Creation of server Test server failed. |
| resource_type | OS::Nova::Server |
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Troubleshooting 2
[root@controller openstack]# heat stack-list
+--------------------------------------+------------+---------------+----------------------+
| id | stack_name | stack_status | creation_time |
+--------------------------------------+------------+---------------+----------------------+
| 28195295-be4e-4c89-9d6e-a00e6d51ff58 | testStack | CREATE_FAILED | 2014-10-13T14:30:54Z |
+--------------------------------------+------------+---------------+----------------------+
[root@controller openstack]# nova list
+--------------------------------------+-------------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------+--------+------------+-------------+----------+
| 5230eb6e-c152-459f-8a58-15b9ae21844e | Test server | ERROR | - | NOSTATE | |
+--------------------------------------+-------------+--------+------------+-------------+----------+
[root@controller openstack]#
[root@controller openstack]#
[root@controller openstack]# nova show 5230eb6e-c152-459f-8a58-15b9ae21844e
+--------------------------------------+------------------------------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | error |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2014-10-13T14:30:56Z |
| fault | {"message": "No valid host was found. ", "code": 500, "created": "2014-10-13T14:30:57Z"} |
| flavor | m1.tiny (1) |
| hostId | |
| id | 5230eb6e-c152-459f-8a58-15b9ae21844e |
| image | cirros-0.3.2-x86_64 (4e05adaa-bc94-45be-9ed0-7e3931df553b) |
| key_name | - |
| metadata | {} |
| name | Test server |
| os-extended-volumes:volumes_attached | [] |
| status | ERROR |
| tenant_id | 53a50dea852e4714bc51c4946cf7ff71 |
| updated | 2014-10-13T14:30:57Z |
| user_id | ec912e9a345a41e28c9781aeda1e97fb |
+--------------------------------------+------------------------------------------------------------------------------------------+
Possible cause: the image request is not authorized because the compute node is not authenticating through Keystone.
On compute node:
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
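After changing nova.conf, restart the compute service so the new auth strategy takes effect (assumed follow-up step):
systemctl restart openstack-nova-compute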
"Unexpected vif_type=%s") % vif_type)\n', u'NovaException: Unexpected vif_type=binding_failed\n']
This is shown on the controller:
tail -n40 /var/log/nova/nova-scheduler.log
Try 1
Goto Compute
systemctl stop neutron-openvswitch-agent.service
ovs-vsctl del-br br-int
ovs-vsctl add-br br-int
systemctl restart openstack-nova-compute
systemctl start neutron-openvswitch-agent.service
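You can confirm that br-int was recreated and the agent re-plugged its ports (a quick check):
ovs-vsctl show
ovs-vsctl list-ports br-int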
Try 2
On network node
neutron agent-list
neutron agent-show <agent-id>   # use an ID from agent-list
