OpenStack Testbed with KVM VMs
This tutorial walks through setting up OpenStack nodes as KVM virtual machines.
Topology
Nodes
- Host
- Controller Node
- Compute Node
- Network Node
Subnets
Subnets: Private network
10.10.10.0/24:
OpenStack management network. All nodes have a static IP configured on this network (iface1, ens1).
10.20.20.0/24:
OpenStack data network, bridging the Network and Compute nodes. All nodes have a static IP configured on this network (iface1, ens1).
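Once the nodes are configured (the addresses below are the ones used later in this guide), a quick connectivity check, run for example from the compute node, is:

ping -c 3 10.10.10.213   # network node over the management network
ping -c 3 10.20.20.214   # network node over the data network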
OVS on Compute Node
options: {in_key=flow, local_ip="10.20.20.211", out_key=flow, remote_ip="10.20.20.214"}
[root@compute fedora]# ovs-vsctl show
7a58a918-49f3-4a4c-898e-10b933265b38
Bridge br-tun
Port "gre-0a1414d6"
Interface "gre-0a1414d6"
type: gre
options: {in_key=flow, local_ip="10.20.20.211", out_key=flow, remote_ip="10.20.20.214"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
fail_mode: secure
Port "qvoce788b97-6e"
tag: 1
Interface "qvoce788b97-6e"
Port br-int
Interface br-int
type: internal
Port "qvo99842140-53"
tag: 4095
Interface "qvo99842140-53"
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "qvod24021c7-92"
tag: 1
Interface "qvod24021c7-92"
ovs_version: "2.3.0"
OVS on Network node:
options: {in_key=flow, local_ip="10.20.20.214", out_key=flow, remote_ip="10.20.20.211"}
[root@network fedora]# ovs-vsctl show
e5283c3d-fae9-4030-bb03-b242ab77a1be
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "gre-10.20.20.211"
Interface "gre-10.20.20.211"
type: gre
options: {in_key=flow, local_ip="10.20.20.214", out_key=flow, remote_ip="10.20.20.211"}
Bridge br-int
Port br-int
Interface br-int
type: internal
Port "qr-7e7bb376-ed"
tag: 1
Interface "qr-7e7bb376-ed"
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "tap23971a4c-58"
tag: 1
Interface "tap23971a4c-58"
type: internal
Bridge br-ex
Port "ens5"
Interface "ens5"
Port br-ex
Interface br-ex
type: internal
Port "qg-d7989507-bd"
Interface "qg-d7989507-bd"
type: internal
ovs_version: "2.3.0"
Subnets: External Network
192.168.200.0/24
Subnet for accessing the Controller's Horizon dashboard and the Network node's floating IPs. Floating IPs are assigned to OpenStack VMs so they can be reached from the external network. The Network node controls the flows to the VMs (NAT); the VMs are not aware of their floating IPs.
The IPs can be provided by a DHCP service.
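Once Neutron is running (see the Networking sections below), the NAT rules behind a floating IP can be inspected on the Network node inside the router namespace. This is just a quick way to see the DNAT/SNAT flows; ROUTER_ID is a placeholder for your router's UUID:

ip netns list
ip netns exec qrouter-ROUTER_ID iptables -t nat -S | grep -E 'DNAT|SNAT'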
VM on Host Machine
Subnets
The host machine is configured with 3 virtual subnets (bridges). Two of them are private virtual bridges for the management and data networks between OpenStack nodes. The third is the public bridge for connections between OpenStack nodes and the external network.
Private virtual bridge: allows VMs to connect with each other and with the Internet. The host bridge gets an IP assigned automatically and acts as gateway; it is the bridge interface and the (virtual) host interface at the same time.
Public bridge: allows VMs to connect with each other and with real hosts in the host's subnet. One real interface of the host (eth0) is used as the bridge interface for forwarding packets to and from the external subnet and the VMs. That interface carries no IP; the host's IP is assigned to the virtual bridge (DHCP). All VMs access the Internet through the external gateway, just like any other host in the external subnet.
More details about the bridge here: http://www.linux-kvm.org/page/Networking
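Once the bridges have been created by stack.sh (see below), they can be inspected from the host, for example:

brctl show
ip addr show stackbr2   # the public bridge carries the host's 192.168.200.1 address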
VMs
Three VMs are setup as Openstack nodes.
Workspace on host machine:
[dang@localhost mystack]$ tree
.
├── create_image.sh
├── images
│   ├── Fedora-x86_64-20-20141008-sda-compute2.qcow2
│   ├── Fedora-x86_64-20-20141008-sda-compute.qcow2
│   ├── Fedora-x86_64-20-20141008-sda-controller.qcow2
│   ├── Fedora-x86_64-20-20141008-sda-network.qcow2
│   └── Fedora-x86_64-20-20141008-sda.qcow2
├── isos
│   └── Fedora-20-x86_64-netinst.iso
├── keys
│   ├── authorized_keys
│   ├── id_dsa
│   └── id_dsa.pub
├── scripts
│   ├── kvm_network.sh
│   ├── qemu-ifdown.sh
│   ├── qemu-ifup.sh
│   ├── qemu-ifup-stackbr0.sh
│   ├── qemu-ifup-stackbr1.sh
│   └── qemu-ifup-stackbr2.sh
└── stack.sh
stack.sh
#!/bin/sh
set -x

DIR=$( cd "$( dirname "$0" )" && pwd )

# SWITCH 1: host-only network 10.10.10.1
SWITCH_0=stackbr0
# SWITCH 2: host-only network 10.20.20.1
SWITCH_1=stackbr1
# SWITCH 3: host-only network 192.168.200.1
SWITCH_2=stackbr2

# "$@" is the argument array, i.e. "do_brctl aaa bbb" runs "brctl aaa bbb".
do_brctl() {
    sudo brctl "$@"
}

do_ifconfig() {
    sudo ifconfig "$@"
}

do_dnsmasq() {
    sudo dnsmasq "$@"
}

do_iptables_restore() {
    sudo iptables-restore "$@"
}

# $1: BRIDGE, e.g. kvmbr0
# $2: GATEWAY, e.g. 192.168.101.1
# $3: DHCPRANGE, e.g. 192.168.101.2,192.168.101.254
start_dnsmasq() {
    do_dnsmasq \
        --strict-order \
        --except-interface=lo \
        --interface=$1 \
        --listen-address=$2 \
        --bind-interfaces \
        --dhcp-range=$3 \
        --conf-file="" \
        --pid-file=/var/run/qemu-dnsmasq-$1.pid \
        --dhcp-leasefile=/var/run/qemu-dnsmasq-$1.leases \
        --dhcp-no-override \
        ${TFTPROOT:+"--enable-tftp"} \
        ${TFTPROOT:+"--tftp-root=$TFTPROOT"} \
        ${BOOTP:+"--dhcp-boot=$BOOTP"}
}

stop_dnsmasq() {
    # dnsmasq dies automatically when its bridge is destroyed
    :
}

# Returns 0 if the bridge device $1 exists, 1 otherwise.
check_bridge_status() {
    modprobe kvm
    modprobe kvm_intel
    modprobe tun
    echo "Checking existence of bridge device $1"
    BR_STATUS=$(ifconfig | grep "$1")
    if [ -z "$BR_STATUS" ]; then
        return 1
    else
        return 0
    fi
}

create_bridge() {
    if check_bridge_status "$1"; then
        echo "Bridge $1 already exists"
    else
        do_brctl addbr "$1"
        do_brctl stp "$1" off
        do_brctl setfd "$1" 0
        do_ifconfig "$1" "$2" netmask "$3" up
        #ip a a 2001:db8:1234:5::1:1/64 dev kvmbr0
        sleep 0.5s
    fi
}

del_bridge() {
    echo "Destroying bridge $1"
    do_ifconfig $1 down
    do_brctl delbr $1
}

############
add_filter_rules() {
    BRIDGE=$1
    NETWORK=$2
    NETMASK=$3
    sudo iptables -F
    sudo iptables -t nat -F
    do_iptables_restore <<EOF
*nat
:PREROUTING ACCEPT [61:9671]
:POSTROUTING ACCEPT [121:7499]
:OUTPUT ACCEPT [132:8691]
-A POSTROUTING -s $NETWORK/$NETMASK -j MASQUERADE
COMMIT
*filter
:INPUT ACCEPT [1453:976046]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1605:194911]
-A INPUT -i $BRIDGE -p tcp -m tcp --dport 67 -j ACCEPT
-A INPUT -i $BRIDGE -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i $BRIDGE -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i $BRIDGE -p udp -m udp --dport 53 -j ACCEPT
-A FORWARD -i $BRIDGE -o $BRIDGE -j ACCEPT
-A FORWARD -s $NETWORK/$NETMASK -i $BRIDGE -j ACCEPT
-A FORWARD -d $NETWORK/$NETMASK -o $BRIDGE -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o $BRIDGE -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i $BRIDGE -j REJECT --reject-with icmp-port-unreachable
COMMIT
EOF
}

###########
test_bridge() {
    create_bridge "$SWITCH_0" "10.10.10.1" "255.255.255.0"
    create_bridge "$SWITCH_1" "10.20.20.1" "255.255.255.0"
    create_bridge "$SWITCH_2" "192.168.200.1" "255.255.255.0"
}

test_nat() {
    add_filter_rules "$SWITCH_2" "192.168.200.1" "255.255.255.0"
    #add_filter_rules "$SWITCH_0" "10.10.10.1" "255.255.255.0"
}

destroy_bridges() {
    del_bridge $SWITCH_0
    del_bridge $SWITCH_1
    del_bridge $SWITCH_2
}

test_dnsmasq() {
    #start_dnsmasq "$SWITCH_0" "10.10.10.1" "10.10.10.2,10.10.10.254"
    #start_dnsmasq "$SWITCH_1" "10.20.20.1" "10.20.20.2,10.20.20.254"
    start_dnsmasq "$SWITCH_2" "192.168.200.1" "192.168.200.2,192.168.200.254"
    :
}

destroy_dnsmasq() {
    sudo killall dnsmasq
}

random_mac() {
    printf 'DE:AD:BE:EF:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256))
}

test_vm() {
    sudo qemu-kvm -hda $DIR/images/Fedora-x86_64-20-20141008-sda-controller.qcow2 \
        -m 1024 -vnc :0 \
        -device e1000,netdev=snet0,mac=DE:AD:BE:EF:10:01 -netdev tap,id=snet0,script=$DIR/scripts/qemu-ifup-stackbr0.sh \
        -device e1000,netdev=snet1,mac=DE:AD:BE:EF:10:02 -netdev tap,id=snet1,script=$DIR/scripts/qemu-ifup-stackbr1.sh \
        -device e1000,netdev=snet2,mac=DE:AD:BE:EF:10:03 -netdev tap,id=snet2,script=$DIR/scripts/qemu-ifup-stackbr2.sh &
}

test_compute_vm() {
    # Compute Node
    sudo qemu-kvm -hda $DIR/images/Fedora-x86_64-20-20141008-sda-compute2.qcow2 \
        -cpu host \
        -smp cpus=2 \
        -m 4096 -vnc :1 \
        -device e1000,netdev=snet0,mac=DE:AD:BE:EF:10:04 -netdev tap,id=snet0,script=$DIR/scripts/qemu-ifup-stackbr0.sh \
        -device e1000,netdev=snet1,mac=DE:AD:BE:EF:10:05 -netdev tap,id=snet1,script=$DIR/scripts/qemu-ifup-stackbr1.sh \
        -device e1000,netdev=snet2,mac=DE:AD:BE:EF:10:06 -netdev tap,id=snet2,script=$DIR/scripts/qemu-ifup-stackbr2.sh &
}

test_controller_vm() {
    # Controller Node
    #sudo qemu-kvm -hda $DIR/images/Fedora-x86_64-20-20140618-sda-controller.qcow2
    # The cirros login account is cirros. The password is cubswin:)
    sudo qemu-kvm -hda $DIR/images/Fedora-x86_64-20-20141008-sda-controller2.qcow2 \
        -m 1024 -vnc :0 \
        -device e1000,netdev=snet0,mac=DE:AD:BE:EF:10:01 -netdev tap,id=snet0,script=$DIR/scripts/qemu-ifup-stackbr0.sh \
        -device e1000,netdev=snet1,mac=DE:AD:BE:EF:10:02 -netdev tap,id=snet1,script=$DIR/scripts/qemu-ifup-stackbr1.sh \
        -device e1000,netdev=snet2,mac=DE:AD:BE:EF:10:03 -netdev tap,id=snet2,script=$DIR/scripts/qemu-ifup-stackbr2.sh &
}

test_network_vm() {
    # Network Node
    sudo qemu-kvm -hda $DIR/images/Fedora-x86_64-20-20141008-sda-network.qcow2 \
        -m 512 -vnc :2 \
        -device e1000,netdev=snet0,mac=DE:AD:BE:EF:10:07 -netdev tap,id=snet0,script=$DIR/scripts/qemu-ifup-stackbr0.sh \
        -device e1000,netdev=snet1,mac=DE:AD:BE:EF:10:08 -netdev tap,id=snet1,script=$DIR/scripts/qemu-ifup-stackbr1.sh \
        -device e1000,netdev=snet2,mac=DE:AD:BE:EF:10:09 -netdev tap,id=snet2,script=$DIR/scripts/qemu-ifup-stackbr2.sh &
}

start_vms() {
    test_controller_vm
    sleep 5s
    test_compute_vm
    sleep 5s
    test_network_vm
    sleep 1s
}

########### Main ###########
case $1 in
    start)
        test_bridge
        test_dnsmasq
        test_nat
        start_vms
        ;;
    stop)
        destroy_bridges
        # dnsmasq is normally killed along with its bridge
        destroy_dnsmasq
        echo "Please login and shutdown each vm"
        ;;
    test-bridge)
        test_bridge
        ;;
    test-nat)
        test_nat
        ;;
    test-dnsmasq)
        test_dnsmasq
        ;;
    test-compute)
        test_bridge
        test_dnsmasq
        test_nat
        test_compute_vm
        ;;
    test-vm)
        test_bridge
        test_dnsmasq
        test_nat
        test_vm
        ;;
    *)
        echo "Usage: $(basename $0) (start | stop | test-[bridge,dnsmasq,nat,vm,compute])"
esac
scripts/qemu-ifup-stackbr0.sh
#!/bin/sh
set -x
switch=stackbr0
echo "$1"
if [ -n "$1" ]; then
    /usr/bin/sudo /usr/sbin/tunctl -u `whoami` -t $1
    /usr/bin/sudo /sbin/ip link set $1 up
    sleep 0.5s
    /usr/bin/sudo /usr/sbin/brctl addif $switch $1
    exit 0
else
    echo "Error: no interface specified"
    exit 1
fi
Common Requirements
Configure /etc/hosts (or DNS) on each node so the nodes can resolve each other:

vim /etc/hosts

192.168.200.209 controller
192.168.200.212 compute
192.168.200.215 network
Configure Network Interfaces
Network Node
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-ens3
# OpenStack Management Network
DEVICE=ens3
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.10.10.213
NETMASK=255.255.255.0
#GATEWAY=10.10.10.1
EOF

cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-ens4
# OpenStack Data & VMs
DEVICE=ens4
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.20.20.214
NETMASK=255.255.255.0
#GATEWAY=10.20.20.1
EOF

cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-ens5
DEVICE=ens5
BOOTPROTO=static
ONBOOT=yes
TYPE=Ethernet
EOF

cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.200.215
NETMASK=255.255.255.0
ONBOOT=yes
GATEWAY=192.168.200.1
DNS1=192.168.200.1
DNS2=8.8.8.8
#NM_CONTROLLED=yes
EOF
Disable NetworkManager (on Fedora)
# This fixes "RTNETLINK: file exists" errors when starting network.service
rm /etc/sysconfig/network-scripts/ens3
systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
systemctl start network.service
systemctl enable network.service
rm /etc/sysconfig/network-scripts/ifcfg-enp5s0f1
Disable firewalld (on Fedora)
systemctl stop firewalld
systemctl disable firewalld
yum install iptables-services
# Create the file below first, otherwise starting iptables will fail
touch /etc/sysconfig/iptables
systemctl enable iptables && systemctl start iptables
Disable selinux
setenforce 0
# Make the change permanent in /etc/selinux/config:
cat /etc/selinux/config
SELINUX=disabled
SELINUXTYPE=targeted
SETLOCALDEFS=0
Standard group
The "standard" group includes utilities like wget, unzip, etc.

sudo yum -y install @standard
sudo yum -y install java-1.7.0-openjdk
sudo yum -y install @c-development
sudo yum -y install libxml2-devel libxslt-devel bc
Maven:
yum install maven -y   # java etc. are also installed
cd /opt
sudo wget http://apache.openmirror.de/maven/maven-3/3.2.3/binaries/apache-maven-3.2.3-bin.tar.gz
sudo tar -xvzf apache-maven-3.2.3-bin.tar.gz
JAVA_HOME
vim .bash_profile
# User specific environment and startup programs
export M2_HOME=/opt/apache-maven-3.2.3
export M2=$M2_HOME/bin
export MAVEN_OPTS='-Xms256m -XX:MaxPermSize=1024m -Xmx1024m'
export JVM_ROOT=/usr/lib/jvm
export JAVA_HOME=$JVM_ROOT/java
PATH=$PATH:$HOME/.local/bin:$HOME/bin
export PATH=$JAVA_HOME:$M2:$PATH
source .bash_profile
Copy to other nodes
sudo scp -rp root@controller:/opt/apache-maven* /opt/
scp -rp fedora@controller:~/.bash_profile ./.bash_profile
source .bash_profile
Misc
sudo yum install python-pip.noarch
pip install git-review
Controller Services
On Controller Node
yum install bridge-utils ntp mariadb
su -c "yum install @virtualization"
MySQL (MariaDB)

sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
# or vim /etc/mysql/my.cnf.d/server.cnf

[mysqld]
#skip-networking
#bind-address = 127.0.0.1
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

systemctl start mysqld.service
mysqladmin -u root password mysqlroot
Create OpenStack Databases:
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL ON neutron.* TO 'neutronUser'@'%' IDENTIFIED BY 'neutronPass';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL ON nova.* TO 'novaUser'@'%' IDENTIFIED BY 'novaPass';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL ON cinder.* TO 'cinderUser'@'%' IDENTIFIED BY 'cinderPass';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> quit
Bye
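As a quick sanity check (not part of the official install), the grants can be verified from any node that can reach the controller, using the credentials created above:

mysql -h controller -u novaUser -pnovaPass -e 'SHOW DATABASES;'
mysql -h controller -u neutronUser -pneutronPass -e 'SHOW DATABASES;'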
Install OpenStack
yum install yum-plugin-priorities
yum install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm
yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install openstack-utils
yum upgrade
Install RabbitMQ
yum install -y rabbitmq-server
Keystone
yum install openstack-keystone python-keystoneclient
openstack-config --set /etc/keystone/keystone.conf \
database connection mysql://keystoneUser:keystonePass@controller/keystone
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL ON keystone.* TO 'keystoneUser'@'%' IDENTIFIED BY 'keystonePass';
Query OK, 0 rows affected (0.00 sec)

keystone-manage db_sync keystone
BUG: if keystone initialization fails with "version should be an integer":

keystone-manage --debug db_sync keystone
sudo chmod 777 /var/log/keystone/keystone.log
sudo /usr/bin/openstack-db --drop --service keystone
sudo openstack-db --init --service keystone --password keystonePass
Define an authorization token to use as a shared secret between the Identity Service and other OpenStack services. Use openssl to generate a random token and store it in the configuration file:
ADMIN_TOKEN=$(openssl rand -hex 10)
echo $ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
By default, Keystone uses PKI tokens. Create the signing keys and certificates and restrict access to the generated data:
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl
service openstack-keystone start
chkconfig openstack-keystone on
Define users, tenants, and roles
After you install the Identity Service, set up users, tenants, and roles to authenticate against. These are used to allow access to services and endpoints, described in the next section.
Typically, you would indicate a user and password to authenticate with the Identity Service. At this point, however, you have not created any users, so you have to use the authorization token created in an earlier step; see the section called "Install the Identity Service" for further details. You can pass this with the --os-token option to the keystone command or set the OS_SERVICE_TOKEN environment variable. Set OS_SERVICE_TOKEN, as well as OS_SERVICE_ENDPOINT to specify where the Identity Service is running. Replace ADMIN_TOKEN with your authorization token.
export OS_SERVICE_TOKEN=ADMIN_TOKEN
export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
Create an administrative user
$ keystone user-create --name=admin --pass=ADMIN_PASS --email=ADMIN_EMAIL
Replace ADMIN_PASS with a secure password and replace ADMIN_EMAIL with an email address to associate with the account.
Create the admin role
$ keystone role-create --name=admin
Create the admin tenant:
$ keystone tenant-create --name=admin --description="Admin Tenant"
You must now link the admin user, admin role, and admin tenant together using the user-role-add option:
$ keystone user-role-add --user=admin --tenant=admin --role=admin
Link the admin user, _member_ role, and admin tenant:
$ keystone user-role-add --user=admin --role=_member_ --tenant=admin
Create a normal user
Follow these steps to create a normal user and tenant, and link them to the special member role. You will use this account for daily non-administrative interaction with the OpenStack cloud. You can also repeat this procedure to create additional cloud users with different usernames and passwords. Skip the tenant creation step when creating these users.
Create the demo user:
$ keystone user-create --name=demo --pass=DEMO_PASS --email=DEMO_EMAIL
Replace DEMO_PASS with a secure password and replace DEMO_EMAIL with an email address to associate with the account.
Create the demo tenant
Do not repeat this step when adding additional users:
$ keystone tenant-create --name=demo --description="Demo Tenant"
Link the demo user, _member_ role, and demo tenant
$ keystone user-role-add --user=demo --role=_member_ --tenant=demo
Create a service tenant
OpenStack services also require a username, tenant, and role to access other OpenStack services. In a basic installation, OpenStack services typically share a single tenant named service.
You will create additional usernames and roles under this tenant as you install and configure each service.
Create the service tenant:
$ keystone tenant-create --name=service --description="Service Tenant"
Define services and API endpoints
So that the Identity Service can track which OpenStack services are installed and where they are located on the network, you must register each service in your OpenStack installation. To register a service, run these commands:
- keystone service-create: describes the service.
- keystone endpoint-create: associates API endpoints with the service.
You must also register the Identity Service itself. Use the OS_SERVICE_TOKEN environment variable, as set previously, for authentication.
1. Create a service entry for the Identity Service:
keystone service-create --name=keystone --type=identity \ --description="OpenStack Identity"
2. Specify an API endpoint for the Identity Service by using the returned service ID
When you specify an endpoint, you provide URLs for the public API, internal API, and admin API. In this guide, the controller host name is used. Note that the Identity Service uses a different port for the admin API.
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://controller:5000/v2.0 \
--internalurl=http://controller:5000/v2.0 \
--adminurl=http://controller:35357/v2.0
Verify the Identity Service installation
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
You can now use regular user name-based authentication.
Request an authentication token by using the admin user and the password you chose for that user:
keystone --os-username=admin --os-password=ADMIN_PASS \ --os-auth-url=http://controller:35357/v2.0 token-get
In response, you receive a token paired with your user ID. This verifies that the Identity Service is running on the expected endpoint and that your user account is established with the expected credentials.
Verify that authorization behaves as expected. To do so, request authorization on a tenant:
keystone --os-username=admin --os-password=ADMIN_PASS \ --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 \ token-get
In response, you receive a token that includes the ID of the tenant that you specified. This verifies that your user account has an explicitly defined role on the specified tenant and the tenant exists as expected.
You can also set your --os-* variables in your environment to simplify command-line usage. Set up an admin-openrc.sh file with the admin credentials and admin endpoint:
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
Source this file to read in the environment variables:
source admin-openrc.sh
Verify that your admin-openrc.sh file is configured correctly. Run the same command without the --os-* arguments:
keystone token-get
The command returns a token and the ID of the specified tenant. This verifies that you have configured your environment variables correctly.
Verify that your admin account has authorization to perform administrative commands:
keystone user-list
keystone user-role-list --user admin --tenant admin
Openstack Python clients
- Block Storage (cinder): python-cinderclient. Create and manage volumes.
- Compute (nova): python-novaclient. Create and manage images, instances, and flavors.
- Database Service (trove): python-troveclient. Create and manage databases.
- Identity (keystone): python-keystoneclient. Create and manage users, tenants, roles, endpoints, and credentials.
- Image Service (glance): python-glanceclient. Create and manage images.
- Networking (neutron): python-neutronclient. Configure networks for guest servers. This client was previously called quantum.
- Object Storage (swift): python-swiftclient. Gather statistics, list items, update metadata, and upload, download, and delete files stored by the Object Storage service. Gain access to an Object Storage installation for ad hoc processing.
- Orchestration (heat): python-heatclient. Launch stacks from templates, view details of running stacks including events and resources, and update and delete stacks.
- Telemetry (ceilometer): python-ceilometerclient. Create and collect measurements across OpenStack.
Install with pip
pip install python-PROJECTclient
Create a file with the default environment variables (user, password, tenant, auth URL):

vim admin-openrc.sh

cat admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=adminPass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
source admin-openrc.sh
Similarly, create demo-openrc.sh
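For example, demo-openrc.sh could look like the following (DEMO_PASS is a placeholder for whatever password you chose for the demo user):

export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://controller:5000/v2.0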
Image Service Glance:
Installation:
The Image Service provides the glance-api and glance-registry services, each with its own configuration file.
yum install openstack-glance python-glanceclient
openstack-config --set /etc/glance/glance-api.conf database connection mysql://glanceUser:glancePass@controller/glance
openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glanceUser:glancePass@controller/glance
Create Database user:
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL ON glance.* TO 'glanceUser'@'%' IDENTIFIED BY 'glancePass';
Query OK, 0 rows affected (0.00 sec)
Create Database tables:
su -s /bin/sh -c "glance-manage db_sync" glance BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1090648 ** Check log files or --debug output Workaround: Set db_enforce_mysql_charset=False in /etc/glance/glance-api.conf glance-manage db_sync glance
Glance user
Create a glance user that the Image Service can use to authenticate with the Identity service. Choose a password and specify an email address for the glance user. Use the service tenant and give the user the admin role:
Note: source admin-openrc.sh
keystone user-create --name=glance --pass=glancePass --email=glance@example.com
keystone user-role-add --user=glance --tenant=service --role=admin
Configure the Image Service to use the Identity Service for authentication.
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password glancePass
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password glancePass
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
Register the service and create the endpoint:
Register the Image Service with the Identity service so that other OpenStack services can locate it.
$ keystone service-create --name=glance --type=image \
--description="OpenStack Image Service"
$ keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}') \
--publicurl=http://controller:9292 \
--internalurl=http://controller:9292 \
--adminurl=http://controller:9292
Start the glance-api and glance-registry services
and configure them to start when the system boots:
sudo service openstack-glance-api start
service openstack-glance-registry start
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on
Verify the Image Service installation
If there are errors, check /var/log/glance/{api,registry}.log for details.
Download the image into a dedicated directory using wget or curl:
$ mkdir /tmp/images
$ cd /tmp/images/
$ wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
Upload the image to the Image Service:
glance image-create --name=IMAGE_LABEL --disk-format=FILE_FORMAT --container-format=CONTAINER_FORMAT --is-public=ACCESS_VALUE < IMAGE_FILE

$ source admin-openrc.sh
$ glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 --container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img
Alternative:
$ glance image-create --name="cirros-0.3.2-x86_64" --disk-format=qcow2 \ --container-format=bare --is-public=true \ --copy-from http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
Confirm image upload:
glance image-list
rm -r /tmp/images
Nova Control
This section installs the Compute controller services. One or more compute nodes (hypervisor, nova-compute) are configured separately later and can be added to scale horizontally.
Install the Compute packages for the controller node.
yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor \ openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \ python-novaclient
Configure Nova
Compute stores information in a database. In this guide, we use a MySQL database on the controller node. Configure Compute with the database location and credentials. Replace NOVA_DBPASS with the password for the database that you will create in a later step.
openstack-config --set /etc/nova/nova.conf \ database connection mysql://novaUser:novaPass@controller/nova
Message broker
Set these configuration keys to configure Compute to use the Qpid message broker:
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
Set these configuration keys to configure Compute to use the RabbitMQ message broker:
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend nova.openstack.common.rpc.impl_kombu
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
Set the my_ip, vncserver_listen, and vncserver_proxyclient_address configuration options to the management interface IP address of the controller node:
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.10.10.207
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 10.10.10.207
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.10.10.207
Create a nova database user:
Use the password you created previously to log in as root.
mysql -u root -p

mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'novaUser'@'localhost' IDENTIFIED BY 'novaPass';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'novaUser'@'%' IDENTIFIED BY 'novaPass';
Create the Compute service tables:
su -s /bin/sh -c "nova-manage db sync" nova
**ERROR again**
nova-manage --debug db sync nova
cat /var/log/nova/nova-manage.log
Resolution:
sudo /usr/bin/openstack-db --drop --service nova
sudo openstack-db --init --service nova --password novaPass
nova-manage db sync
Create a nova user
that Compute uses to authenticate with the Identity Service. Use the service tenant and give the user the admin role:
source admin-openrc.sh
$ keystone user-create --name=nova --pass=novaPass --email=nova@example.com
$ keystone user-role-add --user=nova --tenant=service --role=admin
Configure Compute to use these credentials with the Identity Service running on the controller
Replace NOVA_PASS with your Compute password.

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password novaPass
Register Compute with the Identity Service
so that other OpenStack services can locate it. Register the service and specify the endpoint:
keystone service-create --name=nova --type=compute \
--description="OpenStack Compute"
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ compute / {print $2}') \
--publicurl=http://controller:8774/v2/%\(tenant_id\)s \
--internalurl=http://controller:8774/v2/%\(tenant_id\)s \
--adminurl=http://controller:8774/v2/%\(tenant_id\)s
Start Compute services
and configure them to start when the system boots:
service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on
To verify your configuration, list available images
nova --debug image-list

ERROR: the management IPs set above were wrong. Fix them, then restart all nova services!
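A sketch of how to double-check the addresses and restart the services (it assumes the standard SysV service names used elsewhere in this guide):

openstack-config --get /etc/nova/nova.conf DEFAULT my_ip
openstack-config --get /etc/nova/nova.conf DEFAULT vncserver_listen
for svc in api cert consoleauth scheduler conductor novncproxy; do
    service openstack-nova-$svc restart
done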
Nova Compute / Hypervisor
Nova Compute / Hypervisor - Configure compute node
After you configure the Compute service on the controller node, you must configure another system as a compute node. The compute node receives requests from the controller node and hosts virtual machine instances. You can run all services on a single node, but the examples in this guide use separate systems. This makes it easy to scale horizontally by adding additional Compute nodes following the instructions in this section.
The Compute service relies on a hypervisor to run virtual machine instances. OpenStack can use various hypervisors, but this guide uses KVM.
Install the Compute packages:
yum install openstack-nova-compute
Edit the /etc/nova/nova.conf configuration file:
openstack-config --set /etc/nova/nova.conf database connection mysql://novaUser:novaPass@controller/nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password novaPass
Message Broker
Configure the Compute service to use the RabbitMQ message broker by setting these configuration keys:

# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
# openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
Configure Compute to provide remote console access to instances.
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.10.10.210
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.10.10.210
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver libvirt.LibvirtDriver
Specify the host that runs the Image Service.
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller
Set hypervisor type
Determine whether your system's processor and/or hypervisor support hardware acceleration for virtual machines.
Run the following command:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your system supports hardware acceleration which typically requires no additional configuration.
If this command returns a value of zero, your system does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
# openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
Start the Compute service and its dependencies
Configure them to start automatically when the system boots.
For RHEL or CentOS:
# service libvirtd start
# service messagebus start
# service openstack-nova-compute start
# chkconfig libvirtd on
# chkconfig messagebus on
# chkconfig openstack-nova-compute on
For Fedora:
service libvirtd start
service dbus start
service openstack-nova-compute start
chkconfig libvirtd on
chkconfig dbus on
chkconfig openstack-nova-compute on
Troubleshooting
Compute node status "down"
Check NTP on all nodes.
Run /usr/bin/nova-compute --debug --config-file /etc/nova/nova.conf --logfile /var/log/nova/compute.log
Fix /usr/lib/systemd/system/openstack-nova-compute.service to contain the above options:

cp /usr/lib/systemd/system/openstack-nova-compute.service /usr/lib/systemd/system/openstack-nova-compute.service.orig
vim /usr/lib/systemd/system/openstack-nova-compute.service
systemctl daemon-reload
See /var/log/nova/compute.log for hints:
- add instances_path=$state_path (see the sketch below)
- add state_path
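A minimal sketch of setting those options with openstack-config; the paths are assumptions, adjust them to your installation:

openstack-config --set /etc/nova/nova.conf DEFAULT state_path /var/lib/nova
openstack-config --set /etc/nova/nova.conf DEFAULT instances_path '$state_path/instances'
service openstack-nova-compute restart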
Neutron Networking
Neutron Networking - Controller node
Before you configure OpenStack Networking (neutron), you must create a database and Identity service credentials including a user and service.
Connect to the database as the root user, create the neutron database, and grant the proper access to it:
Replace NEUTRON_DBPASS with a suitable password.
$ mysql -u root -p

mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutronUser'@'localhost' IDENTIFIED BY 'neutronPass';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutronUser'@'%' IDENTIFIED BY 'neutronPass';
Create Identity service credentials for Networking:
Create the neutron user:
Replace NEUTRON_PASS with a suitable password and neutron@example.com with a suitable e-mail address.
keystone user-create --name neutron --pass neutronPass --email thuydang.de@gmail.com
Link the neutron user to the service tenant and admin role:
keystone user-role-add --user neutron --tenant service --role admin
Create the neutron service:
keystone service-create --name neutron --type network --description "OpenStack Networking"
Create the service endpoint
$ keystone endpoint-create \
--service-id $(keystone service-list | awk '/ network / {print $2}') \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696
Install the Networking components
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient
Configure the Networking server component
The Networking server component configuration includes the database, authentication mechanism, message broker, topology change notifier, and plug-in.
Configure Networking to use the database:
Replace NEUTRON_DBPASS with a suitable password.
# openstack-config --set /etc/neutron/neutron.conf database connection \ mysql://neutronUser:neutronPass@controller/neutron
Configure Networking to use the Identity service for authentication:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutronPass
Configure Networking to use the message broker:
RabbitMQ
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
QPID
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
qpid_hostname controller
Configure Networking to notify Compute about network topology changes
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
nova_url http://controller:8774/v2
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
nova_admin_username nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
nova_admin_password novaPass
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
nova_admin_auth_url http://controller:35357/v2.0
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
[Note] We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
openstack-config --set /etc/neutron/neutron.conf DEFAULT \ verbose True
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances. However, the controller node does not need the OVS agent or service because it does not handle instance network traffic.
Run the following commands:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
Configure Compute to use Networking
By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
Run the following commands. Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

# openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller:9696
# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password NEUTRON_PASS
# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller:35357/v2.0
# openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
Restart the Compute services:
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart
Start the Networking service
and configure it to start when the system boots:
service neutron-server start
chkconfig neutron-server on
[Note] Unlike other services, Networking typically does not require a separate step to populate the database because the neutron-server service populates it automatically. However, the packages for these distributions sometimes require running the neutron-db-manage command prior to starting the neutron-server service. We recommend attempting to start the service before manually populating the database. If the service returns database errors, perform the following operations:
Configure Networking to use long plug-in names:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
core_plugin neutron.plugins.ml2.plugin.Ml2Plugin
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
service_plugins neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
Populate the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugin.ini upgrade head" neutron
Attempt to start the neutron-server service again. You can return the core_plugin and service_plugins configuration keys to short plug-in names.
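As a quick sanity check that neutron-server is up and reachable, the extension list can be requested from the controller with admin credentials (a verification step, not required for the install):

source admin-openrc.sh
neutron ext-list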
Neutron Networking - Network Node
Prerequisites
Before you configure OpenStack Networking, you must enable certain kernel networking functions.
Edit /etc/sysctl.conf to contain the following:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Implement the changes:
# sysctl -p
To install the Networking components
yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-openvswitch
To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.
Configure Networking to use the Identity service for authentication:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutronPass
Configure Networking to use the message broker:
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT \
service_plugins router
[Note] We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
To configure the Layer-3 (L3) agent
The Layer-3 (L3) agent provides routing services for instance virtual networks.
Run the following commands:
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True
[Note] We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.
To configure the DHCP agent
The DHCP agent provides DHCP services for instance virtual networks.
Run the following commands:
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
[Note] We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/dhcp_agent.ini to assist with troubleshooting.
Tunneling protocols such as generic routing encapsulation (GRE) include additional packet headers that increase overhead and decrease space available for the payload or user data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using the default Ethernet maximum transmission unit (MTU) of 1500 bytes. Internet protocol (IP) networks contain the path MTU discovery (PMTUD) mechanism to detect end-to-end MTU and adjust packet size accordingly. However, some operating systems and networks block or otherwise lack support for PMTUD causing performance degradation or connectivity failure.
Ideally, you can prevent these problems by enabling jumbo frames on the physical network that contains your tenant virtual networks. Jumbo frames support MTUs up to approximately 9000 bytes which negates the impact of GRE overhead on virtual networks. However, many network devices lack support for jumbo frames and OpenStack administrators often lack control of network infrastructure. Given the latter complications, you can also prevent MTU problems by reducing the instance MTU to account for GRE overhead. Determining the proper MTU value often takes experimentation, but 1454 bytes works in most environments. You can configure the DHCP server that assigns IP addresses to your instances to also adjust the MTU.
[Note] Some cloud images such as CirrOS ignore the DHCP MTU option.
Run the following command:
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \ dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
Create and edit the /etc/neutron/dnsmasq-neutron.conf file and add the following keys:
dhcp-option-force=26,1454
log-facility = /var/log/neutron/dnsmasq.log
log-dhcp
Kill any existing dnsmasq processes:
killall dnsmasq
To configure the metadata agent
The metadata agent provides configuration information such as credentials for remote access to instances.
Run the following commands:
Replace NEUTRONPASS with the password you chose for the neutron user in the Identity service. Replace METADATASECRET with a suitable secret for the metadata proxy.
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password neutronPass
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret metadataSecret
[Note] We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.
[Note] Perform the next two steps on the controller node.
On the controller node, configure Compute to use the metadata service. Replace METADATA_SECRET with the secret you chose for the metadata proxy.

openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret metadataSecret

On the controller node, restart the Compute API service:

service openstack-nova-api restart
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build virtual networking framework for instances.
Run the following commands. Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels (data) network interface on your network node. In this testbed the network node uses 10.20.20.214 on the data network.

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.20.20.214
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS. The external bridge br-ex handles external instance network traffic within OVS. The external bridge requires a port on the physical external network interface to provide instances with external network access. In essence, this port bridges the virtual and physical external networks in your environment.
Start the OVS service and configure it to start when the system boots:
service openvswitch start
chkconfig openvswitch on
Configure Interfaces
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-ens5
DEVICE=ens5
BOOTPROTO=static
ONBOOT=yes
TYPE=Ethernet
EOF

cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.200.215
NETMASK=255.255.255.0
GATEWAY=192.168.200.1
EOF
Add the integration bridge:
ovs-vsctl add-br br-int
Add the external bridge:
ovs-vsctl add-br br-ex
ovs-vsctl show
c993ff93-7d03-42e2-8566-331d10442686
Bridge br-int
Port br-int
Interface br-int
type: internal
Bridge br-ex
Port "ens2"
Interface "ens2"
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.0.1"
Add a port to the external bridge that connects to the physical external network interface:
Replace INTERFACE_NAME with the actual interface name, for example eth2 or ens256 (the interface on the external 192.168.x.x network).
# ovs-vsctl add-port br-ex INTERFACE_NAME
# ifup br-ex
Hardcode the bridge with ifcfg-xxx: http://acidborg.wordpress.com/2010/01/20/how-to-configure-a-network-bridge-in-red-hat-fedora/
[Note] Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.
To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K INTERFACE_NAME gro off
Network settings on Neutron Node
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.200.215
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no
EOF
ifdown br-ex
ifup br-ex
# http://blog.oddbit.com/2014/05/20/fedora-and-ovs-bridge-interfac/
ovs-vsctl add-port br-ex ens5
ovs-vsctl show
e5283c3d-fae9-4030-bb03-b242ab77a1be
Bridge br-int
Port br-int
Interface br-int
type: internal
Bridge br-ex
Port "ens5"
Interface "ens5"
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.3.0"
ping 192.168.200.1 should work
Problems / TODOs
br-ex is not brought up automatically at boot; for now, bring it up manually (a possible workaround is sketched below):

ifup br-ex
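One possible workaround (untested here; ens5 is the external interface used elsewhere on this network node): bring the bridge and its port up from rc.local at boot.

cat <<EOF >> /etc/rc.d/rc.local
ifup br-ex
ovs-vsctl --may-exist add-port br-ex ens5
EOF
chmod +x /etc/rc.d/rc.local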
To finalize the installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using the ML2 plug-in, for example, the symbolic link must point to /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following commands:
# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Fix start script to use ml2_conf.ini
Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:
On Fedora:
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
Start the Networking services
and configure them to start when the system boots:
service neutron-openvswitch-agent start
service neutron-l3-agent start
service neutron-dhcp-agent start
service neutron-metadata-agent start
chkconfig neutron-openvswitch-agent on
chkconfig neutron-l3-agent on
chkconfig neutron-dhcp-agent on
chkconfig neutron-metadata-agent on
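After the agents come up, they should report as alive when queried from the controller with admin credentials (a verification sketch):

source admin-openrc.sh
neutron agent-list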
Neutron Networking - Compute Node
Configure Compute Node
Prerequisites
Before you configure OpenStack Networking, you must enable certain kernel networking functions.
Edit /etc/sysctl.conf to contain the following:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Implement the changes:
# sysctl -p
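As a quick verification (optional), sysctl should now report both values as 0:
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter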
To install the Networking components
# yum install openstack-neutron-ml2 openstack-neutron-openvswitch
To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.
Configure Networking to use the Identity service for authentication:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutronPass
Configure Networking to use the message broker:
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
[Note] Note
We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
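Using the same openstack-config pattern as above, one way to set this (a sketch) is:
openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True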
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
Run the following commands. Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels (data) network interface on your compute node; the upstream guide uses 10.0.1.31, while this testbed uses 10.20.20.211 on the compute node.
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.20.20.211
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS.
Start the OVS service and configure it to start when the system boots:
# service openvswitch start
# chkconfig openvswitch on
Add the integration bridge:
# ovs-vsctl add-br br-int
To configure Compute to use Networking
By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
Run the following commands. Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller:9696
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutronPass
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
[Note] Note
By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
To finalize the installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using the ML2 plug-in, for example, the symbolic link must point to /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it:
# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Fix start script to use /etc/neutron/plugin.ini
Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:
On Fedora
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
Restart the Compute service:
# service openstack-nova-compute restart
Start the Open vSwitch (OVS) agent and configure it to start when the system boots:
# service neutron-openvswitch-agent start
# chkconfig neutron-openvswitch-agent on
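Once the agent is running, a quick cross-check from the controller (output columns vary by release) is:
neutron agent-list
The Open vSwitch agent for the compute host should be listed as alive.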
Create initial Network
Before launching your first instance, you must create the necessary virtual network infrastructure to which the instance will connect, including the external network and tenant network. See Figure 7.1, “Initial networks”. After creating this infrastructure, we recommend that you verify connectivity and resolve any issues before proceeding further.
External Network
The external network typically provides internet access for your instances. By default, this network only allows internet access from instances using Network Address Translation (NAT). You can enable internet access to individual instances using a floating IP address and suitable security group rules. The admin tenant owns this network because it provides external network access for multiple tenants. You must also enable sharing to allow access by those tenants.
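As an illustration of the floating IP workflow described above (a sketch for later, once ext-net exists and an instance is running; demo-instance1 and the address are placeholders):
neutron floatingip-create ext-net
nova add-floating-ip demo-instance1 192.168.200.102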
Perform these commands on the controller node.
To create the external network
Source the admin tenant credentials:
$ source admin-openrc.sh
Create the network:
$ neutron net-create ext-net --shared --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 893aebb9-1c1e-48be-8908-6b947f3237b3 |
| name                      | ext-net                              |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 54cd044c64d5408b83f843d63624e0d8     |
+---------------------------+--------------------------------------+
Error
Connection to neutron failed: [Errno 111] Connection refused
Get other debugging info:
lsof -i :9696
neutron agent-list =⇒ connection to neutron failed maximum attempts reached
netstat -lnpt | grep 9696 =⇒ No output
ps -ef | grep 5249 ⇒ root 20407 3185 0 10:22 tty1 00:00:00 grep --color=auto 5249
service neutron-server status ⇒ neutron-server start/pre-start, process 4865
iptables-save | grep 9696 ⇒ no output
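The neutron-server log usually gives the underlying reason (assuming the default RDO log location):
tail -n 50 /var/log/neutron/server.log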
Solution
Disable the l3-agent plugin from the previous step (DB fix). With the full plugin class path:
openstack-config --set /etc/neutron/neutron.conf DEFAULT \ service_plugins neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
Or with the router plugin alias:
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
Error: RabbitMQ
Debug:
systemctl | grep failed
rabbitmq-server.service   loaded failed failed   RabbitMQ broker
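A reasonable first step (sketch) is to restart the broker and read its unit log; the epmd and /etc/hosts workarounds near the end of this document cover the common root causes:
systemctl restart rabbitmq-server
systemctl status rabbitmq-server -l
journalctl -u rabbitmq-server --no-pager | tail -n 50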
To create a subnet on the external network
Like a physical network, a virtual network requires a subnet assigned to it. The external network shares the same subnet and gateway associated with the physical network connected to the external interface on the network node. You should specify an exclusive slice of this subnet for router and floating IP addresses to prevent interference with other devices on the external network.
Replace FLOATING_IP_START and FLOATING_IP_END with the first and last IP addresses of the range that you want to allocate for floating IP addresses. Replace EXTERNAL_NETWORK_CIDR with the subnet associated with the physical network. Replace EXTERNAL_NETWORK_GATEWAY with the gateway associated with the physical network, typically the ".1" IP address. You should disable DHCP on this subnet because instances do not connect directly to the external network and floating IP addresses require manual assignment.
Create the subnet:
$ neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \
--disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR
For example, using 203.0.113.0/24 with floating IP address range 203.0.113.101 to 203.0.113.200:
$ neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=203.0.113.101,end=203.0.113.200 \
--disable-dhcp --gateway 203.0.113.1 203.0.113.0/24
Our subnet:
neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=192.168.200.101,end=192.168.200.200 \
--disable-dhcp --gateway 192.168.200.1 192.168.200.0/24
Created a new subnet:
+------------------+--------------------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------------------+
| allocation_pools | {"start": "192.168.200.101", "end": "192.168.200.200"} |
| cidr | 192.168.200.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.200.1 |
| host_routes | |
| id | d7657925-00db-4beb-be07-f6e71338c3f1 |
| ip_version | 4 |
| name | ext-subnet |
| network_id | 3e466eec-a9e9-49e7-84e3-f3b13d9daada |
| tenant_id | 1131460a56374ec1b1cd542689c6e95c |
+------------------+--------------------------------------------------------+
Create Tenant Network
The tenant network provides internal network access for instances. The architecture isolates this type of network from other tenants. The demo tenant owns this network because it only provides network access for instances within it.
Perform these commands on the controller node.
To create the tenant network
Source the demo tenant credentials:
$ source demo-openrc.sh
Create the network:
$ neutron net-create demo-net
Created a new network:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| id             | ac108952-6096-4243-adf4-bb6615b3de28 |
| name           | demo-net                             |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | cdef0071a0194d19ac6bb63802dc9bae     |
+----------------+--------------------------------------+
Like the external network, your tenant network also requires a subnet attached to it. You can specify any valid subnet because the architecture isolates tenant networks. Replace TENANT_NETWORK_CIDR with the subnet you want to associate with the tenant network. Replace TENANT_NETWORK_GATEWAY with the gateway you want to associate with this network, typically the ".1" IP address. By default, this subnet uses DHCP so your instances can obtain IP addresses.
Optional: Want to make the network “shared”?
neutron net-update demo-net --shared   (the same option also works with the legacy quantum client)
To create a subnet on the tenant network
Create the subnet:
$ neutron subnet-create demo-net --name demo-subnet \
--gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR
Example using 192.168.1.0/24:
$ neutron subnet-create demo-net --name demo-subnet \
--gateway 192.168.1.1 192.168.1.0/24
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} |
| cidr | 192.168.1.0/24 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.1.1 |
| host_routes | |
| id | 69d38773-794a-4e49-b887-6de6734e792d |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | demo-subnet |
| network_id | ac108952-6096-4243-adf4-bb6615b3de28 |
| tenant_id | cdef0071a0194d19ac6bb63802dc9bae |
+-------------------+------------------------------------------------------+
To create a router on the tenant network and attach the external and tenant networks to it
A virtual router passes network traffic between two or more virtual networks. Each router requires one or more interfaces and/or gateways that provide access to specific networks. In this case, you will create a router and attach your tenant and external networks to it.
Create the router:
$ neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 635660ae-a254-4feb-8993-295aa9ec6418 |
| name                  | demo-router                          |
| status                | ACTIVE                               |
| tenant_id             | cdef0071a0194d19ac6bb63802dc9bae     |
+-----------------------+--------------------------------------+
Attach the router to the demo tenant subnet:
$ neutron router-interface-add demo-router demo-subnet
Added interface b1a894fd-aee8-475c-9262-4342afdc1b58 to router demo-router.
Attach the router to the external network by setting it as the gateway:
$ neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router
Resulting topology
Test network
On network node
With the gateway set, it should be possible to ping at least the external network gateway (192.168.200.1) from the router namespace, and possibly an Internet address (8.8.8.8) if upstream routing and NAT are properly configured:
ip netns
qrouter-ba605e63-3c54-402b-9bd7-3eba85d00080
qdhcp-9ce7f51f-dac6-453f-80ce-3391769d9990
[root@network ~]# ip netns exec qrouter-ba605e63-3c54-402b-9bd7-3eba85d00080 ip a
136: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
137: qg-8860da83-31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether fa:16:3e:57:0b:47 brd ff:ff:ff:ff:ff:ff
inet 192.168.101.2/22 brd 192.168.103.255 scope global qg-8860da83-31
inet6 fe80::f816:3eff:fe57:b47/64 scope link
valid_lft forever preferred_lft forever
[root@network ~]# ip netns exec qrouter-ba605e63-3c54-402b-9bd7-3eba85d00080 ping 192.168.100.1
PING 192.168.100.1 (192.168.100.1) 56(84) bytes of data.
64 bytes from 192.168.100.1: icmp_seq=1 ttl=255 time=2.22 ms
64 bytes from 192.168.100.1: icmp_seq=2 ttl=255 time=0.485 ms
[root@network ~]# ip netns exec qrouter-ba605e63-3c54-402b-9bd7-3eba85d00080 ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=47 time=13.3 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=47 time=12.6 ms
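Adapted to this testbed (the router namespace ID will differ; substitute the one shown by ip netns):
ip netns | grep qrouter
ip netns exec qrouter-ROUTER_ID ping -c 3 192.168.200.1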
Horizon
Install the dashboard
Before you can install and configure the dashboard, meet the requirements in the section called “System requirements”.
[Note] Note
When you install only Object Storage and the Identity Service, even if you install the dashboard, it does not pull up projects and is unusable.
For more information about how to deploy the dashboard, see deployment topics in the developer documentation.
Install the dashboard on the node that can contact the Identity Service as root:
yum install memcached python-memcached mod_wsgi openstack-dashboard
Modify the value of CACHES['default']['LOCATION'] in /etc/openstack-dashboard/local_settings to match the ones set in /etc/sysconfig/memcached.
Open /etc/openstack-dashboard/local_settings and look for this line:
CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211'
}
}
[Note] Notes
The address and port must match the ones set in /etc/sysconfig/memcached.
If you change the memcached settings, you must restart the Apache web server for the changes to take effect.
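For example:
service httpd restart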
You can use back-ends other than memcached for session storage. Set the session back-end through the SESSION_ENGINE option.
To change the timezone, use the dashboard or edit the /etc/openstack-dashboard/local_settings file.
Change the following parameter: TIME_ZONE = "UTC"
Update ALLOWED_HOSTS in local_settings to include the addresses from which you wish to access the dashboard.
Edit /etc/openstack-dashboard/local_settings: ALLOWED_HOSTS = ['localhost', 'my-desktop']
This guide assumes that you are running the Dashboard on the controller node. You can easily run the dashboard on a separate server by changing the appropriate settings in local_settings.
Edit /etc/openstack-dashboard/local_settings and change OPENSTACK_HOST to the hostname of your Identity Service:
OPENSTACK_HOST = "controller"
Ensure that the SELinux policy of the system is configured to allow network connections to the HTTP server:
# setsebool -P httpd_can_network_connect on
Start the Apache web server and memcached:
service httpd start
service memcached start
chkconfig httpd on
chkconfig memcached on
You can now access the dashboard at http://controller/dashboard. Log in with credentials for any user that you created with the OpenStack Identity Service.
Cinder
Swift
Heat
Install the Orchestration service
Install the Orchestration module on the controller node:
yum install openstack-heat-api openstack-heat-engine openstack-heat-api-cfn
In the configuration file, specify the location of the database where the Orchestration service stores data. These examples use a MySQL database on the controller node with user heatUser and password heatPass; substitute your own credentials:
openstack-config --set /etc/heat/heat.conf database connection mysql://heatUser:heatPass@controller/heat
Use the password that you set previously to log in as root and create a heat database user:
$ mysql -u root -p
mysql> CREATE DATABASE heat;
mysql> GRANT ALL PRIVILEGES ON heat.* TO 'heatUser'@'localhost' IDENTIFIED BY 'heatPass';
mysql> GRANT ALL PRIVILEGES ON heat.* TO 'heatUser'@'%' IDENTIFIED BY 'heatPass';
Create the heat service tables:
su -s /bin/sh -c "heat-manage db_sync" heat [Note] Note Ignore DeprecationWarning errors. Configure the Orchestration Service to use the Qpid message broker: openstack-config --set /etc/heat/heat.conf DEFAULT qpid_hostname controller
Configure the Orchestration Service to use the Rabbit message broker:
openstack-config --set /etc/heat/heat.conf DEFAULT \
rpc_backend heat.openstack.common.rpc.impl_kombu
openstack-config --set /etc/heat/heat.conf DEFAULT \
rabbit_host controller
Create a heat user that the Orchestration service can use to authenticate with the Identity Service. Use the service tenant and give the user the admin role:
keystone user-create --name=heat --pass=heatPass --email=thuydang.de@gmail.com
keystone user-role-add --user=heat --tenant=service --role=admin
Run the following commands to configure the Orchestration service to authenticate with the Identity service:
openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_user heat
openstack-config --set /etc/heat/heat.conf keystone_authtoken admin_password heatPass
openstack-config --set /etc/heat/heat.conf ec2authtoken auth_uri http://controller:5000/v2.0
Register the Heat and CloudFormation APIs with the Identity Service so that other OpenStack services can locate these APIs. Register the services and specify the endpoints:
keystone service-create --name=heat --type=orchestration \
--description="Orchestration"
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ orchestration / {print $2}') \
--publicurl=http://controller:8004/v1/%\(tenant_id\)s \
--internalurl=http://controller:8004/v1/%\(tenant_id\)s \
--adminurl=http://controller:8004/v1/%\(tenant_id\)s
keystone service-create --name=heat-cfn --type=cloudformation \
--description="Orchestration CloudFormation"
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ cloudformation / {print $2}') \
--publicurl=http://controller:8000/v1 \
--internalurl=http://controller:8000/v1 \
--adminurl=http://controller:8000/v1
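Optionally verify the registrations:
keystone service-list
keystone endpoint-list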
Create the heat_stack_user role.
This role is used as the default role for users created by the Orchestration module. Run the following command to create the heat_stack_user role:
keystone role-create --name heat_stack_user
Configure the metadata and waitcondition servers' URLs.
Run the following commands to modify the [DEFAULT] section of the /etc/heat/heat.conf file:
openstack-config --set /etc/heat/heat.conf \
DEFAULT heat_metadata_server_url http://controller:8000
openstack-config --set /etc/heat/heat.conf \
DEFAULT heat_waitcondition_server_url http://controller:8000/v1/waitcondition
[Note] Note
The example uses the IP address of the controller (10.0.0.11) instead of the controller host name since our example architecture does not include a DNS setup. Make sure that the instances can resolve the controller host name if you choose to use it in the URLs.
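If instances cannot resolve the controller host name, a hedged alternative is to point these URLs at an IP address the instances can reach (CONTROLLER_IP is a placeholder):
openstack-config --set /etc/heat/heat.conf DEFAULT heat_metadata_server_url http://CONTROLLER_IP:8000
openstack-config --set /etc/heat/heat.conf DEFAULT heat_waitcondition_server_url http://CONTROLLER_IP:8000/v1/waitcondition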
Start the heat-api, heat-api-cfn and heat-engine services and configure them to start when the system boots:
service openstack-heat-api start
service openstack-heat-api-cfn start
service openstack-heat-engine start
chkconfig openstack-heat-api on
chkconfig openstack-heat-api-cfn on
chkconfig openstack-heat-engine on
Error
If heat-api does not start (a quick permissions workaround, then re-run the DB sync in debug mode):
chmod 777 /var/log/heat/heat-manage.log
su -s /bin/sh -c 'heat-manage --debug db_sync' heat
Verify the Orchestration service installation
To verify that the Orchestration service is installed and configured correctly, make sure that your credentials are set up correctly in the demo-openrc.sh file. Source the file, as follows:
source demo-openrc.sh
The Orchestration Module uses templates to describe stacks. To learn about the template languages, see the Template Guide in the Heat developer documentation.
Create a test template in the test-stack.yml file with the following content:
heat_template_version: 2013-05-23
description: Test Template
parameters:
  ImageID:
    type: string
    description: Image used to boot a server
  NetID:
    type: string
    description: Network ID for the server
resources:
  server1:
    type: OS::Nova::Server
    properties:
      name: "Test server"
      image: { get_param: ImageID }
      flavor: "m1.tiny"
      networks:
        - network: { get_param: NetID }
outputs:
  server1_private_ip:
    description: IP address of the server in the private network
    value: { get_attr: [ server1, first_address ] }
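Optionally, the template can be checked against the Heat API before creating a stack (assuming the installed heat client provides template-validate):
heat template-validate -f test-stack.yml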
Use the heat stack-create command to create a stack from this template:
$ NET_ID=$(nova net-list | awk '/ demo-net / { print $2 }')
$ heat stack-create -f test-stack.yml \
-P "ImageID=cirros-0.3.2-x86_64;NetID=$NET_ID" testStack
+--------------------------------------+------------+--------------------+----------------------+
| id | stack_name | stack_status | creation_time |
+--------------------------------------+------------+--------------------+----------------------+
| 477d96b4-d547-4069-938d-32ee990834af | testStack | CREATE_IN_PROGRESS | 2014-04-06T15:11:01Z |
+--------------------------------------+------------+--------------------+----------------------+
Verify that the stack was created successfully with the heat stack-list command:
$ heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| 477d96b4-d547-4069-938d-32ee990834af | testStack  | CREATE_COMPLETE | 2014-04-06T15:11:01Z |
+--------------------------------------+------------+-----------------+----------------------+
Troubleshooting stack-create
heat stack-create -f test-stack.yml -P "ImageID=cirros-0.3.2-x86_64;NetID=$NET_ID" testStack
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| e39e7cff-f9d9-4854-87bf-78e809cb2303 | testStack  | CREATE_IN_PROGRESS | 2014-10-13T14:14:50Z |
+--------------------------------------+------------+--------------------+----------------------+
less /var/log/heat/engine.log
heat resource-list testStack
+---------------+--------------------------------------+------------------+-----------------+----------------------+
| resource_name | physical_resource_id                 | resource_type    | resource_status | updated_time         |
+---------------+--------------------------------------+------------------+-----------------+----------------------+
| server1       | d18714ca-8ad2-41ca-9d65-b9b8b21a4f70 | OS::Nova::Server | CREATE_FAILED   | 2014-10-13T14:14:51Z |
+---------------+--------------------------------------+------------------+-----------------+----------------------+
heat resource-show testStack server1
+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| Property               | Value                                                                                                                                      |
+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| description            |                                                                                                                                            |
| links                  | http://controller:8004/v1/53a50dea852e4714bc51c4946cf7ff71/stacks/testStack/e39e7cff-f9d9-4854-87bf-78e809cb2303/resources/server1 (self) |
|                        | http://controller:8004/v1/53a50dea852e4714bc51c4946cf7ff71/stacks/testStack/e39e7cff-f9d9-4854-87bf-78e809cb2303 (stack)                  |
| logical_resource_id    | server1                                                                                                                                    |
| physical_resource_id   | d18714ca-8ad2-41ca-9d65-b9b8b21a4f70                                                                                                       |
| required_by            |                                                                                                                                            |
| resource_name          | server1                                                                                                                                    |
| resource_status        | CREATE_FAILED                                                                                                                              |
| resource_status_reason | Error: Creation of server Test server failed.                                                                                              |
| resource_type          | OS::Nova::Server                                                                                                                           |
| updated_time           | 2014-10-13T14:14:51Z                                                                                                                       |
+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
heat event-list testStack
+---------------+--------------------------------------+-----------------------------------------------+--------------------+----------------------+
| resource_name | id                                   | resource_status_reason                        | resource_status    | event_time           |
+---------------+--------------------------------------+-----------------------------------------------+--------------------+----------------------+
| server1       | 4cbad49a-7a8e-41d9-9ea1-c7b9bd9de2db | state changed                                 | CREATE_IN_PROGRESS | 2014-10-13T14:14:51Z |
| server1       | bb0bfdf5-1ac4-4fdf-a1c7-f5b0eae5c5fb | Error: Creation of server Test server failed. | CREATE_FAILED      | 2014-10-13T14:14:52Z |
+---------------+--------------------------------------+-----------------------------------------------+--------------------+----------------------+
heat event-show server1 bb0bfdf5-1ac4-4fdf-a1c7-f5b0eae5c5fb
usage: heat event-show <NAME or ID> <RESOURCE> <EVENT>
heat event-show: error: too few arguments
The command also needs the stack name:
heat event-show testStack server1 bb0bfdf5-1ac4-4fdf-a1c7-f5b0eae5c5fb
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property | Value |
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| event_time | 2014-10-13T14:14:52Z |
| id | bb0bfdf5-1ac4-4fdf-a1c7-f5b0eae5c5fb |
| links | http://controller:8004/v1/53a50dea852e4714bc51c4946cf7ff71/stacks/testStack/e39e7cff-f9d9-4854-87bf-78e809cb2303/resources/server1/events/bb0bfdf5-1ac4-4fdf-a1c7-f5b0eae5c5fb (self) |
| | http://controller:8004/v1/53a50dea852e4714bc51c4946cf7ff71/stacks/testStack/e39e7cff-f9d9-4854-87bf-78e809cb2303/resources/server1 (resource) |
| | http://controller:8004/v1/53a50dea852e4714bc51c4946cf7ff71/stacks/testStack/e39e7cff-f9d9-4854-87bf-78e809cb2303 (stack) |
| logical_resource_id | server1 |
| physical_resource_id | d18714ca-8ad2-41ca-9d65-b9b8b21a4f70 |
| resource_name | server1 |
| resource_properties | { |
| | "admin_pass": null, |
| | "user_data_format": "HEAT_CFNTOOLS", |
| | "admin_user": null, |
| | "name": "Test server", |
| | "block_device_mapping": null, |
| | "key_name": null, |
| | "image": "cirros-0.3.2-x86_64", |
| | "availability_zone": null, |
| | "image_update_policy": "REPLACE", |
| | "software_config_transport": "POLL_SERVER_CFN", |
| | "diskConfig": null, |
| | "metadata": null, |
| | "personality": {}, |
| | "user_data": "", |
| | "flavor_update_policy": "RESIZE", |
| | "flavor": "m1.tiny", |
| | "config_drive": null, |
| | "reservation_id": null, |
| | "networks": [ |
| | { |
| | "uuid": null, |
| | "fixed_ip": null, |
| | "network": "ec3b34ac-8890-4a2e-b223-29e914888d7b", |
| | "port": null |
| | } |
| | ], |
| | "security_groups": [], |
| | "scheduler_hints": null |
| | } |
| resource_status | CREATE_FAILED |
| resource_status_reason | Error: Creation of server Test server failed. |
| resource_type | OS::Nova::Server |
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Troubleshooting 2
[root@controller openstack]# heat stack-list
+--------------------------------------+------------+---------------+----------------------+
| id | stack_name | stack_status | creation_time |
+--------------------------------------+------------+---------------+----------------------+
| 28195295-be4e-4c89-9d6e-a00e6d51ff58 | testStack | CREATE_FAILED | 2014-10-13T14:30:54Z |
+--------------------------------------+------------+---------------+----------------------+
[root@controller openstack]# nova list
+--------------------------------------+-------------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------+--------+------------+-------------+----------+
| 5230eb6e-c152-459f-8a58-15b9ae21844e | Test server | ERROR | - | NOSTATE | |
+--------------------------------------+-------------+--------+------------+-------------+----------+
[root@controller openstack]#
[root@controller openstack]#
[root@controller openstack]# nova show 5230eb6e-c152-459f-8a58-15b9ae21844e
+--------------------------------------+------------------------------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | error |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2014-10-13T14:30:56Z |
| fault | {"message": "No valid host was found. ", "code": 500, "created": "2014-10-13T14:30:57Z"} |
| flavor | m1.tiny (1) |
| hostId | |
| id | 5230eb6e-c152-459f-8a58-15b9ae21844e |
| image | cirros-0.3.2-x86_64 (4e05adaa-bc94-45be-9ed0-7e3931df553b) |
| key_name | - |
| metadata | {} |
| name | Test server |
| os-extended-volumes:volumes_attached | [] |
| status | ERROR |
| tenant_id | 53a50dea852e4714bc51c4946cf7ff71 |
| updated | 2014-10-13T14:30:57Z |
| user_id | ec912e9a345a41e28c9781aeda1e97fb |
+--------------------------------------+------------------------------------------------------------------------------------------+
No authorized image. Fix: enable Keystone authentication on the compute node:
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
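After changing nova.conf, restart the compute service so the setting takes effect:
systemctl restart openstack-nova-compute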
"Unexpected vif_type=%s") % vif_type)\n', u'NovaException: Unexpected vif_type=binding_failed\n']
This is shown on the controller:
tail -n40 /var/log/nova/nova-scheduler.log
Try 1
On the compute node:
systemctl stop neutron-openvswitch-agent.service
ovs-vsctl del-br br-int
ovs-vsctl add-br br-int
systemctl restart openstack-nova-compute
systemctl start neutron-openvswitch-agent.service
Try 2
On network node
neutron agent-list
neutron agent-show AGENT_ID
Heat
Compute Service
Network Service
Troubleshooting
RabbitMQ does not start at boot
RabbitMQ fails to start because epmd (the Erlang port mapper daemon) was killed.
Workaround:
rpm -ql epmd
vim /usr/lib/systemd/system/epmd.socket
# replace 127.0.0.1:
[Socket]
ListenStream=0.0.0.0:4369
vim /usr/lib/systemd/system/rabbitmq-server.service
[Unit]
...
After=epmd.socket
Requires=epmd.socket
Host may not be resolved:
Nov 26 15:04:51 devcontroller.localdomain su[2409]: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Nov 26 15:04:51 devcontroller.localdomain su[2411]: (to rabbitmq) root on none   <--------- HERE
Nov 26 15:04:51 devcontroller.localdomain su[2411]: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
vim /etc/hosts
127.0.0.1 devcontroller localhost localhost.localdomain localhost4 localhost4.localdomain4   <---- HERE
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.201.29 devcontroller devcontroller.localdomain
10.10.11.2 devcontroller devcontroller.localdomain
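A quick check that the name resolves to the intended address:
getent hosts devcontroller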
Ext-gateway on Router DOWN
* https://bugzilla.redhat.com/show_bug.cgi?id=1054857
[root@puma05 ~]# ovs-vsctl show
9386551f-4143-424b-a94a-e68f75dcd024
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
Port "qg-7c5ae8e7-06"
Interface "qg-7c5ae8e7-06"
type: internal
[root@puma05 ~]# ovs-ofctl dump-ports-desc br-ex
OFPST_PORT_DESC reply (xid=0x2):
2(eth3.195): addr:80:c1:6e:07:d2:4c
config: 0
state: 0
current: 10GB-FD FIBER
advertised: FIBER
supported: FIBER AUTO_PAUSE
speed: 10000 Mbps now, 0 Mbps max
4(phy-br-ex): addr:22:39:48:fa:2a:c4
config: 0
state: 0
current: 10GB-FD COPPER
speed: 10000 Mbps now, 0 Mbps max
6(qg-7c5ae8e7-06): addr:7c:01:00:00:00:00
config: PORT_DOWN <-- HERE
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
LOCAL(br-ex): addr:80:c1:6e:07:d2:4c
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
Solution
Recreate br-ex or bring it up manually, or better, use the provider-network approach described below.
How it should work
Further clarification following Bob's comment: there are two different ways to connect a router to its external network.
The old approach uses br-ex: the router's external leg is connected to br-ex, and br-ex is connected to a NIC attached to the external network. With this approach you cannot hook up multiple external networks to one L3 agent (so that each router may be connected to a different external network).
The new approach uses provider networks: the external leg of a router is connected back to br-int, and flows installed on br-int connect it to a bridge, which is connected to a physical NIC. This way, you can create multiple external networks on a single L3 agent. This code was backported to RHOS 4.0.
To conclude, the old approach is no longer being worked on, and the new approach does not have this bug. We just have to make sure that the deployment tools set the correct values so that the new approach is used by default (the external bridge setting should be empty or 'provider', and the provider-network fields have to be filled out in the L3 agent configuration).
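A rough sketch of the provider-network style wiring described above, not a complete recipe (the physical network label "external" and the flat type are assumptions; flat would also need to be listed in the ML2 type_drivers and in [ml2_type_flat] flat_networks):
# map a physical network label to br-ex for the OVS agent
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs bridge_mappings external:br-ex
# leave the L3 agent's external bridge empty so router legs attach via br-int
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge ""
# create the external network as a provider network
neutron net-create ext-net --shared --router:external=True \
--provider:network_type flat --provider:physical_network external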


