OpenDaylight and OpenStack
OpenDaylight ships many components. The ones important for the OpenStack integration are:
- The controller: SDN controller framework and basic features (Cisco)
- OpenFlow Protocol Library: library supporting OpenFlow 1.3 and later versions (Pantheon)
- OVSDB Integration: configuration/management of OVS (Open vSwitch) via OVSDB (University of Kentucky)
- OpenStack service: supports OVSDB, OpenDOVE, VTN
- (maybe) VTN multi-SDN-controller support
OpenDaylight & OpenStack integration
Helium
Karaf guide
Extract helium distribution
export JAVA_PERM_MEM=1024m
export JAVA_MAX_PERM_MEM=1500m
export JAVA_MAX_MEM=1700m
export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=1024m"
./bin/karaf
Load features:
feature:install odl-ovsdb-openstack odl-ovsdb-northbound odl-restconf odl-mdsal-apidocs odl-adsal-all odl-adsal-northbound odl-dlux-core
Make sure features are loaded:
feature:list -i | grep odl-l2switch-switch
odl-l2switch-switch | 0.1.0-Helium-RC0 | x | l2switch-0.1.0-Helium-RC0 | OpenDaylight :: L2Switch :: Switch
feature:list -i | grep odl-restconf
odl-restconf | 1.1-Helium-RC0 | x | odl-mdsal-1.1-Helium-RC0 | OpenDaylight :: Restconf
bundle:list
web:list
Configure features at startup (Optional)
Features can be installed automatically when karaf starts by editing features file:
vi karaf-distro/etc/org.apache.karaf.features.cfg
modify this line:
featuresBoot=config,standard,region,package,kar,ssh,management
to be:
featuresBoot=config,standard,region,package,kar,ssh,management,odl-ovsdb-openstack,odl-ovsdb-northbound,odl-restconf,odl-mdsal-apidocs,odl-adsal-all,odl-adsal-northbound,odl-dlux-core
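The edit above can also be scripted. A minimal sketch using GNU sed, run here against a scratch copy of the file (point `cfg` at your real karaf-distro/etc/org.apache.karaf.features.cfg instead):

```shell
# Scratch copy standing in for etc/org.apache.karaf.features.cfg.
cfg=$(mktemp)
echo 'featuresBoot=config,standard,region,package,kar,ssh,management' > "$cfg"

# Append the ODL features to the existing featuresBoot line in place.
sed -i 's/^featuresBoot=.*/&,odl-ovsdb-openstack,odl-ovsdb-northbound,odl-restconf,odl-mdsal-apidocs,odl-adsal-all,odl-adsal-northbound,odl-dlux-core/' "$cfg"

grep '^featuresBoot=' "$cfg"
```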
Tricks
Commands
feature:list   # get all apps available
opendaylight-user@root> feature:install odl-dlux-core
opendaylight-user@root> feature:install odl-openflowplugin-all
opendaylight-user@root> feature:install odl-l2switch-all
opendaylight-user@root> bundle:list | grep Active
Debugging Karaf
Set root logger to ERROR:
log:set ERROR
Set the logger of the bundle you are debugging to TRACE:
log:set TRACE org.opendaylight.l2switch
See log in karaf:
log:display
or see log outside karaf:
tail -f karaf-distro/data/log/karaf.log
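When the log gets noisy, it helps to filter for just ERROR/WARN lines. A sketch below; the sample log lines are made up for illustration, so point `log` at karaf-distro/data/log/karaf.log on a real setup:

```shell
# Scratch log file standing in for karaf-distro/data/log/karaf.log;
# the three entries below are invented sample lines.
log=$(mktemp)
cat > "$log" <<'EOF'
2014-10-01 10:00:01 | INFO  | started bundle
2014-10-01 10:00:02 | ERROR | something failed
2014-10-01 10:00:03 | WARN  | something odd
EOF

# Keep only ERROR and WARN entries.
grep -E '\| (ERROR|WARN) ' "$log"
```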
Other
web:list
OpenStack
OVS compute, network
- Troubleshooting: http://www.yet.org/2014/09/openvswitch-troubleshooting/
Network Node
Stop services:
service neutron-server stop
service neutron-openvswitch-agent stop
Create neutron_ml2 database:
mysql -e "drop database if exists neutron_ml2;"
mysql -e "create database neutron_ml2 character set utf8;"
mysql -e "grant all on neutron_ml2.* to 'neutron'@'%';"
neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf \
  --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugin.ini upgrade head
Use the ML2 odl_mechanism.py mechanism driver to enable Neutron-to-ODL northbound communication:
[root@controller openstack]# cat /etc/neutron/plugins/ml2/ml2_conf.ini | egrep -v "^\s*(#|$)"
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = opendaylight
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[database]
sql_connection = mysql://neutronUser:neutronPass@controller/neutron_ml2
[odl]
nodes =
network_vlan_ranges = 1000:2000
tunnel_id_ranges = 1:1000
tun_peer_patch_port = patch-int
int_peer_patch_port = patch-tun
tenant_network_type = vlan
tunnel_bridge = br-tun
integration_bridge = br-int
controllers = http://10.10.10.216:8080:admin:admin
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[agent]
minimize_polling = True
[ml2_odl]
password = admin
username = admin
url = http://10.10.10.216:8080/controller/nb/v2/neutron
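The [ml2_odl] url is the endpoint Neutron calls into. A sketch of how you would probe it by hand, with the host and admin/admin credentials taken from the config above; the block only builds and prints the request, since it cannot reach a live controller here:

```shell
# Build the ODL Neutron northbound URL from the [ml2_odl] settings above.
odl_host="10.10.10.216"
odl_url="http://${odl_host}:8080/controller/nb/v2/neutron"

# On a live controller you would run:
#   curl -s -u admin:admin "${odl_url}/networks"
echo "GET ${odl_url}/networks"
```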
Get OVS to be controlled by ODL
ovs-vsctl set-controller br-int tcp:10.10.10.216:6633
(6633 is the OpenFlow port; 6640 is the OVSDB manager port used below.)
Configure the ovsdb instance to connect to OpenDaylight:
sudo ovs-vsctl set-manager tcp:192.168.120.1:6640
Check config:
ovs-vsctl list Manager
ovs-vsctl list Open_vSwitch .
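In the output of the commands above, both the OVSDB Manager and the OpenFlow Controller entries should report is_connected: true. A sketch that checks this against captured `ovs-vsctl show` output; the sample fragment mirrors the Controller-node output later in these notes, and on a live node you would capture it with `show=$(ovs-vsctl show)`:

```shell
# Sample `ovs-vsctl show` fragment (from the output further down).
show=$(cat <<'EOF'
Manager "tcp:10.10.11.4:6640"
    is_connected: true
Bridge br-int
    Controller "tcp:10.10.11.4:6633"
        is_connected: true
EOF
)

# Expect two connected entries: the manager and the controller.
connected=$(echo "$show" | grep -c 'is_connected: true')
if [ "$connected" -eq 2 ]; then
  echo "manager and controller connected"
else
  echo "missing connection(s)" >&2
fi
```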
Set up the GRE tunnel local IP (on both Compute and Network nodes? or only Compute?):
ovstbl=$(ovs-vsctl get Open_vSwitch . _uuid)
sudo ovs-vsctl set Open_vSwitch $ovstbl other_config:local_ip=$local_ip
Set up external network
Working with OpenDaylight and OpenStack
Web address:
openstack: http://devcontroller.localdomain (admin/admin)
odl ovsdb: http://devopendaylight:8181 (admin/admin)
Setup ODL OVSDB
Add management addr of ODL node:
The next step is to modify the "of.address" variable in the "configuration/config.ini" file, relative to the odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage directory. Open it in vi and set the management IP address of your ODL instance as the value of of.address.
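That edit can be scripted as well. A sketch on a scratch file, assuming GNU sed; point `cfg` at the real configuration/config.ini and substitute your own management IP (10.10.10.216 is the example address used elsewhere in these notes):

```shell
# Scratch copy standing in for configuration/config.ini.
cfg=$(mktemp)
echo 'of.address=' > "$cfg"

mgmt_ip="10.10.10.216"   # example management IP from these notes
sed -i "s/^of.address=.*/of.address=${mgmt_ip}/" "$cfg"

grep '^of.address=' "$cfg"
```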
Turn off simple forwarding
osgi> lb | grep simple
132|Active | 4|samples.simpleforwarding (0.4.1.SNAPSHOT) true
osgi> stop 132
osgi> lb | grep simple
132|Resolved | 4|samples.simpleforwarding (0.4.1.SNAPSHOT) true
Setup Troubleshooting
Recreate ovsdb on Controller and Compute nodes:
neutron-ovs-cleanup
ovs-vsctl del-br br-tun
ovs-vsctl del-br br-int
ovs-vsctl add-br br-int
OVSDB should now see 2 nodes with no ports.
Create VMs
neutron net-create admin-net
neutron subnet-create admin-net --name admin-subnet --gateway 192.168.1.1 192.168.1.0/24
neutron router-create admin-router
neutron router-interface-add admin-router admin-subnet
neutron security-group-rule-create --protocol icmp \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default
neutron security-group-rule-create --protocol tcp \
  --port-range-min 22 --port-range-max 22 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default
ovs-vsctl show still shows no ports.
nova boot --flavor m2.tiny --image $(nova image-list | grep 'cirros-0.3.2-x86_64\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep admin-net | awk '{print $2}') --num-instances 2 vm
After this, GRE ports are created on br-int on both nodes and…
On the Compute node, 2 tap ports are created for the VMs.
Controller
ovs-vsctl show
2e9c1eb5-0660-4c7a-beb5-eb99a51dce0a
    Manager "tcp:10.10.11.4:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:10.10.11.4:6633"
            is_connected: true
        Port "gre-10.10.11.3"
            Interface "gre-10.10.11.3"
                type: gre
                options: {key=flow, local_ip="10.10.11.2", remote_ip="10.10.11.3"}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.3.0"
Compute
[root@devcompute fedora]# ovs-vsctl show
8d0ff3b9-f1c4-464b-8ba0-2653c8737782
    Manager "tcp:10.10.11.4:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:10.10.11.4:6633"
            is_connected: true
        Port "tapfe2e47a7-b5"
            Interface "tapfe2e47a7-b5"
        Port "gre-10.10.11.2"
            Interface "gre-10.10.11.2"
                type: gre
                options: {key=flow, local_ip="10.10.11.3", remote_ip="10.10.11.2"}
        Port "tap7f7b1df7-fe"
            Interface "tap7f7b1df7-fe"
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.3.0"
Problems
- no DHCP
- can ping if IPs are set manually, but can't ping the router interface
Background knowledge
OpenStack OVSDB workflow
https://lists.opendaylight.org/pipermail/ovsdb-dev/2014-April/000364.html
[ovsdb-dev] How does instances live-migration work in openstack?
Brent Salisbury bsalisbu at redhat.com
Sun Apr 13 00:51:27 UTC 2014
Hi Forseth, comments inline.
Q1: What does the neutron ml2-plugin do after receiving the request from nova-api?
The details of the port bindings are covered in this deck from Kyle and Bob.
http://www.openstack.org/assets/presentation-media/ML2-Past-Present-and-Future.pptx
Q2:How does opendaylight controller do about changing network configuration?
From an ODL perspective, we see a new port created in the ODL OVSDB plugin via the OVSDB protocol, with the required metadata in the external_id field of the Interface table. That gets correlated to the API call from Neutron with the new port/network etc. Then, via OVSDB and OpenFlow, we build out overlays keyed on network segments from Neutron with VXLAN and/or GRE, and instantiate forwarding policy in the datapath via OF 1.3.
Q3: If the migration succeeds, how does the network configuration change between the two compute nodes?
We don't have support for live migration yet. It's something we have looked at but no one has prioritized it over services and stability as of yet. Feel free to join the IRC channel or the weekly call if you would like to work on this, propose a solution or discuss an implementation. It's certainly something the team would be interested in.
Cheers,
-Brent