ImoveFan Worklog
- Create Eclipse Luna workspace: /mnt/nfv/odl/10odldevws → /home/dang/data/src/50imovefanws/10odldevws
- GIT: git@git.dai-labor.de:odl_dmm/ima-controller.git
Backlog
- 2015-02-20
- 2015-03-02
- 2015-03-03
2015-04-23 HW Call
- changes to openvswitch if necessary.
- HW sends specification of the mobility management app
2015-03-05
Huawei telco 2015-03-05
- Mininet + ODL mobility management
- compare with Huawei solution
- ODL is more scalable
- First step: write helloworld app with ODL
- Using OVSDB, CI, etc
2015-03-03
Realize iMoveFAN scenario
Trace packet path from VM to EXT net: http://localhost/wiki/doku.php?id=work_dai_labor:projects:imovefan:imovefan_ws:openstack_odl:openvswitch_understanding&#which_qvo
2015-03-02
Realizing simple mobility management with OpenStack
Description:
Guarantee a running streaming service while one VM is migrated to another subnet. We migrate the server VM because we show the video on the client VM, which requires a constant VNC connection.
How?
- Create 3 networks: 2 tenant networks + 1 external network. The external network is for the client VM. The server VM will roam btw. the other 2 tenant networks.
- Client: Create a normal desktop VM for video playing
- Server: ffmpeg UDP/TCP stream
- Flow OVSDB modification
- What rules will be deleted, added?
- compare dump flow tables before and after migration.
- How? OVSDB cli interface
- Controller programming
- Identify information needed for flow rule / service management: host name, ip, MAC
- an agent sends this info to the controller periodically?
- Controller detects / VM triggers mobility event?
- Controller logic:
- detect mobility
- calculate rules
- program OVSDB
- GUI?
- django? js? auto update data.
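The controller loop above (detect mobility, calculate rules, program OVSDB) needs the host info the agent reports. A minimal sketch of such a reporting agent, assuming everything about it: the endpoint URL, port, and JSON field names are hypothetical, not a defined ODL API.

```shell
# Hypothetical reporting agent: collect host name, IP, and MAC and print
# the JSON payload that would be POSTed to the controller periodically.
HOST=$(hostname)
IP=$(hostname -I 2>/dev/null | awk '{print $1}')
MAC=$(cat /sys/class/net/*/address 2>/dev/null | head -n 1)
PAYLOAD="{\"host\":\"$HOST\",\"ip\":\"${IP:-unknown}\",\"mac\":\"${MAC:-unknown}\"}"
echo "$PAYLOAD"
# Periodic report to an assumed controller endpoint (left disabled here):
# while true; do
#   curl -s -X POST -H 'Content-Type: application/json' \
#        -d "$PAYLOAD" http://controller:8080/mobility/host-report
#   sleep 10
# done
```

Whether the agent pushes periodically or the controller detects the mobility event itself is exactly the open question above; this sketch only covers the push variant.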
[Start working…]
compare dump flow tables before and after migration.
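One way to sketch that comparison: normalize the dumps (strip volatile counters like duration and n_packets) and diff them, so only real rule changes show. Sample dump contents are inlined here as an assumption; on a live node they would come from `ovs-ofctl dump-flows br-int`.

```shell
# Sample "before"/"after" flow dumps; on a live node instead do:
#   ovs-ofctl -O OpenFlow13 dump-flows br-int > before.txt   (pre-migration)
#   ovs-ofctl -O OpenFlow13 dump-flows br-int > after.txt    (post-migration)
cat > before.txt <<'EOF'
 cookie=0x0, duration=12.3s, table=0, n_packets=10, priority=1 actions=NORMAL
 cookie=0x0, duration=12.3s, table=0, n_packets=2, dl_dst=fa:16:3e:aa:bb:cc actions=output:3
EOF
cat > after.txt <<'EOF'
 cookie=0x0, duration=1.1s, table=0, n_packets=11, priority=1 actions=NORMAL
 cookie=0x0, duration=1.1s, table=0, n_packets=0, dl_dst=fa:16:3e:aa:bb:cc actions=output:5
EOF
# Strip volatile counters so only real rule changes remain, then diff.
normalize() { sed -e 's/duration=[^,]*, //' -e 's/n_packets=[^,]*, //' "$1" | sort; }
normalize before.txt > before.norm
normalize after.txt  > after.norm
diff before.norm after.norm || true   # '<' lines: removed rules, '>' lines: added rules
```

In the sample, the dl_dst rule's output port changing from 3 to 5 is the kind of rewrite the controller would have to program after a migration.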
2015-02-20
Ref: 2015-02-17+18
20150219
PhD discussion
- Define the scale: biz logic – core – RAN –> physical
- SDN + Cog. Radio –>
  * Dynamic env.
  * Real migration (Product + HW VM)
  * Mobility management
  * network densification
  * ss
  * energy efficiency
  * resource allocation –> Cem, Doruk, Thuy
  * coexistence of SDN, NFV, Cloud
  * Service chaining model
  * Service orchestration in SDN
- next: Select from above the contribution focus (from physical to biz)
- next: Identify performance indicators
Make a thesis title from those words, 1 sentence only!
Edit 20150220:
TODO: Read papers, now!
- Call it an app: agile/self-organized network infrastructure for on-demand/next-generation service delivery.
- Bring app to mobile user.
- Distributed, volatile service deployment.
End edit:
2015-02-17+18
TODO
- Integration ODL:
- Install devstack to get something working first and learn the configurations
- Use the Ubuntu devstack provided by Flavio. Copy the conf files to /mnt/nfv/devstack_ubuntu-etc_neutron
- Get a hello-world ODL app and OpenStack Neutron development –> Cem & Doruk
Issues
- vagrant cannot destroy or up –> manually remove the image from /var/lib/libvirt/images, restart libvirtd
- The host firewall is rewritten by vagrant/libvirtd NAT? This blocks access to the host.
Ostack ODL with devstack
Questions:
Who creates br-ex? If OVS, how does it assign the IP?
L3-agent config: br-ex is the ext gateway. It has an IP assigned. OpenStack doesn't know about the physical port on the server.
Who sets up the bridges br-int and br-ex when OpenStack starts? OpenStack has an l3-agent and an l3-interface-driver. ODL also has L3. Neutron uses the whiteboard pattern for the configuration of OVS. Need to understand how. Use whatever is available to configure OVS???
How do OpenStack and ODL sync?
–> Start with OVSDB debug below:
OVSDB in Eclipse for Debug
- ODL app tutorial: http://sdnhub.org/tutorials/opendaylight/
- Debug ODL w Eclipse: https://alagalah.wordpress.com/2013/12/14/debugging-opendaylight-in-eclipse/
- Get code & import: https://wiki.opendaylight.org/view/OVSDB:Developer_Guide#Testing_patches
- Test with Ostack: https://wiki.opendaylight.org/view/OVSDB:OVSDB_OpenStack_Guide
Create Eclipse Luna workspace: /mnt/nfv/odl/10odldevws → /home/dang/data/src/50imovefanws/10odldevws
Setup eclipse & import controller: https://wiki.opendaylight.org/view/OpenDaylight_Controller:Eclipse_CLI_Setup
Pull ovsdb source: http://docs.inocybe.com/dev-guide/content/_pulling_code_via_git_cli.html
Create branch from Helium-SR2
cd ovsdb
git tag -l
git checkout -b td_helium-sr2 release/helium-sr2
Compile ovsdb
mvn clean install
Some Test Errors:
- read the README!
Solution: disable the Karaf distribution in pom.xml:49 –> use the OSGi distribution.
https://bugs.opendaylight.org/show_bug.cgi?id=1669
=== Problem with Java 8 ===
Many ODL projects, including Controller, depend on Enunciate, which depends on APT. In Java 8, apt was replaced with javac and removed.
* https://bugs.opendaylight.org/show_bug.cgi?id=2626
* https://jira.codehaus.org/browse/ENUNCIATE-701
====== —Passed Items— ======
====== 2015-02-12 ======
===== OPNFV - ODL interlock call =====
Tim: problem: ODL bundles start slowly and can't provision OVS. ODL sends 200 OK without creating the tunnel.
Which feature to look at when fixing it, Tim? The network.
Ryan: Neutron uses the whiteboard pattern to allow plugin SB providers to modify OVS. SB providers vote on a Neutron request. If one of them says no, nothing happens. If all say yes, Neutron waits for them to carry out its request. So they send 200 OK to Neutron and take care of the issue.
200 OK comes back when no SB is available and no SB says no?? It's reasonable and probable.
Dirty:
1. No coordination btw Neutron and ODL except fire-and-forget HTTP msgs.
2. Neutron does not have an NB protocol
Idea: it's only modifying the DB, so why not let Neutron check the status?
We can't tell Neutron anything. ODL can mirror the port and see if the request is done.
Add to the list:
Everything was fine, then ODL is gone and comes back again. What then?
ml2-agent polls for status after 200 OK. It checks the port binding status.
The Neutron data structure has a status. So ml2 polls and changes the status.
Summary: Neutron is not in sync with what happens after its request to change the network state.
ODL has a DB of the network state, while Neutron does not. So one can ask ODL whether the state is true or not.
Solution:
* throw a block if no SB agent is available
* No back channel from ODL to Neutron to tell what's wrong.
* Timer for each action?
===== 20150212 - HW Telco =====
* Map the relation btw OpenStack and VM instances, tenant networks.
* 2nd week of March: PhD dimension.
* my work in iMoveFAN project.
====== 2015-02-07 ======
* Setup ODL L3 for openstack
* following this: https://wiki.opendaylight.org/view/OVSDB_Integration:L3Fwd
* Install the ODL driver on the controller and network nodes.
* remove br-ex, reset ens5; manually: ovs-vsctl add-br br-ex
* Q1: Tenant network seems to work again. No qg-xxxx is created on br-ex. Who's responsible for this? ODL, the l3-agent?
* Q2: running Helium 0.2.2-SR2, l3agent.NullDriver. The l3 agent does not find routers (after create or delete). The netns for the router is created but no port is created on OVS. ODL's responsibility?
* TODO: l3_agent does not create a port for the router interface.
2015-02-10 19:37:11.194 1621 TRACE neutron.agent.l3_agent Stderr: 'Device “qr-47ad7a83-47” does not exist.\n'
====== 2015-02-03-04-05-06 ======
* Set up new OpenStack multinode /mnt/nfv/mystack-201501 with puppet... working
* Modified puppet Openstack: https://github.com/thuydang/puppetlabs-openstack/tree/td_up_5.0.2
* To USE: puppet module install puppetlabs-openstack --version=5.0.2, then replace /module/openstack with the git module above.
* Clone /mnt/nfv/mystack-201501 to /mnt/nfv/mystack-20150203 to start with ODL integration.
* Make /mnt/nfv/mystack-20150203 stable Openstack with puppet.
* Install Openstack Puppet by Cem
* Clone /mnt/nfv/mystack-20150203 to /mnt/nfv/mystack-20150205 to start with ODL integration.
* /mnt/nfv/mystack-20150205 Openstack configured for ODL. Ext-net not properly configured!
* Clone /mnt/nfv/mystack-20150205 to /mnt/nfv/mystack-20150206
===== Modified puppet firewall/post.pp to open api interface =====
====== 20150127-28-29 ======
* backup Openstack to mystack-dev/bak_20150127
* Install 3 node setting with ODL
* Prepare slides: ovsdb, odl
===== Note =====
ovs-ofctl dump-flows br-int
2015-01-29T10:58:51Z|00001|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: version negotiation failed (we support version 0x01, peer supports version 0x04)
Need to enable OF13:
ovs-vsctl set bridge br-int protocols=OpenFlow10,OpenFlow11,OpenFlow12,OpenFlow13
[root@devcontroller fedora]# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
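With OpenFlow13 enabled the dump works. A quick sanity check on such dumps is a per-table flow count; the dump content below is a made-up sample, since no live br-int is assumed here.

```shell
# Count flows per OpenFlow table in an ovs-ofctl dump (sample inlined).
cat > flows.txt <<'EOF'
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=10.5s, table=0, n_packets=4, actions=NORMAL
 cookie=0x0, duration=9.1s, table=20, n_packets=0, actions=drop
 cookie=0x0, duration=9.1s, table=20, n_packets=0, actions=NORMAL
EOF
# Each flow line carries a table=N field; tally them.
grep -o 'table=[0-9]*' flows.txt | sort | uniq -c
```

On a live node, pipe `ovs-ofctl -O OpenFlow13 dump-flows br-int` into the same grep/sort/uniq instead of using the sample file.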
==== Clean Openstack installation ====
yum erase -y openstack* mariadb openvswitch
rm -rf /var/lib/mysql
rm /root/.my.cnf
==== update manage Openstack services ====
* https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/Release_Notes/section_atomic-offline-upgrade.html
Update databases
<code>
Ensure the openstack-utils package is installed:
# yum install openstack-utils
Take down all the services on all the nodes. This step depends on how your services are distributed among your nodes.
To stop all the OpenStack services running on a host, run:
# openstack-service stop
Perform a complete upgrade of all packages, and then flush expired tokens in the Identity service (might decrease the time required to synchronize the database):
# yum upgrade
# keystone-manage token_flush
Upgrade the database schema for each service:
# openstack-db --service serviceName --update
For example:
# openstack-db --service keystone --update
For reference purposes, the following table contains the commands run by openstack-db.
Table 3.2. openstack-db commands
Service    Command
Identity (keystone)
On the Identity service host, run:
# keystone-manage db_sync
Block Storage (cinder)
On the Block Storage service host, run:
# cinder-manage db sync
Object Storage (swift)
Object Storage does not require an explicit schema upgrade.
Image Service (glance)
On the Image API host, run:
# glance-manage db_sync
Compute (nova)
On the Compute API host, run:
# nova-manage db sync
OpenStack Networking
On the Networking service host, run:
# neutron-db-manage \
  --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugin.ini upgrade head
Warning: These instructions require at least version 2013.2-9 of the openstack-neutron package.
Review the resulting configuration files. The upgraded packages will have installed .rpmnew files appropriate to the Havana version of the service.
Upgrade the Dashboard service on the Dashboard host:
# yum upgrade *horizon* *openstack-dashboard*
Manually configure the Dashboard configuration file: /etc/openstack-dashboard/local_settings.
In general, the Havana services will run using the configuration files from your Grizzly deployment. However, because Dashboard's file was substantially changed between versions, it must be manually configured before its services will work correctly:
Back up your existing local_settings file.
Replace local_settings with local_settings.rpmnew.
Update your new local_settings file with any necessary information from your old configuration file (for example, SECRET_KEY or OPENSTACK_HOST).
If you are running Django 1.5 (or later), you must ensure that there is a correctly configured ALLOWED_HOSTS setting in your local_settings file. ALLOWED_HOSTS contains a list of host names that can be used to contact your Dashboard service:
If people will be accessing the Dashboard service using “http://dashboard.example.com”, you would set:
ALLOWED_HOSTS=['dashboard.example.com']
If you are running the Dashboard service on your local system, you can use:
ALLOWED_HOSTS=['localhost']
If people will be using IP addresses instead of, or in addition to, hostnames, an example might be:
ALLOWED_HOSTS=['dashboard.example.com', '192.168.122.200']
Note
For more information about the ALLOWED_HOSTS setting, see the Django Documentation.
Start all OpenStack services on all nodes; on each host, run:
# openstack-service start
</code>
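The per-service schema upgrades in the table above can be scripted. A dry-run sketch: it only echoes the openstack-db wrapper command per service instead of executing anything (swift is skipped since it needs no explicit schema upgrade).

```shell
# Dry-run: print the wrapper command for each service needing a schema
# upgrade. Echo instead of execute, so this is safe to run anywhere.
for svc in keystone cinder glance nova neutron; do
  echo "openstack-db --service $svc --update"
done
```

Dropping the `echo` turns it into the real upgrade loop, but only after all services are stopped as described above.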
====== 20150123 ======
Objectives: Investigate Openstack & Opendaylight
Outcome: a tutorial to configure openstack to work with opendaylight and simple experiment.
Document here: http://localhost/wiki/doku.php?id=work_dai_labor:projects:ima:ima_ws:network:openstack_odl_tutorial_summary&#installation_openstack
Problem:
* the GRE tunnel seems not to work
* DHCP does not work
* Solved: change and commit iptables rules to accept traffic from the DATA network
====== 20150122 ======
===== Sprint-Meeting with Cem Doruk =====
* Define PhD research problem (identify research interest)
* Mobility Management 2G to 5G
* Traditional: Home agent, foreign agent
* Applying the concept to LTE.
* Proxy MIP: Local Anchor Point, Mobile GW. well-known solution
* LAP is Serving GW in LTE.
* Control plane and data plane
* Questions that can be answered: virtualize LTE and turn them to PhD
* e.g. PhD: Interdomain Mobility Management
* We do things new, clean slate: paper, pattern, PhD
* Treat the LTE/core network as a black box, a.k.a. a controller that manages mobility
* The UE moves and it is the core that cares about connectivity –> a lot of innovations
* What we have: an SDN controller controls black boxes to ensure connectivity
* Research question: the architecture that scales
* –> scalability of centralized controller in flat IP network
* Research question: Reliability: redundancy, failure, over provisioned
* –>
* Research on Mobility Management: Layers: Server space –> internet cloud –> core –> RAN –> UE
* User space layer: parameters
* network selection
* RAN schedules resources for UE (algorithm for optimal resource scheduling) based on channel quality (CQI). There are more parameters!!
* Core Network Layer:
* Routing algorithm: Handover Threshold, Location
* Performance measurement
* Latency
* QoE
* Load balancing
* Controller: logically centralized
* so a controller at each layer.
* hierarchical distributed controllers that represent the centralized controller.
* Discuss the vision - scrum story
* virtualization
* dynamic management (of virtualized resources): 3rd party - openstack + hypervisor (SDN)
* which SDN controller? OpenDaylight
* –> using those tools to achieve the envisioned layers –> simple contribution
* Controllers at UE, RAN, Core should talk to each other. Inter-controller interaction
* 2 UE controllers talk with RAN controller
===== Try ODL with mininet =====
ref:
* https://wiki.opendaylight.org/view/OVSDB:dlux_mininet
* https://wiki.opendaylight.org/view/OpenDaylight_Controller:Installation
* Cannot use the Fedora laptop directly: causes problems with NetworkManager, and removing it may mess up a working laptop. –> try with VMs
* Using devopendaylight
==== 1. Download and unzip the release: ====
wget https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.zip
unzip
ln -s $dir odl-karaf-distribution
2. Start karaf:
cd into unzip dir
bin/karaf
3. Install the required features:
feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-l2switch-switch odl-dlux-core
4. Build mininet to ensure we have OpenFlow13 support:
mininet, openvswitch are located in /home/fedora/opendaylight-test/mininet
To use OpenFlow13 I downloaded the latest mininet code from 08/21/14. Anything later should be fine.
Previous comments said mininet had to be patched to gain OpenFlow13, but that was not necessary with the latest code.
git clone git://github.com/mininet/mininet
git checkout -b 2.1.0p1 2.1.0p1
sudo ./mininet/util/install.sh -n
These were the previous instructions for modifying mininet to support OpenFlow13:
Look in '/opt/mininet/mininet/node.py' for something like 'set bridge %s protocols=%s' % ( self, self.protocols )
After I edit that file I copy it: sudo cp /opt/mininet/mininet/node.py /opt/mininet/build/lib/mininet/node.py
Then build via ./install.sh -n.
Probably don't need the cp to the build lib since I think the install does that but it doesn't hurt.
More details on this step can be obtained here: https://lists.opendaylight.org/pipermail/ovsdb-dev/2014-June/000490.html and here https://github.com/dave-tucker/mininet/commit/21323c27347f98fbb416e6310296cea9339fc945
- Start mininet:
export ODL_IP=127.0.0.1
sudo mn --mac --switch=ovsk,protocols=OpenFlow13 --controller=remote,ip=${ODL_IP},port=6653 --test pingall  (runs a ping sanity check)
sudo mn --mac --switch=ovsk,protocols=OpenFlow13 --controller=remote,ip=${ODL_IP},port=6653 --topo=tree,3
Creates a three-layered topology of 7 switches, with two hosts attached to each leaf switch.
Note 1: Substitute ${ODL_IP} with the IP address of the host where ODL (from Karaf) is running.
Note 2: In order for the above commands to work under Fedora 20, I had to first install and start openvswitch with the commands:
sudo yum install openvswitch
sudo systemctl start openvswitch.service
6. Verify topology in DLUX:
http://${ODL_IP}:8181/dlux/index.html
If ODL is running on localhost, you can use this: http://localhost:8181/dlux/index.html
The default user/pass is admin/admin.
At this point you should see the topology in the GUI. Reload and wait for ODL topology detection.
7. Play around
mn> link s1 s2 down
mn> link s1 s2 up
20150120 Kickoff meeting
20141217 Pre-kickoff Telco
- Internal management at DAI & Sync with Huawei
- Adapted Scrum management
- Regular progress monitoring every 2-4 weeks depending on the tasks.
- tentative agenda for kickoff meeting
- When: 20/01/2015
- Where: Munich
- Agenda
- intro Huawei
- intro Dai
- Task: slide with photo, introduction
- iMoveFAN scope & focus (DAI prepares presentation)
- iMoveFAN project management (DAI prepares presentation)
- first milestone discussion (DAI prepares presentation)
- iMoveFAN period reviews
iMoveFAN project scope & focus: Manzoor
- → we review against the proposal
Where are we regarding the first milestone?
- dimensions investigated, what was carried out
- what are the takeaways (most important): e.g. MCN project, C-RAN…