Table of Contents
Thesis
What? Why? How?
Choosing a topic
- Contribution (Motivation)
- Real deployment?
- Industry interest?
- Technology introduction.
- Academic interest?
- Community
- What problem does the technology solve?
- Backward compatibility
- Future perspective of the technology
- Why this?
Model, formulas, and maths needed to get the paper accepted
I follow this research howto:
Topic - Realization of service-aware elastic network
workspace:
- /home/dang/mywork/00–current/00daipjsactual/80thesis/0proposalthesisimovefan * /home/dang/data/mydirectory/mywork/dailabor/00–current/99daiincu/99allpapers/new/thesistopic20150224
1: Title
Alternative titles:
- 1st title: Bringing App to Data: Enabling Service Delivery in Future Network Infrastructure
- Call an App: Agile/Self-organized network infrastructure for on-demand/next generation Service Delivery.
- Paths in the clouds: Service Mobility in Cloud data centers/ (Cloud networks).
More generic:
- Distributed Networking, SDN, CC.
More specific:
- Service Mobility: Mobility management, service orchestration, service chaining, nfv+cloud+sdn,
- Future Network Infrastructure, Cloud network:
- agent based, programmable network, SDN, NFV.
- Titles:
- Distributed controllers platform for service mobility in Cloud Networks.
- …
2: Keywords
PhD discussion keywords: SDN + Cognitive Radio, dynamic environments, real migration (product + HW VM), mobility management, network densification, energy efficiency, resource allocation, coexistence of SDN, NFV and Cloud, service chaining model, service orchestration in SDN
SDN, Service Mobility, Service Delivery, Cloud network, cloud data center, Network virtualization, transport layer
3: Selected Papers
Check keywords in the results page!
IEEE Journal on Selected Areas in Communications (JSAC)
Networking Challenges in Cloud Computing Systems and Applications
IEEE/ACM Transactions on Networking (ToN)
IEEE ICC
ACM SIGCOMM
IEEE INFOCOM
4: Read - Authors, Abstract, Introduction
1 to 5 stars to indicate relevance
(I) which problem it addresses,
(II) what solution it proposes,
(III) how the solution differs from previous solutions, and
(IV) what are the main contributions and conclusions.
SR-IOV Based Network Interrupt-Free Virtualization with Event Based Polling
Abstract
I/O virtualization suffers from high overhead, especially interrupt handling by the Virtual Machine Monitor. The authors improve Single Root I/O Virtualization (SR-IOV), the de facto standard I/O virtualization solution for Virtual Machine (VM) based high-performance computing. This paper is dedicated to finding an interrupt-free network processing scheme for high-speed network connections.
Enhancing Survivability in Virtualized Data Centers: A Service-Aware Approach
Related keywords: virtual network embedding
Abstract
A Survivable Virtual Infrastructure (SVI) consists of VMs and their backup VMs to help services recover from failures. While most management approaches treat each VM individually, a fundamental problem in such a fault-tolerant system is to find a mapping from each SVI to the physical data center network (substrate network). Specifically, each VM is mapped to a server and each virtual link in the virtual graph is mapped to one or multiple paths between corresponding servers in the physical network. This problem shares some similarities with the Network Virtualization (NV) problems [4], [10], [14], [26], [28] (virtual network embedding). The key idea of NV is to build a diversified Internet to support a variety of network services and architectures through a shared substrate (physical) network by assigning physical resources (such as nodes, links, and bandwidth) to multiple virtual networks in an on-demand manner.
To the best of our knowledge, we are the first to address this survivable mapping problem in the context of virtualized data centers. Our contributions are summarized as follows:
- We establish a general optimization framework for the SVI mapping problem, into which different VMP and VLM algorithms can be incorporated.
- We propose a polynomial-time optimal algorithm to solve the VLM subproblem, which can guarantee sufficient link bandwidth for failover traffic after any single server failure with minimum reserved bandwidth. An efficient heuristic algorithm is developed for the VMP subproblem.
- We present extensive simulation results obtained based on the real VM workload traces collected from the green data center at Syracuse University [6] to justify the effectiveness of the proposed algorithms.
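To make the SVI mapping concrete for myself, a generic virtual-network-embedding formulation (my own sketch and notation, not the paper's exact model) looks like:

```latex
% x_{v,s} = 1 iff VM v is placed on server s; f^{uv}_{ij} is the bandwidth of
% virtual link (u,v) routed over physical link (i,j) (hypothetical notation).
\begin{aligned}
& \textstyle\sum_{s} x_{v,s} = 1 && \forall v && \text{(each VM mapped to one server)}\\
& \textstyle\sum_{v} c_v\, x_{v,s} \le C_s && \forall s && \text{(server capacity)}\\
& \textstyle\sum_{(u,v)} f^{uv}_{ij} \le B_{ij} && \forall (i,j) && \text{(link capacity)}\\
& f^{uv} \text{ satisfies flow conservation between the servers hosting } u \text{ and } v.
\end{aligned}
```

The VMP subproblem would then concern the x variables and the VLM subproblem the f variables; the survivability angle of the paper adds reserved backup bandwidth so that failover traffic still fits after any single server failure.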
Automating Cloud Network Optimization and Evolution
Abstract
With the ever-increasing number and complexity of applications deployed in data centers, the underlying network infrastructure can no longer sustain such a trend and exhibits several problems, such as resource fragmentation and low bisection bandwidth. In pursuit of a real-world applicable cloud network (CN) optimization approach that continuously maintains balanced network performance with high cost effectiveness, we design a topology independent resource allocation and optimization approach, NetDEO. Based on a swarm intelligence optimization model, NetDEO improves the scalability of the CN by relocating virtual machines (VMs) and matching resource demand and availability. NetDEO is capable of (1) incrementally optimizing an existing VM placement in a data center; (2) deriving optimal deployment plans for newly added VMs; and (3) providing hardware upgrade suggestions, and allowing the CN to evolve as the workload changes over time.
Datacast: A Scalable and Efficient Reliable Group Data Delivery Service for Data Centers
*A Unified Unicast and Multicast Routing and Forwarding Algorithm for Software-Defined Datacenter Networks
A scalability problem associated with software-defined datacenters, for which the unicast/multicast routing-state problem is proven to be NP-complete.
Abstract
The majority of these propositions tried to reduce the multicast state by using Bloom filters (BF) [7], which have indeed attracted much attention in the research community of both computer algorithms and networking recently. We discuss a new approach, called Scalar-pair and Vectors Routing and Forwarding (SVRF), in this article. SVRF uses an efficient way to construct and query group memberships based on prime number theory, such as the Chinese Remainder Theorem (CRT) [8]. The proposed scheme encodes the entire set of group addresses that traverse a router/switch into a scalar-pair. An SDN control plane is introduced to inform each SDN switch of its scalar-pair, which is used during unicast/multicast packet delivery to determine the corresponding outgoing port(s) (a.k.a. vector) of each flow in the SDN routers/switches.
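As a toy illustration of the CRT idea behind SVRF (my own sketch; the keys and port bitmaps are made up and this is not the paper's actual encoding), each group gets a pairwise-coprime key and the switch stores one scalar whose residues are the per-group outgoing-port bitmaps:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt_scalar(residues):
    """Combine {modulus: residue} pairs into one scalar X with X % m == r for all pairs."""
    X, M = 0, 1
    for m, r in residues.items():
        g, inv, _ = extended_gcd(M, m)          # inv = M^{-1} mod m
        assert g == 1, "group keys must be pairwise coprime"
        X = (X + ((r - X) * inv % m) * M) % (M * m)
        M *= m
    return X

# Hypothetical group keys (pairwise coprime) -> outgoing-port bitmaps.
groups = {7: 0b0101, 11: 0b0010, 13: 0b1001}
scalar = crt_scalar(groups)

# Forwarding lookup at the switch: a single modulo recovers the port bitmap.
for key, ports in groups.items():
    assert scalar % key == ports
print("switch scalar:", scalar)
```

The point, as I understand the scheme, is that the per-switch state becomes a single scalar (plus group keys) rather than one forwarding entry per group.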
Data Centers as Software Defined Networks: Traffic Redundancy Elimination with Wireless Cards at Routers
Abstract
To design Traffic Redundancy Elimination (TRE) mechanisms for DCNs, the first challenge is how to handle the information distributed among servers and routers. A centralized control logic is much easier to design than in previous work, but conflicts with the distributed environment. A similar problem arises in how to enable the network control logic to be well designed and operated on distributed control plane states in SDN (Software Defined Networks), where the 'logically centralized' approach has been applied and gains the most benefit. Inspired by this, we propose a solution to achieve 'logically centralized' control over the distributed information among servers and routers in DCNs by introducing wireless communication. Our work confirms the trade-off between the 'logically centralized' control optimality and the overhead of handling distributed information in DCNs, which is also exploited in recent work [23] in SDN.
Existing works [11,24–26] have introduced how to overcome the wireless limitations such as the link blockage, interference and reliability when the wireless technologies are applied into DCNs. We propose to add wireless network cards not only to each router but also to each server and therefore enable wireless communication among servers and routers. Since all the devices in DCNs are concentrated in a relatively small place, a wireless transmission from a server or router will inform other servers and routers about data units memorized and corresponding delivery counts, which is used for making better decisions afterward. Hence the cross-source redundancy can be identified and addressed.
*Scalable Multi-Class Traffic Management in Data Center Backbone Networks
Keywords: Traffic management
Abstract
Large online service providers (OSPs) often build private backbone networks to interconnect data centers in multiple locations. These data centers house numerous applications that produce multiple classes of traffic with diverse performance objectives. Applications in the same class may also have differences in relative importance to the OSP's core business. By controlling both the hosts and the routers, an OSP can perform both application rate-control and network routing.
Semi-Centralized Scalable Design Choices
In this paper, we explore scalable architectures that jointly optimize rate control, routing, and link scheduling. On one hand, our design choices are motivated by the advantages of centralized TE and its industry adoption. On the other hand, since we also wish to perform rate control, our approach distributes information and computation across multiple tiers of an optimization machinery. To this end, we examine two semi-centralized designs that both use a small number of management entities to optimally allocate resources, but differ in their degree of distributedness. The first design has two tiers with the management entities at the backbone and class levels, whereas the second design has three tiers with an additional management entity at the data center level. The entities exchange information with each other to optimally subdivide network bandwidth between applications of different classes. Using optimization theory, we show that both our designs provably maximize the aggregate utility over all applications and traffic classes. We summarize our two designs below:
A 2-Tier Design: The 2-tier design has a single management entity called link coordinator (LC) on the first tier, and multiple management entities called class allocators (CAs) on the second tier. The LC computes class-level aggregate bandwidth for every link in the backbone and sends it to the CAs. Each CA then subdivides this bandwidth between different applications in its own class. Using primal decomposition, we show that this 2-tier design not only can compute optimal routing paths, but also map application-level sending rates on these paths. Simulations on realistic backbone topologies show that the system converges quickly (within a few tens of iterations for suitable choices of step sizes), though at the expense of a moderate amount of message-passing between the management entities.
A 3-Tier Design: The 3-tier design has an additional type of management entity called data center allocator (DA) on the third tier. The LC, as before, computes class-level aggregate bandwidth and sends it to the CAs. Each CA, however, now subdivides this bandwidth not between different applications like in the 2-tier design, but across multiple data centers hosting traffic in its own class. The task of computing application-level sending rates is delegated to the DAs. We use a two-level primal decomposition to prove optimality of this 3-tier design. Simulations show that the system exchanges fewer messages compared to the 2-tier design, but at the expense of slightly slower convergence.
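Reading between the lines, the problem both designs decompose is a standard network utility maximization; a compressed form in my own notation (not the authors' exact model):

```latex
\max_{x \ge 0} \; \sum_{c \in \mathcal{C}} \sum_{a \in \mathcal{A}_c} U_{c,a}(x_{c,a})
\quad \text{s.t.} \quad
\sum_{a \in \mathcal{A}_c} R_{a,\ell}\, x_{c,a} \le y_{c,\ell} \;\; \forall c,\ell,
\qquad
\sum_{c \in \mathcal{C}} y_{c,\ell} \le C_\ell \;\; \forall \ell .
```

Primal decomposition then lets the LC pick the per-class link shares $y_{c,\ell}$ at the top tier while each CA solves its class-level subproblem in $x_{c,a}$ given those shares; the 3-tier variant pushes the application-level rates one level further down to the DAs.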
****A Framework for Cooperative Resource Management in Mobile Cloud Computing
Mobile cloud, resource allocation
Abstract
Mobile cloud computing is an emerging technology to improve the quality of mobile services. In this paper, we consider the resource (i.e., radio and computing resources) sharing problem to support mobile applications in a mobile cloud computing environment. In such an environment, mobile cloud service providers can cooperate (i.e., form a coalition) to create a resource pool to share their own resources with each other. As a result, the resources can be better utilized and the revenue of the mobile cloud service providers can be increased. To maximize the benefit of the mobile cloud service providers, we propose a framework for resource allocation to the mobile applications, and revenue management and cooperation formation among service providers. For resource allocation to the mobile applications, we formulate and solve optimization models to obtain the optimal number of application instances that can be supported to maximize the revenue of the service providers while meeting the resource requirements of the mobile applications. For sharing the revenue generated from the resource pool (i.e., revenue management) among the cooperative mobile cloud service providers in a coalition, we apply the concepts of core and Shapley value from cooperative game theory as a solution. Based on the revenue shares, the mobile cloud service providers can decide whether to cooperate and share the resources in the resource pool or not. Also, the provider can optimize the decision on the amount of resources to contribute to the resource pool
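To see what the Shapley-value step amounts to, here is a minimal sketch for a toy three-provider coalition game (the revenue numbers and provider names are invented for illustration and are not from the paper):

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value: average marginal contribution over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

# Hypothetical pooled revenue of each coalition of providers A, B, C.
revenue = {frozenset(): 0, frozenset("A"): 4, frozenset("B"): 3, frozenset("C"): 2,
           frozenset("AB"): 9, frozenset("AC"): 7, frozenset("BC"): 6, frozenset("ABC"): 13}

shares = shapley(["A", "B", "C"], lambda s: revenue[frozenset(s)])
print(shares)  # revenue shares; by efficiency they sum to v({A,B,C}) = 13
```

A provider would then compare its share against what it earns alone to decide whether joining the resource pool is worthwhile, which is the decision the abstract describes.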
*Dynamic Request Splitting for Interactive Cloud Applications
Abstract
Deploying interactive applications in the cloud is a challenge due to the high variability in performance of cloud services. In this paper, we present Dealer - a system that helps geo-distributed, interactive and multi-tier applications meet their stringent requirements on response time despite such variability. Our approach is motivated by the fact that, at any time, only a small number of application components of large multi-tier applications experience poor performance. Dealer continually monitors the performance of individual components and communication latencies between them to build a global view of the application. In serving any given request, Dealer seeks to minimize user response times by picking the best combination of replicas (potentially located across different data centers).
****Min Flow Rate Maximization for Software Defined Radio Access Networks
Software-defined RAN, multi-layer
Abstract
We consider a cloud-based heterogeneous network of base stations (BSs) connected via a backhaul network of routers and wired/wireless links with limited capacity. The optimal provision of such networks requires proper resource allocation across the radio access links in conjunction with appropriate traffic engineering within the backhaul network. In this paper, we propose an efficient algorithm for joint resource allocation across the wireless links and flow control over the entire network. The proposed algorithm, which maximizes the min-rate among all the transmitted commodities, is based on a decomposition approach that leverages both the alternating direction method of multipliers (ADMM) and the weighted-MMSE (WMMSE) algorithm. We show that this algorithm is easily parallelizable and converges globally to a stationary solution of the joint optimization problem. The proposed algorithm can also be extended to networks with multi-antenna nodes and other utility functions
With the advent of cloud computing technologies and the mass deployment of low-power base stations (BSs), the cellular radio access network (RAN) has undergone a major structural change. The traditional high-powered single-hop access mode between a serving BS and its users is being replaced by a mesh network consisting of a large number of wireless access points connected by backhaul links as well as network routers [2]. New concepts such as the heterogeneous network (HetNet) or software-defined air interface that capture these changes have been proposed and studied recently (see [3], [4] and references therein). Such a cloud-based, software-defined RAN (SD-RAN) architecture has been envisioned as a future 5G standard, and is expected to achieve a 1000x performance improvement over current 4G technology within the next ten years [4].
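As I read it, the core problem is a max-min rate maximization of roughly this shape (simplified, my notation, not the authors' exact formulation):

```latex
\max_{\{r_k\},\,\{f_k\},\,w} \;\; \min_{k} \; r_k
\quad \text{s.t.} \quad
\sum_{k} f_{k,\ell} \le c_\ell(w) \;\; \forall \ell,
\qquad
f_k \ \text{routes} \ r_k \ \text{units of commodity } k \ \text{through the backhaul},
```

where the wireless-link capacities $c_\ell(w)$ depend on the physical-layer variables $w$ (handled by the WMMSE step) and the flow/rate variables are coordinated across the network via ADMM.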
***Hierarchical Auction Mechanisms for Network Resource Allocation
Multi layer, resource allocation
Abstract
Motivated by allocation of bandwidth, wireless spectrum and cloud computing services in secondary network markets, we introduce a hierarchical auction model for network resource allocation. A Tier 1 provider owns a homogeneous network resource and holds an auction to allocate this resource among Tier 2 operators, who in turn allocate the acquired resource among Tier 3 entities. The Tier 2 operators play the role of middlemen, since their utilities for the resource depend on the revenues gained from resale. We first consider static hierarchical auction mechanisms for indivisible resources. We study a class of mechanisms wherein each sub-mechanism is either a first-price or VCG auction, and show that incentive compatibility and efficiency cannot be simultaneously achieved.
***Datacenter Applications in Virtualized Networks: A Cross-Layer Performance Study
Network virtualization, performance evaluation
Abstract
Datacenter-based Cloud computing has induced new disruptive trends in networking, key among which is network virtualization. Software-Defined Networking overlays aim to improve the efficiency of the next generation multitenant datacenters. While early overlay prototypes are already available, they focus mainly on core functionality, with little being known yet about their impact on the system level performance. Using query completion time as our primary performance metric, we evaluate the overlay network impact on two representative datacenter workloads, Partition/Aggregate and 3-Tier. We measure how much performance is traded for overlay's benefits in manageability, security and policing. Finally, we aim to assist the datacenter architects by providing a detailed evaluation of the key overlay choices, all made possible by our accurate cross-layer hybrid/mesoscale simulation platform.
IEEE/ACM Transactions on Networking
***Guaranteeing Heterogeneous Bandwidth Demand in Multitenant Data Center Networks
Tenant bandwidth allocation, VM placement
Abstract
The ability to provide guaranteed network bandwidth for tenants is essential to the prosperity of cloud computing platforms, as it is a critical step for offering predictable performance to applications. Despite its importance, it is still an open problem for efficient network bandwidth sharing in a multitenant environment, especially when applications have diverse bandwidth requirements. More precisely, it is not only that different tenants have distinct demands, but also that one tenant may want to assign bandwidth differently across her virtual machines (VMs), i.e., the heterogeneous bandwidth requirements. In this paper, we tackle the problem of VM allocation with bandwidth guarantee in multitenant data center networks. We first propose an online VM allocation algorithm that improves on the accuracy of the existing work. Next, we develop a VM allocation algorithm under heterogeneous bandwidth demands. We conduct extensive simulations to demonstrate the efficiency of our method
***CloudNet: Dynamic Pooling of Cloud Resources by Live WAN Migration of Virtual Machines
VM migration
Abstract
Virtualization technology and the ease with which virtual machines (VMs) can be migrated within the LAN have changed the scope of resource management from allocating resources on a single server to manipulating pools of resources within a data center. We expect WAN migration of virtual machines to likewise transform the scope of provisioning resources from a single data center to multiple data centers spread across the country or around the world. In this paper, we present the CloudNet architecture consisting of cloud computing platforms linked with a virtual private network (VPN)-based network infrastructure to provide seamless and secure connectivity between enterprise and cloud data center sites. To realize our vision of efficiently pooling geographically distributed data center resources, CloudNet provides optimized support for live WAN migration of virtual machines. Specifically, we present a set of optimizations that minimize the cost of transferring storage and virtual machine memory during migrations over low bandwidth and high-latency Internet links. We evaluate our system on an operational cloud platform distributed across the continental US. During simultaneous migrations of four VMs between data centers in Texas and Illinois, CloudNet's optimizations reduce memory migration time by 65% and lower bandwidth consumption for the storage and memory transfer by 19 GB, a 50% reduction.
***Fair Network Bandwidth Allocation in IaaS Datacenters via a Cooperative Game Approach
Tenant bandwidth allocation
Abstract
With wide application of virtualization technology, tenants are able to access isolated cloud services by renting the shared resources in Infrastructure-as-a-Service (IaaS) datacenters. Unlike resources such as CPU and memory, datacenter network, which relies on traditional transport-layer protocols, suffers unfairness due to a lack of virtual machine (VM)-level bandwidth guarantees. In this paper, we model the datacenter bandwidth allocation as a cooperative game, toward VM-based fairness across the datacenter with two main objectives: 1) guarantee bandwidth for VMs based on their base bandwidth requirements, and 2) share residual bandwidth in proportion to the weights of VMs. Through a bargaining game approach, we propose a bandwidth allocation algorithm, Falloc, to achieve the asymmetric Nash bargaining solution (NBS) in datacenter networks, which exactly meets our objectives. The cooperative structure of the algorithm is exploited to develop an online algorithm for practical real-world implementation. We validate Falloc with experiments under diverse scenarios and show that by adapting to different network requirements of VMs, Falloc can achieve fairness among VMs and balance the tradeoff between bandwidth guarantee and proportional bandwidth sharing. Our large-scale trace-driven simulations verify that Falloc achieves high utilization while maintaining fairness among VMs in datacenters.
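The asymmetric NBS mentioned above maximizes a weighted Nash product; generically (my notation, not the exact Falloc model):

```latex
\max_{x \in \mathcal{X}} \;\; \prod_{i} \bigl(x_i - b_i\bigr)^{w_i}
\qquad \text{with } x_i \ge b_i \ \forall i,
```

where $b_i$ is VM $i$'s base bandwidth guarantee (the disagreement point), $w_i$ its weight, and $\mathcal{X}$ the set of allocations feasible under the link capacities; this is what yields a guaranteed base bandwidth plus weight-proportional sharing of the residual.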
IEEE ICC
(****) SoftEPC — Dynamic instantiation of mobile core network entities for efficient resource utilization
NFV
Abstract
One common practice adopted by many Mobile Network Operators (MNOs) in order to cater to increasing traffic demand is the over-provisioning of network and processing resources. However, this approach results in increased CAPEX and OPEX and in view of future load forecasts this approach is no longer considered feasible or economical. In this paper we address this specific issue and propose the concept of softEPC, which leverages recent advances in cloud technology concepts such as Infrastructure as a Service (IaaS) and virtualization techniques in the realm of mobile networks. A softEPC is considered as a virtual network of Evolved Packet Core (EPC) functions over a physical transport network topology. This will enable the on-demand and load-aware dynamic instantiation of network functions and services at appropriate locations in response to the actual traffic demand. The main objective of this approach is to increase the utilization of network resources by flexibly and dynamically placing the network functions/services where most appropriate to provide optimum service and where resources are available to increase the number of services provided to mobile users. The benefits and gains of this approach will be demonstrated by means of simulation results.
****Resilience options for provisioning anycast cloud services with virtual optical networks
Tenant bandwidth allocation, Virtual network partitioning, Service resilience
Abstract
This boils down to providing an extra level of abstraction, such that the same underlying physical infrastructure can be used by different entities, each in a virtually isolated environment (e.g., a virtual machine in a data center). Similarly, physical networking infrastructure (i.e., fibers and switching equipment) can thus be shared by various virtual network operators (VNOs) [2]. The logical partition under the control of the VNO amounts to a virtual network topology, denoted as virtual network (VNet), operated in isolation from other VNOs. The physical network and data center infrastructures are then managed by typically different entities, the physical infrastructure providers (PIPs). (In practice VNOs and PIPs could indeed be different companies.) In this paper, we focus on the planning of the core network, in terms of the backbone network (e.g., the wavelength routing and OXCs) as well as allocation of server capacity at data centers. Both the optical network and the data centers are assumed to be virtualized, i.e., they will be partitioned into VNets: we consider a physical infrastructure (offered by a PIP) that will be shared to carry services offered by multiple VNOs. In particular, we will study how to resiliently provision VNets for cloud services: requests to be served by a VNO need to be allocated server capacity at a certain data center (DC) – whose physical location, i.e., mapping to a particular PIP's infrastructure, can be decided by the VNO – and obviously network connectivity from the VNO's customer to their assigned DC(s). We focus on a planning problem addressing multiple VNets simultaneously. In this paper, we propose new models for end-to-end cloud services with different quality in terms of recovery times and availabilities, under both network and DC failures. Our contributions are:
- Compared to earlier work by Barla et al. [3]–[5] (see Section II), our resilience approach explicitly includes the required network connectivity and associated bandwidth between a primary and backup data center.
- We introduce a comprehensive qualitative overview of the various resilience options in choosing the aforementioned synchronization path (beyond the single simple choice adopted in our initial short paper [6] on this topic).
- We provide full model details for four resilience approaches (not covered in [6]), and a large-scale case study (beyond the small problem instances covered by, e.g., Barla et al. [3]) for two of them on a US topology.
****Time-varying resilient virtual network mapping for multi-location cloud data centers
Virtual network mapping, multi-location data center
Abstract
Given the underlying anycast routing principle, the network operator has some freedom to which specific DC to allocate these resources. In this paper, we solve a resilient virtual network mapping problem that optimally decides on the mapping of both network and multi-location data center resources resiliently using anycast routing, considering time-varying traffic conditions. In terms of resilience, we consider the so-called VNO-resilience scheme, where resilience is provided in the virtual network layer. To minimize physical resource capacity requirements, we allow reuse of both network and DC resources. The failures we protect against include both network and DC resource failures: we hence allocate backup DC resources, and also account for synchronization between primary and backup DC. As optimization criteria, we not only consider resource usage minimization, but also aim to limit virtual network reconfigurations from one time period to the next. We propose a scalable column generation approach to solve the dynamic resilient virtual network mapping problem, and demonstrate it in a case study on a nationwide US backbone network.
****Live migration of virtual network functions in cloud-based edge networks
NFV migration, multi-location
Problem:
Previous research:
Contribution:
Abstract
Emerging network paradigms, such as Software Defined Networking and Network Function Virtualization, represent the key enablers for efficient and cost-effective deployment and management of cloud-based edge networks, where a number of cooperating virtual machines can implement the tasks traditionally performed by expensive and disruptive network middleboxes. A key feature in such a scenario is the capability of live migrating a group of correlated virtual machines as a single entity representing a customer's profile. Multiple-VM live migration is a topic that has not been thoroughly studied in the literature. This manuscript presents a relatively simple model that can be used to derive some performance indicators, such as the whole service downtime and the total migration time, and allows the cloud-based edge network to be properly designed. The model is used to compare some simple scheduling strategies for VM migration and to provide guidelines for such an implementation.
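For context, the textbook iterative pre-copy estimates of the two indicators named here (total migration time and downtime) are the following; this is the standard model, not necessarily the exact one used in the paper:

```latex
\rho = \frac{D}{B}, \qquad
T_{\mathrm{mig}} \approx \frac{M}{B} \sum_{i=0}^{n} \rho^{\,i}
             = \frac{M}{B}\,\frac{1-\rho^{\,n+1}}{1-\rho}, \qquad
T_{\mathrm{down}} \approx \frac{M}{B}\,\rho^{\,n},
```

with VM memory size $M$, migration bandwidth $B$, page-dirtying rate $D$ and $n$ pre-copy rounds; migrating a correlated group of VMs then becomes a scheduling problem over several such transfers sharing the edge-network links, which is where the compared scheduling strategies come in.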
***An OpenFlow controller for cloud data centers: Experimental setup and validation
Abstract
Nowadays, Data Centers are the primary infrastructures for Cloud Computing service provisioning. In this challenging scenario, Data Centers have to deal with highly dynamic workloads, so they should adopt not only advanced software virtualization solutions but also network control capabilities. Elastic, agile network connectivity control should be integrated with the computing management infrastructure to achieve its goals. This paper presents an OpenFlow-based controller, called OFVN, that enables novel virtualization-aware networking functions in Cloud DCs. Virtual Machine allocations are performed considering the availability of computational resources on physical servers, but also by selecting a forwarding path for VM data flows based on network link utilization. The focus of the paper is on the implementation and assessment of the controller in an experimental testbed. The behaviour of different allocation strategies is also investigated.
***Virtual topology mapping in elastic optical networks
Virtual network mapping, backbone
Abstract
Virtualization improves the efficiency of networks by allowing multiple virtual networks to share a single physical network's resources. Next-generation optical transport networks are expected to support virtualization by accommodating multiple virtual networks with different topologies and bit rate requirements. Meanwhile, Optical Orthogonal Frequency-Division Multiplexing (OOFDM) is emerging as a viable technique for efficiently using the optical fiber's bandwidth in an elastic manner. OOFDM partitions the fiber's bandwidth into hundreds or even thousands of OFDM subcarriers that may be allocated to services. In this paper, we consider an OOFDM-based optical network and formulate a virtual network mapping problem for both static and dynamic traffic. This problem has several natural applications, such as e-Science, Grid, and cloud computing. The objective for static traffic is to maximize the subcarrier utilization, while minimizing the blocking ratio is the aim for dynamic traffic. Two heuristics are proposed and compared. Simulation results are presented to demonstrate the effectiveness of the proposed approaches.
*****Fostering rapid, cross-domain service innovation in operator networks through Service Provider SDN
SDN, NFV network architecture, multi-tier controller, Service
Abstract
Software defined networking (SDN) and Network Function Virtualization (NFV) are new approaches to next generation network architecture for operator networks that have received much discussion in the research literature and in new forums organized to standardize their interfaces. While these approaches to network architecture are important, they only cover part of the problem. A fundamental property required from any next generation architecture is support for rapid, cross-domain service innovation. In this paper, we discuss Service Provider SDN (SP-SDN), an architectural approach to rapid service innovation based on exposure of functionality for cross-domain control at the service layer. The functionality is exposed through Web-style interfaces that feature abstractions crafted by suppressing detail irrelevant for the majority of the created services. The control spans mobile, fixed and cloud operator networks, allowing the rapid and flexible provisioning of services across all three domains. We compare this approach to SDN and NFV, and find that SP-SDN complements rather than competes with the two. We present two examples of prototype systems built with SP-SDN.
****Enhancing openflow with Media Independent Management capabilities
SDN Openflow in RAN
Abstract
The evergrowing availability of access technologies, on-line services and connected devices has been motivating the development of novel networking solutions, fostering new deployments of cloud computing, Network Function Virtualization and even mobile accesses. One of the core aspects supporting these innovative solutions is the Software Defined Networking (SDN) concept, which brings added configuration and management capabilities to the network fabric, aiding the support and introduction of new technologies and protocols. However, the base operations of SDN do not consider link conditions when employing their controlling mechanisms and mostly target fixed core links. These link conditions are important to optimizations involving wireless access links, which are paramount in today's plethora of wireless and mobile accesses. This paper enhances SDN mechanisms with Media Independent Management capabilities, deployable in both wired and wireless environments, optimizing link connectivity establishment and offering new perspectives to network developers for building new capabilities in the networks. The resulting framework was implemented over open-source software in a physical testbed, with results showing the benefits that this solution brings in terms of path optimization, featuring different monitoring schemes and allowing energy efficient scenarios.
****Software-defined infrastructure and the Future Central Office
Architecture for application platforms, Edge, Management
Abstract
This paper discusses the role of virtualization and software-defined infrastructure (SDI) in the design of future application platforms, and in particular the Future Central Office (CO). A multi-tier computing cloud is presented in which resources in the Smart Edge of the network play a crucial role in the delivery of low-latency and data-intensive applications. Resources in the Smart Edge are virtualized and managed using cloud computing principles, but these resources are more diverse than in conventional data centers, including programmable hardware, GPUs, etc. We propose an architecture for future application platforms, and we describe the SAVI Testbed (TB) design for the Smart Edge. The design features a novel Software-Defined Infrastructure manager that operates on top of OpenStack and OpenFlow. We conclude with a discussion of the implications of the Smart Edge design on the Future CO.
****A hypervisor for infrastructure-enabled sensing Clouds
IMA project.
Abstract
The lack of support and the shortcomings of Cloud computing in relation to pervasive applications can be addressed through the Sensing and Actuation as a Service (SAaaS) paradigm. In SAaaS, sensors and actuators, from both mobile devices and sensor networks, can be discovered, aggregated and elastically provided as a service according to the Cloud provisioning model. Nevertheless, managing a large set of sensing and actuation resources, characterized by volatility and heterogeneity, raises the need for specific mechanisms and strategies. In this paper we focus on management, abstraction and virtualization of sensing resources. More specifically, we describe the lowest-level module of the SAaaS architecture, the hypervisor, which takes care of communication with devices and orchestrates their resources. The hypervisor operates according to policies and strategies coming from higher layers, and includes customization facilities that ease the integration of heterogeneous devices.
(****) A cloud-based content replication framework over multi-domain environments
Content replication, cloud infrastructure
Problem: end-to-end content replication problem over cloud-based multi-technology infrastructures.
Previous research: the classical model where every network node is a potential replica carrier and the link weights represent hops/delay
Contribution: we examine replication schemes for content that a) is requested by customers belonging in different virtual networks and b) depending on the requester there is different impact on the system operational cost. We examine both centralized and distributed content replication management policies and we evaluate their performance through extended simulations, by means of total cost, the number of object replacements and the number of iterations required
Abstract
Cloud service provisioning on top of virtual infrastructures is of major importance in modern ICT, since it is directly correlated to the way business models are designed and revenue is generated from the cloud service providers. In this work we examine an end-to-end content replication problem over cloud-based multi-technology infrastructures. We extend the classical model where every network node is a potential replica carrier and the link weights represent hops/delay and we examine replication schemes for content that a) is requested by customers belonging in different virtual networks and b) depending on the requester there is different impact on the system operational cost. We examine both centralized and distributed content replication management policies and we evaluate their performance through extended simulations, by means of total cost, the number of object replacements and the number of iterations required
****Dynamic correlative VM placement for quality-assured cloud service
Problem:
- VM Placement in Data center network for time-varying resource demands.
- Key problem is to increase the time-average utilization and decrease the overload ratio
Previous research: did not consider quality assurance and statistical multiplexing methods, which can greatly improve the effectiveness of VM placement
Contribution: a novel quality-assured VM placement scheme that dynamically places VMs to better multiplex time-varying resource demands. We firstly apply AutoRegressive Integrated Moving Average (ARIMA) and Generalized AutoRegressive Conditional Heteroskedasticity (GARCH model) to forecast the trend and volatility of the future demand, and then develop a Modern Portfolio Theory (MPT)-based method to enlarge DCN utilization and hedge the risk of server overloads. Extensive simulations and detailed analysis are conducted to validate the efficiency of our proposed scheme, which outperforms the previous works greatly.
Abstract
How to increase the utilization of data center networks (DCN) is a critical problem to ensure the quality of cloud services. Previous researches showed that the key is to increase the time-average utilization and decrease the overload ratio, and proposed many efficient virtual machine (VM) placement algorithms to achieve higher utilization. However, most of those works did not consider the quality assurance and statistical multiplexing methods, which can greatly improve the effectiveness of VM placement. In this paper, we propose a novel quality-assured VM placement scheme that dynamically places VMs to better multiplex time-varying resource demands. We firstly apply AutoRegressive Integrated Moving Average (ARIMA) and Generalized AutoRegressive Conditional Heteroskedasticity (GARCH model) to forecast the trend and volatility of the future demand, and then develop a Modern Portfolio Theory (MPT)-based method to enlarge DCN utilization and hedge the risk of server overloads. Extensive simulations and detailed analysis are conducted to validate the efficiency of our proposed scheme, which outperforms the previous works greatly.
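The MPT analogy treats each VM's forecast demand as an asset and a server's VM set as a portfolio; a sketch of that analogy in my own notation (not the paper's formulation):

```latex
\min_{\text{placement}} \; \sigma_s^2 = \sum_{u,v \in s} \operatorname{Cov}(d_u, d_v)
\quad \text{s.t.} \quad
\sum_{v \in s} \mathbb{E}[d_v] + \kappa\,\sigma_s \le C_s \quad \forall s,
```

where $d_v$ is VM $v$'s demand (trend forecast by ARIMA, volatility by GARCH), $C_s$ the server capacity, and $\kappa$ a knob for how much overload risk is hedged; co-locating VMs with weakly or negatively correlated demands lowers $\sigma_s$ and so raises the achievable average utilization.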
**Virtual machines migration in a cloud data center scenario: An experimental analysis
VM Migration
Abstract
Server virtualization enables dynamic workload management in data centers. However, some aspects of virtualization software technologies, like Virtual Machines (VMs) migration or communications between VMs and storage resources, can lead to huge and unbalanced utilization of the intra-data center network. In this paper, we investigate such issues in an experimental testbed, focusing on the measurement of the traffic overhead due to VMs migration in different operating conditions. Our measurement campaigns highlight that performance is strongly affected by several factors, such as VMs placement, VMs active memory, usage of Jumbo frames instead of Ethernet frames. We also analyze and compare the traffic exchanged between hosts on which VMs are placed, when two different storage systems (i.e., NFS and iSCSI) are used.
****Network Virtualization for Future Mobile Networks: General Architecture and Applications
Virtual network architecture, multi-domain controller
Abstract
Based on the expected future requirements this paper describes a general network architecture enabled by network virtualization. This architecture consists of three major building blocks which we call virtualized physical resources, virtual resource manager and virtual network controller. Such an architecture will facilitate network sharing deployments, which might exist in the form of network consolidation or service specific networks. Furthermore the ability of our framework to combine control over various domains allows a resource optimization across IT and network infrastructure, multiple network layers and heterogeneous
****Dynamic, software-defined service provider network infrastructure and cloud drivers for SDN adoption
SDN controller
Abstract
Network infrastructures for modern telecommunications and cloud service providers face a unique series of challenges. These services require networks which can respond dynamically to changes in workload or traffic profiles, and automate cloud network provisioning and commissioning. This enables cost-effective service offerings and also provides new revenue stream opportunities. This paper describes results of a software defined network (SDN) testbed which combines data center switching, long-distance optical networking, and other functions under a common network controller. We demonstrate rapid reprovisioning and reuse of network resources (dynamic wavelength assignment) in response to changing application requirements, resource pooling, and other benefits of this approach.
IEEE Search: SDN Controller
****Application-layer traffic optimization in software-defined mobile networks: A proof-of-concept implementation
Abstract
Integration of application layer traffic optimization (ALTO) in software-defined mobile networks could have several benefits for orchestration of endpoint selection for distributed services. ALTO can provide guidance, e.g., in the redirection of end users to appropriate in-network cache, content distribution network server or virtual network function instance during service chaining. ALTO service provides appropriate level of abstraction of network and cost maps, enforcing the policies of mobile network operator and optionally other actors, but keeping the privacy of network topology information. SDN controllers can enforce flow redirection and can dynamically provide abstracted network and cost maps to ALTO server. In this paper we present the operation of ALTO in software-defined networks from the point of view of mobile network operators, and describe our proof-of-concept implementation.
**Realizing the Quality of Service (QoS) in Software-Defined Networking (SDN) based Cloud infrastructure
Abstract
For example, an administrator might select network bandwidth, path latency or other criteria as defining the optimal communication path for a specific data flow. We have implemented a system called QoS Controller, Q-Ctrl, for programmatically attaining users' required QoS constraints in an SDN-based Cloud infrastructure. The Q-Ctrl system is able to execute in a virtual overlay network via Open vSwitch (OVS), in a physical network infrastructure equipped with an SDN Controller, or in a simulated SDN environment via Mininet. In this paper, we detail i) the design and implementation of the Q-Ctrl system, ii) how the network QoS for virtual machines is maintained through Q-Ctrl, and iii) a case study on how a video streaming application leverages the Q-Ctrl system to achieve QoS in an SDN-based Cloud infrastructure.
Performance Evaluation of a Scalable Software-Defined Networking Deployment
Abstract
SDN provides a level of flexibility that can accommodate network programming and management at scale. In this work we present recent approaches proposed to address the scalability issue of SDN deployments. We particularly select a hierarchical approach for our performance evaluation study. A mathematical framework based on network calculus is presented, and the performance of the selected scalable SDN deployment is reported in terms of an upper bound on event processing and the buffer sizing of the root SDN controller.
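For reference, the standard network-calculus bounds such an analysis typically builds on (not the paper's specific derivation): with arrival curve $\alpha$ and service curve $\beta$ at the root controller,

```latex
\text{backlog} \;\le\; \sup_{s \ge 0} \bigl(\alpha(s) - \beta(s)\bigr),
\qquad
\text{delay} \;\le\; \inf\bigl\{\, d \ge 0 : \alpha(s) \le \beta(s + d) \ \ \forall s \ge 0 \,\bigr\},
```

which map onto the buffer sizing and the event-processing upper bound mentioned in the abstract.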
****Traffic engineering in software defined networks
SDN controller, traffic engineering
Abstract
The SDN architecture is expected to result in better network capacity utilization and improved delay and loss performance. The contribution of this paper is on the effective use of SDNs for traffic engineering, especially when SDNs are incrementally introduced into an existing network. In particular, we show how to leverage the centralized controller to get significant improvements in network utilization as well as to reduce packet losses and delays. We show that these improvements are possible even in cases where there is only a partial deployment of SDN capability in a network. We formulate the SDN controller's optimization problem for traffic engineering with partial deployment and develop fast Fully Polynomial Time Approximation Schemes (FPTAS) for solving these problems. We show, by both analysis and ns-2 simulations, the performance gains that are achievable using these algorithms even with an incrementally deployed SDN.
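A compressed way to state the controller's problem with partial deployment (my own simplification, not the paper's exact formulation): only traffic that reaches an SDN node can be re-split, while legacy routers keep their existing shortest-path behaviour,

```latex
\min_{\{f\}} \; \max_{\ell} \; \frac{\sum_{k} f_{k,\ell}}{c_\ell}
\quad \text{s.t.} \quad
f_{k,\ell} \ \text{follows legacy routing at non-SDN nodes and is freely split at SDN nodes,}
```

i.e. minimize the maximum link utilization using only the degrees of freedom the incrementally deployed SDN nodes expose; an FPTAS then solves this to within a $(1+\varepsilon)$ factor in time polynomial in the input size and $1/\varepsilon$.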
****SDN-based architecture and procedures for 5G networks
SDN, NFV controller
Abstract
This article presents a plastic architecture for the advanced 5G infrastructure based on the latest advances in SDN, NFV and edge computing. The novel approach consists of three levels of control, i.e. Device, Edge and Orchestration Controllers, fully decoupled from the user plane and backwards compatible with current and future 3GPP releases. The proposed control layers implement unified security, connection, mobility and routing management for 5G networks. The new concept of SDN-based connectivity between virtualized network functions (applications) enables multidimensional carrier-grade communication paths without the utilization of tunneling protocols. Our architectural solution dramatically reduces the end-to-end latency for mission-critical types of traffic, while guaranteeing a large degree of freedom, dependability and reliability, which are the most important and stringent requirements of 5G.
****Software defined networking for distributed mobility management
Mobility management, SDN
Abstract
With the mobile network core evolving towards an unlayered and decentralized architecture, distributed mobility management appears to be more compatible and efficient with the flattened networks. In this paper, we propose a new approach to realize the distributed mobility management using the software defined networking techniques instead of the existing mobility management protocols. The mobility management functions are implemented with the help of distributed controllers. The controllers will update the involved forwarding tables directly in case of handover, which realizes the route optimization inherently. The test results show that the proposed SDN-aided approach is an efficient mechanism for distributed mobility management.
***Applying software-defined networking to the telecom domain
Abstract
The concept of Software-Defined Networking (SDN) has been successfully applied to data centers and campus networks but it has had little impact in the fixed wireline and mobile telecom domain. Although telecom networks demand fine-granular flow definition, which is one of SDN's principal strengths, the scale of these networks and their legacy infrastructure constraints considerably limit the applicability of SDN principles. Instead, telecom networks resort to tunneling solutions using a plethora of specialized gateway nodes, which create high operation cost and single points of failure. We propose extending the concept of SDN so that it can tackle the challenges of the telecom domain. We see vertical forwarding, i.e. programmable en- and decapsulation operations on top of IP, as one of the fundamental features to be integrated into SDN. We discuss how vertical forwarding enables flow-based policy enforcement, mobility and security by replacing specialized gateways with virtualized controllers and commoditized forwarding elements, which reduces cost while adding robustness and flexibility.
***Controller Placement for Improving Resilience of Software-Defined Networks
Abstract
Software-defined Network (SDN) is a new architectural framework for networking that decouples control plane from data plane and provides a programmatic interface to network control. SDN enables innovations of networking technologies for Internet and data centers today in order to ease the management of the highly dynamic and complex infrastructure, to support the fine-grained traffic engineering, and to better address the mobility of virtual machines, services, and end-points. However, SDN itself opens many unanswered questions regarding reliability, performance, and security. Controller placement is a critical problem in SDN that may affect many aspects of SDN. This work introduces the use of interdependence network analysis to study the controller placement for network resilience, designs a new resilience metric, and proposes a solution to improve resilience.
*Load-aware hand-offs in software defined wireless LANs
In Wireless Local Area Networks (WLANs), providing seamless mobility and balancing load among Access Points (APs) are challenging issues due to simple signal strength based association and hand-off mechanisms employed at wireless clients. Extensions to the Software Defined Networking (SDN) framework for wireless networks could help to address these issues in an efficient and cost-effective manner with a central view of the WLAN at the SDN controller. In this work, we propose a novel load-aware hand-off algorithm for SDN based WLAN systems which considers traffic load of APs in addition to received signal strength at wireless clients to solve load imbalance among APs and offer seamless mobility. We implemented the proposed algorithm on a small-scale prototype testbed and obtained improved network throughput for mobile clients as well as static clients compared to legacy hand-off algorithms used in WLANs.
*SDN based uniform network architecture for future wireless networks
Cellular networks and WLANs are the two popular wireless network structures in current wireless network systems. Cellular networks support seamless coverage and high-speed mobility, but their data bandwidth is limited. On the other hand, WLANs support high-bandwidth data transmission with a local coverage area and walking-speed mobility. A new wireless architecture combining the advantages of both cellular and WLAN has long been an objective. In this paper, we propose a uniform wireless network architecture (U-WN) trying to achieve this goal, based on the spirit of software defined networking. Specifically, based on our analysis of the characteristics of wireless networks, we propose special designs for the SDN architecture and investigate the detailed design of both the controller and the nodes in the network. Furthermore, the possible applications and potential directions of U-WN are also discussed, which sheds light on the opportunities and challenges brought by SDN to current wireless networks. We believe that the U-WN architecture opens new improvement space for wireless networks and our paper moves an important step towards future wireless networks.
***Toward Software-Defined Cellular Networks
Existing cellular networks suffer from inflexible and expensive equipment, complex control-plane protocols, and vendor-specific configuration interfaces. In this position paper, we argue that software defined networking (SDN) can simplify the design and management of cellular data networks, while enabling new services. However, supporting many subscribers, frequent mobility, fine-grained measurement and control, and real-time adaptation introduces new scalability challenges that future SDN architectures should address. As a first step, we propose extensions to controller platforms, switches, and base stations to enable controller applications to (i) express high-level policies based on subscriber attributes, rather than addresses and locations, (ii) apply real-time, fine-grained control through local agents on the switches, (iii) perform deep packet inspection and header compression on packets, and (iv) remotely manage shares of base-station resources.
Note:
- Idea of an architecture for virtualized BTSs using SDN. The central controller eases configuration by turning policies into flow rules and communicates with the BTSs to make sure changes are applied.
***Toward 5G: when explosive bursts meet soft cloud
Rapid growing demand for mobile data traffic challenges capacities and service provision in the next-generation (5G) cellular networks. Real measurement data from operating cellular networks indicates that the traffic models and scenarios disobey our traditional assumptions (i.e., expressing bursty nature). As a result, current network architectures and service management may cause experience deterioration of subscribers in future networks. In this article, we propose three approaches to alleviate the influence of various traffic bursts: baseband resource pool on a cloud platform as wireless infrastructure to enhance the capacity and flexibility of networks, cloud core networks to provide dynamic extension and service flow control abilities, and software-defined bearer networks to simplify service delivery instructed by core networks. Different from conventional stovepipe-like cloud computing network architectures, our proposed architecture interconnects and shares information between entities, breaking through horizontal device barriers and vertical layers. These cloud-based approaches not only avoid the potentially negative impact of bursts, but also provide a software-controlled end-to-end service management framework for future cellular networks. In addition, by taking advantage of open interfaces of cloud-based network elements, service control algorithms and network APIs could also be implemented to realize smart and soft 5G cellular networks
* SDWLAN: A flexible architecture of enterprise WLAN for client-unaware fast AP handoff
Currently, enterprise WLAN is facing rapid growth of user scale and traffic load, as well as constantly emerging new features. However, traditional enterprise WLAN architecture suffers from poor flexibility and the lack of coordination between wireless access points (APs) and wired backbone. Inspired by the emerging idea of Software-Defined Networking (SDN), we proposed SDWLAN, an alternative architecture for enterprise WLAN. The salient features of SDWLAN are twofold. First, most of 802.11 AP functions are decoupled from scattered devices and centralized in a controller, leaving some simplified devices (i.e., wireless access switches, or WASes) manipulated by the controller through extended OpenFlow protocol. Second, the control of APs and wired backbone are consolidated to provide a unified network control platform. By reorganizing 802.11 AP's functional modules, SDWLAN can achieve remarkable flexibility. Benefiting from the extended OpenFlow protocol and the unified control platform, we proposed a client-unaware fast AP handoff mechanism in SDWLAN. Simulation results demonstrated that AP handoff operation in SDWLAN leads to negligible throughput fluctuation of on-going connection compared to traditional architecture with 802.11 standard handoff mechanism. Furthermore, SDWLAN requires no modification to existing 802.11 clients, which make our solution practical.
*Distributed flow controller for mobile ad-hoc networks
A mobile ad-hoc network (MANET) is a self-configuring, self-organized, and infrastructureless network of mobile stations, connected by radio links. The challenge of ensuring the network's operation is to continuously maintain updated information about the network's topology, the characteristics of the radio links, and security at all levels. Due to the decentralized nature of the network, this is quite difficult. Distributed coordination and control of MANETs has always been a challenge for network designers. This paper presents a proposal for a distributed flow controller, a new approach to the network architecture for MANETs. In the current approaches, the entire network has to be treated as a whole, from the point of view of the management, control and data planes. Our proposed approach, based on the SDN (Software Defined Networking) paradigm, implies a clear separation between the data plane and the control plane. This separation leads to greater flexibility in controlling the network, to the possibility of dynamic reconfiguration, and to improved security.
iMoveFAN Papers
Headline
5: Read More - Compare my concept, their concept
Realization of service-aware elastic network
Concepts: What can be understood?
- Vertical stack, where problems occur:
- Datacenter (Cloud network), Core, RAN
- [SDN, elastic] Problem – Solution target
Table
| Problems | DC / Cloud net | Core | RAN | DOI / Ref. |
|---|---|---|---|---|
| Dynamic restoration mechanisms of the optical transport network | | Elastic optical networks (EONs). The proposed scheme contemporarily exploits centralized path computation and node configuration to avoid contentions during the recovery procedure, with the final aim of minimizing the recovery time. The performance of the proposed scheme is evaluated by means of simulations in terms of recovery time and restoration blocking probability and compared against three reference schemes based on GMPLS and SDN. | | Dynamic restoration with GMPLS and SDN control plane in elastic optical networks [Invited] http://dx.doi.org/10.1364/JOCN.7.00A174 |
| Spectrum sharing among Mobile Network Operators (framework) | | | SDN framework that enables efficient and elastic spectrum utilization among multiple operators in 3GPP LTE-A HetNet scenarios | An SDN-based framework for elastic resource sharing in integrated FDD/TDD LTE-A HetNets http://dx.doi.org/10.1109/CloudNet.2014.6968980 |
| Inter-DC connectivity | Carrier SDN: middleware helps deal with dynamic inter-DC connectivity during VM transfers. Current network middleware handles requests for VM migration between DCs; however, requests may be lost due to network contention at peak times. An SDN controller is used to handle reliable migration. | | | Towards a carrier SDN: An example for elastic inter-datacenter connectivity http://dx.doi.org/10.1049/cp.2013.1289 |
6: My Concepts and Research Problem
Elastic Network
Elasticity is a characteristic of the network whereby the connectivity between network components has highly dynamic attributes:
I) Time-variable resource requirements. This applies both to relatively predictable fluctuation (bandwidth demand) in data center, core, or backhaul (optical) networks, and to totally unpredictable demand fluctuation in the RAN.
II) Flexible network boundary. Network boundaries are not statically limited by the physical substrate in data centers or by the administration of a single network operator. Depending on service usage requirements, the network must be extendable to any geographical location, regardless of administrative domain and access technology.
III) Scalability. Scaling by expanding operations must be replaced by cost-effective approaches. Resource pooling and sharing between stakeholders is an effective approach enabled by virtualization and SDN technologies. Despite these technological advances, the management of resource slices and tenants, and the control platform itself, become bottlenecks to achieving scalability.
IV) Adaptability. Adaptability implies intelligence in the network management and architecture. It enables the network to self-manage and self-control in order to cope with changes in network operations.
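To make these four properties more tangible, here is a minimal Python sketch; all names, fields, and thresholds are my own illustration (not an existing API), covering time-variable demand (I), a multi-domain boundary (II), and a trivial self-management scaling rule (III/IV):
```python
# Minimal sketch (illustrative names only, not an existing API) of an elastic
# network slice: tracks demand samples and makes a trivial scale decision.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ElasticSlice:
    name: str
    capacity_mbps: float                               # currently allocated capacity
    domains: List[str] = field(default_factory=list)   # II) admin/technology domains spanned
    headroom: float = 0.2                               # keep 20% spare capacity

    def scale_decision(self, demand_mbps: float) -> str:
        """IV) trivial self-management rule: compare demand against capacity."""
        if demand_mbps > self.capacity_mbps * (1 - self.headroom):
            return "scale-up"
        if demand_mbps < self.capacity_mbps * 0.5:
            return "scale-down"
        return "hold"


if __name__ == "__main__":
    s = ElasticSlice(name="ran-berlin", capacity_mbps=1000.0,
                     domains=["operatorA-RAN", "operatorB-core"])
    for demand in (300.0, 850.0, 990.0):   # I) time-variable demand samples
        print(demand, "->", s.scale_decision(demand))
```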
Bandwidth
- Software defined optical network
- bandwidth management, optical NW, inter DC - SDN controller
- SDN-based framework enables efficient and elastic spectrum utilization among multiple operators in 3GPP LTE-A HetNet scenario.
- Inter-datacenter connectivity is requested in terms of volume of data and completion time (see the sketch after this list).
- The application-based network operations architecture is proposed as a carrier software-defined network solution for provisioning end-to-end optical transport services through a multidomain multitechnology network scenario, consisting of a 46–108 Gb/s variable-capacity OpenFlow-capable optical packet switching network and a programmable, flexi-grid elastic optical path network.
- A novel technique, called superfilters, which enables different lightpaths, with different source-destination pairs, to coexist within the same flat region of a single filter configuration.
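The inter-DC items above express a connectivity request as a data volume plus a completion deadline. Below is a minimal sketch of that translation into a bandwidth reservation, using hypothetical request fields of my own rather than the cited papers' actual interface:
```python
# Sketch (my own simplification, not the cited papers' interface): turn an
# inter-datacenter transfer request given as (volume, deadline) into the
# minimum bandwidth an SDN controller would have to reserve on the path.
from dataclasses import dataclass


@dataclass
class InterDCRequest:
    src_dc: str
    dst_dc: str
    volume_gb: float        # data to move, in gigabytes
    deadline_s: float       # allowed completion time, in seconds


def required_bandwidth_gbps(req: InterDCRequest) -> float:
    """Minimum sustained rate so the transfer finishes by the deadline."""
    bits = req.volume_gb * 8           # GB -> Gbit
    return bits / req.deadline_s       # Gbit/s


if __name__ == "__main__":
    req = InterDCRequest("dc-berlin", "dc-munich", volume_gb=500, deadline_s=600)
    print(f"reserve at least {required_bandwidth_gbps(req):.2f} Gb/s "
          f"between {req.src_dc} and {req.dst_dc}")
```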
Network Boundary
Multi-domain, heterogeneous technologies.
Scalability
- and scalable deployment of present and future utility applications with varying requirements on security and time criticality. In this work, we first show that a well-known standard solution (i.e., IEEE 802.1Q [1]), which is popularly employed for virtual networking in industry, is limited to support large-scale utility M2M applications. Next, with some utility application use cases, we demonstrate that using the SDN technology (i.e., OpenFlow [2]), we enable elastically adaptable virtual utility network slices per-application to securely, dynamically, and cost-efficiently meet the utility communication needs. Specifically, we design an SDN-based architectural solution for virtual utility networks that will support self-configurable, secure, and scalable deployment of utility applications that leverage many end devices. Using two SDN-enabled Ethernet switches [3] available in today's market, the feasibility of our idea is discussed. (See the sketch after this list.) http://ieeexplore.ieee.org/xpls/icp.jsp?arnumber=7007682
- self-control, self-management, and self-adaptation. Tenant network management isolation for scalability: http://ieeexplore.ieee.org/xpls/icp.jsp?arnumber=6903530
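A toy sketch of the per-application slicing idea from the utility M2M reference above (my own simplification, not that paper's design): each utility application is mapped to its own slice with a priority and an abstract flow match, instead of one static 802.1Q VLAN for everything.
```python
# Toy illustration (not the cited paper's design): map utility applications to
# per-application network slices with priorities and abstract flow-match fields
# that an SDN controller would install as rules.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class Slice:
    name: str
    priority: int                                  # higher = more time-critical
    match: Dict[str, str] = field(default_factory=dict)   # illustrative match fields


SLICES = {
    "metering":   Slice("metering",   priority=1, match={"udp_dst": "5684"}),
    "protection": Slice("protection", priority=7, match={"eth_type": "0x88b8"}),
}


def slice_for(application: str) -> Slice:
    """Pick the slice for a utility application; unknown apps get best effort."""
    return SLICES.get(application, Slice("best-effort", priority=0))


if __name__ == "__main__":
    for app in ("metering", "protection", "firmware-update"):
        s = slice_for(app)
        print(app, "->", s.name, "priority", s.priority, s.match)
```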
Adaptability
- placement algorithm to determine the allocation of managers and controllers in the SDN based, distributed management and control layer: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039203
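As a toy illustration of the placement problem above (this is only my own greedy sketch, not the algorithm of the cited paper), controllers can be placed one by one at the node that most reduces the average switch-to-controller latency:
```python
# Toy greedy controller-placement heuristic (my own illustration, not the cited
# paper's algorithm): repeatedly place a controller at the node that most
# reduces the average switch-to-controller latency.
from typing import Dict, List

Latency = Dict[str, Dict[str, float]]   # latency[a][b] in ms


def avg_latency(switches: List[str], controllers: List[str], lat: Latency) -> float:
    return sum(min(lat[s][c] for c in controllers) for s in switches) / len(switches)


def greedy_placement(switches: List[str], k: int, lat: Latency) -> List[str]:
    placed: List[str] = []
    while len(placed) < k:
        best = min((c for c in switches if c not in placed),
                   key=lambda c: avg_latency(switches, placed + [c], lat))
        placed.append(best)
    return placed


if __name__ == "__main__":
    nodes = ["a", "b", "c", "d"]
    # Simple line topology, 5 ms per hop, used only as demo input.
    lat = {s: {d: abs(nodes.index(s) - nodes.index(d)) * 5.0 for d in nodes}
           for s in nodes}
    print(greedy_placement(nodes, k=2, lat=lat))
```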
Service-aware
Resource pooling enabled by virtualization and SDN makes efficient use of network resources. However, the network also becomes less agile and less tolerant of dynamics in the service layer. These dynamics are the result of unprecedented changes in service usage: i) user mobility, ii) quality of experience… Fortunately, the separation of control and data planes enables the network to be programmable and controlled by centralized, intelligent algorithms. By incorporating service intelligence in network control, the network becomes a predictable part of the service application. To achieve this level of cooperation, challenges remain in integrating network control and service orchestration, i.e. enabling service frameworks to negotiate with network representatives in order to create service-aware networks.
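As an illustration of such a negotiation, here is a minimal sketch of a service "intent" that an orchestrator could hand to an SDN controller's northbound interface; the payload fields and the endpoint mentioned in the comments are hypothetical, not any concrete controller's API:
```python
# Minimal sketch of a service-aware request from a service orchestrator to an
# SDN controller's northbound interface. Field names and the endpoint URL are
# hypothetical illustrations, not a real controller API.
import json
from dataclasses import asdict, dataclass


@dataclass
class ServiceIntent:
    service_id: str
    src: str                     # e.g. a VM or service endpoint
    dst: str
    min_bandwidth_mbps: float
    max_latency_ms: float
    mobility: bool               # service/user may move; network should re-route


def to_northbound_payload(intent: ServiceIntent) -> str:
    """Serialize the intent; a real deployment would POST this to the
    controller's (hypothetical) /service-intents endpoint."""
    return json.dumps(asdict(intent), indent=2)


if __name__ == "__main__":
    intent = ServiceIntent("video-42", "dc1/vm7", "ran/cell-311",
                           min_bandwidth_mbps=50, max_latency_ms=30, mobility=True)
    print(to_northbound_payload(intent))
```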
[sdn service-aware]
- An open problem in OpenFlow, and more generally on SDN, is how to integrate network control with services orchestration, i.e. to enable service frameworks to negotiate with network representatives in order to create service-aware networks: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7014201
- The OpenFlow specifications are targeted at Layer 2 and Layer 3 functionality. The latest networking shift is to enable the switch with L4-L7 services like load balancers, proxies, firewalls, IPSec, etc. This would make the middle boxes redundant in networking deployments. In this work, we propose a methodology to extend the most commonly used Open vSwitch to an L4-L7 service-aware OpenFlow switch: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779346
- service awareness in future network design: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6476874
Realization
Testbed Industry Application
- Management and orchestration: agent based
- Cloud & NFV: core, radio?, management function. Carrier-grade network reliability, throughput, delay
- Resource allocation: scale up, down, big data
- Cloud optimized service: energy efficiency, self-management, modular, agile
- management
- on-demand, pay per use
- bring service closer to user: object storage.
- Service component aggregation
Usecase 1:
Background?
Story?
Contributions and Innovations
Implementation
Tools: Which technologies?
SDN, NFV, CC, Bigdata, Resource allocation algorithm
Controller
Adaptive, intelligent management
Inter-domain Protocols
Discussion
- Flexible combination of all network components (DCs, Core, RAN) to realize a seamless experience for mobile network users.
- Application and Network access-aaS
- 4G?
- Not scalable to a large number of users. Use case for offloading
- M2M, short, bursty mobile network usage
- CDN to RAN is not efficient. Distributed content delivery requires more granular service packaging and orchestration.
6-b: How well current projects solve the problem domain
Project MCN
WP2: Mobile Cloud Services
- D2.1: Reference Scenarios and Technical System Requirements Definition
- D2.2: Overall Architecture Definition, Release 1
- D2.3: Market Analysis and Impact of Mobile Cloud Concepts
WP3: Infrastructure Management Foundations
- D3.1: Specifications & Design for Mobile Cloud framework
- D3.2: Infrastructure Management Foundations – Components First Release
WP4: Mobile Network Cloud Concept
- D4.1: Mobile Network Cloud Component Design
- D4.2: First Mobile Network Cloud Software Components
WP5: Mobile Platform Architecture
- D5.1: Design of Mobile Platform Architecture and Services
- D5.2: Implementation of IMSaaS, DSN and Mobile Platform
WP6: Testbeds
D6.1: Initial Report on Integration and Evaluation Plans
WP7: Dissemination
D7.2.1 Dissemination and Standardisation Report
D2.1: Reference Scenarios and Technical System Requirements Definition
Specification of requirements organised in four technical domains: Cloud Data Centre Infrastructure and Network Programmability, Access Network Infrastructure Cloud, Mobile Core Network Cloud, IMS/OSS/BSS/VAS as a Service
Section 3:
In order to lay systematic foundations for the Requirements Engineering (RE) process in the MCN project we have decided to refer to Volere methodology (Robertson & Robertson, 2012).
In Section 3.4 we explain how we kicked off the process by defining first draft scenarios, from which we derived consolidated scenarios and the relevant stakeholders; these then formed the basis for the definition of an initial set of business use cases. Finally, in Section 3.5 we provide an outlook on how we plan to proceed with these results in a more business-oriented analysis.
Table 3-1 Consolidated Scenarios:
- RAN on Demand
- Mobile Virtual Resources on Demand
- Machine Type Communication on Demand
- Software-Defined Networking
- Energy saving & fast network reconfiguration
- Scaling the capacity of a virtualized Evolved Packet Core
- Follow-Me cloud & Smart content location
- Digital Signage
- Operational Management & Charging as a Service
- End-to-End Cloud
Also, based on this set of scenarios, a set of Technical Domains (TDs) was defined, which was later the basis to derive and organize requirements. The list of TDs is as follows:
- A. Cloud Data Centre Infrastructure and Network Programmability
- B. Access Network Infrastructure Cloud
- C. Mobile Core Network Cloud
- D. IMS/OSS/BSS/VAS as a Service.
Stakeholders
End Users (EU): Individual End Users (IEU), Enterprise End Users (EEU)
Mobile Cloud Network Service Provider (MCNSP) manages the End User (either IEU or EEU) subscription and appears as provider of the MCN services. In addition, the MCNSP integrates the service components required to build Mobile Cloud Network services to the EU. In summary, the MCNSP includes two basic functions:
- Integration of the different components required to build the service (radio access network, mobile core, support systems, etc.). These components are provided by stakeholders such as the Radio Access Network Provider (RANP) and the Mobile Core Network Provider (MCNP), who could be separate players or belong to the same business entity.
Other Stakeholders:
Radio Access Network Provider (RANP) is a stakeholder who represents the entity that provides Radio Access Network (RAN) services to the MCNSP.
Mobile Core Network Provider (MCNP) represents a stakeholder who offers Mobile Core (EPC) services.
Support Systems Provider (SSP) stands for a stakeholder who offers Operational/Business Support Systems (OSS/BSS) that support the technical as well as the business operations of the MCNSP.
Application Services Provider (ASP) describes a stakeholder that offers Application Services (VAS, SaaS, etc.) either directly to the EU or to the MCNSP, which in turn provides them to the EU. The business relationship between the ASP and the MCNSP depends on the particular type of application service and business model; either the ASP has a contract with the EU or it is the MCNSP who has this contract. In any case, its services are made available through the "as a Service" model. ASP applications run on top of infrastructural resources (IaaS/PaaS) provided by a Cloud Service Provider (CSP). Also here, any links required to interconnect the Application Services with the Mobile Core (EPC) or the MCNSP premises can be provided by a Network Connectivity Provider (NCP).
Cloud Service Provider (CSP) is a stakeholder that we already find as a successful player in today's markets (Buyya, et al., 2009). CSPs provide virtual infrastructural services based on cloud technologies (e.g., hypervisors, OpenStack, etc.) to functional block providers such as RANPs, MCNPs, SSPs and ASPs. Data Centre Infrastructure Provider (DIP) is a stakeholder that provides Data Centre infrastructure (physical/hardware/network) to the CSPs.
Mobile Virtual Network Operator (MVNO) provides mobile communication services without owning a mobile network infrastructure.
It is to be remarked that the relationships between the different stakeholders as depicted in Figure 4-1 are obviously not one-to-one but can actually be multiple, increasing the flexibility in the business network. As a consequence, the additional stakeholder role of a Service Broker might come into play; however, this is subject to further investigation.
5 Scenarios
5.1 Cloud-Enabled MVNO
Light MVNO to Full MVNO
5.1.4 MCN Contribution and Innovations
The main contribution from the MCN project will be to enable the Cloud Computing principles (i.e. on-demand, elasticity, pay-as-you-go) for all technology domains and for all MVNO models (Light, Hybrid, Full MVNO). From a technology perspective this means:
- Cloud-enabled Radio Access Network (RAN) (in MCN referred to as Wireless Cloud);
- Cloud-enabled Mobile Core, i.e. Elastic EPC/EPCaaS;
- Cloud-enabled IMS (IMSaaS);
- Cloud-enabled management in the Operation Support System (OSS);
- Cloud-enabled Rating, Charging, Billing and SLA management in the Business Support System (BSS).
5.2 Cloud-optimized MNO operations
Background: Virtual EPC
User Story: The number of customers using it goes far beyond the initial expectations of Best Mobile, causing very high peaks of traffic during the most interesting sport events. Such peaks of traffic are promptly detected by the mobile operator's O&M systems and, since the EPC is implemented "as a Service" on top of a cloud infrastructure, Best Mobile's cloud orchestration platform automatically reacts to the unexpected traffic growth by instantiating additional PDN Gateways. Moreover, since the O&M systems detect that most of the traffic is originated by customers located inside, or around, the Olympic Stadium, the additional PDN Gateways are instantiated in a Data Centre near that area, in order to optimize traffic routing. This network reconfiguration is completely transparent to the customers, who continue to enjoy the service with no interruptions and have no idea of what happened behind the scenes.
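A hedged sketch of the orchestration logic this story implies (monitor per-area traffic, scale out a PDN Gateway in the nearest data centre); the names, thresholds, coordinates, and the "spawn" step are invented placeholders, not a real orchestrator API:
```python
# Sketch of the scale-out logic implied by the user story: when per-area
# traffic crosses a threshold, add a PDN Gateway in the closest data centre.
# All names, thresholds and the spawn action are illustrative placeholders.
from typing import Dict, Tuple

PGW_CAPACITY_GBPS = 10.0
DC_LOCATIONS = {"dc-north": (52.55, 13.35), "dc-stadium": (52.51, 13.24)}


def nearest_dc(area_pos: Tuple[float, float]) -> str:
    return min(DC_LOCATIONS, key=lambda dc: (DC_LOCATIONS[dc][0] - area_pos[0]) ** 2
                                            + (DC_LOCATIONS[dc][1] - area_pos[1]) ** 2)


def scale_out_if_needed(area: str, area_pos: Tuple[float, float],
                        traffic_gbps: float, pgws_per_area: Dict[str, int]) -> None:
    deployed = pgws_per_area.get(area, 1)
    if traffic_gbps > deployed * PGW_CAPACITY_GBPS * 0.8:       # 80% load trigger
        dc = nearest_dc(area_pos)
        pgws_per_area[area] = deployed + 1
        print(f"[orchestrator] spawning extra PDN-GW for {area} in {dc}")


if __name__ == "__main__":
    pgws = {"olympic-stadium": 1}
    for load in (5.0, 9.5, 14.0):            # traffic samples during the event
        scale_out_if_needed("olympic-stadium", (52.514, 13.239), load, pgws)
```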
5.2.4 MCN Contribution and Innovations
It is anticipated that, once enabled, the "cloudification" of the EPC will be exploited by the mobile operator in a variety of ways, such as those briefly described in the following:
- SDN, orchestration
- Datacenter
5.3 Machine-to-Machine / Machine-type Communication Mobile Cloud
General Background
Recently, we can observe a tremendous interest among Utility Providers (e.g., gas, electricity, water utility providers) to reduce their operational cost by means of Machine Type Communications (MTC) (or Machine to Machine (M2M)) devices. Indeed, rather than contracting a firm to manually collect meter readings or have the consumers provide meter information through mail or phone, Utility Providers may use MTC devices and have them automatically provide the current meter readings through a mobile network infrastructure. MTC devices can be smart sensors, actuators, or smart meters. Each MTC device is normally equipped with a SIM card that enables it to connect to a mobile operator network.
User Story: Depending on the MTC application/service, the Utility Provider may need to deploy a potentially high number of MTC devices (e.g. millions of devices). With the current pricing model, such a Utility Provider has to pay a significant amount of Euros per month, given the fact that such MTC devices connect to the mobile network only at low frequency (e.g., once a month) and, most importantly, for a very short time (i.e., only for several seconds at the end of each month for sending measurements about the gas/electricity/water consumption).
The Utility Provider wants to reduce these MTC-related costs, including the bill charged to its consumers, and therefore decides to partner with the newly established Mobile Cloud Network Service Provider (MCNSP). The prices can be reduced by requesting that the MCNSP deploy a short-term mobile network configuration whose lifetime covers only the time needed to collect measurements from the deployed MTC devices and send them to the appropriate MTC servers.
Contribution:
- The MTC mobile network (on the cloud) will be created dynamically and on demand.
- Dynamic adaptation of the topology and architecture of the MTC mobile network (RAN and EPC).
- The collection and dissemination of the MTC measurements will be balanced over time. Load balancing is necessary since usually all MTC devices belonging to multiple Utility Providers report in bursts (see the sketch after this list).
- The SLA and charging models used between the Utility Provider and the MCNSP will be optimized in order to take into account conditions and overall cost thresholds.
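A minimal sketch of the time-balancing idea in the third bullet, assuming a hypothetical scheme where each MTC device derives a stable pseudo-random reporting slot from its identifier so that bursts are spread across the collection window:
```python
# Sketch of load-balancing bursty MTC reporting: each smart meter derives a
# deterministic pseudo-random offset within the collection window, so reports
# are spread over time without per-device state on the network side.
# The window length and device ids are illustrative assumptions.
import hashlib

COLLECTION_WINDOW_S = 6 * 3600   # e.g. a six-hour window at the end of the month


def reporting_offset(device_id: str, window_s: int = COLLECTION_WINDOW_S) -> int:
    """Seconds into the window at which this device should report."""
    digest = hashlib.sha256(device_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % window_s


if __name__ == "__main__":
    for dev in ("meter-0001", "meter-0002", "meter-0003"):
        print(dev, "reports at +", reporting_offset(dev), "s into the window")
```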
5.4 MCN-enabled Digital Signage
5.5 End-to-End Mobile Cloud
Other Project
Topic: Data offloading biz
/home/dang/data/mydirectory/mywork/dailabor/99_horizon_x
mindmap:
/home/dang/data/mydirectory/mywork/dailabor/99_horizon_x/horizon_x thesis.mm
- Biz models: game theory, other markets. Is it economically realizable?
- Contribution:
- service framework for emerging markets, also offloading market.
- todo: Survey of available biz models?
- Technology: Can this achieve 5G performance? 10x, 100x, 1000x
- Contribution:
- todo: IMA offloading network?
- architecture, radio technologies, network management
- todo: SDN & CCN: dynamic experimental network for CCN testing?
- CCN
- Ideas:
- CCNx, tunneling, packet routing, mobile
Motivation
- reason to use wifi small cells: http://www.networkworld.com/news/tech/2012/051512-wifi-small-cells-259293.html
Research: Biz model for offloading
Increase of total network utilization in urban settings with WiFi hotspots
- Traditional cell expansion methods increase CAPEX and OPEX
- general availability of hotspots
- requires a framework for coordination
User coordination for optimal network resource utilization
Use case: Users actively schedule their use of cellular or WiFi to reduce the load on one of those cells, freeing resources for others. This can be made possible using market-driven resource allocation and intelligent network management. For example, in a cafe use WiFi instead of cellular, and on the road use cellular for high-mobility scenarios to avoid the high latency caused by microcell handovers. A toy selection rule is sketched below.
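A toy sketch of such a selection rule (my own illustration of a market-driven choice under assumed prices and a mobility threshold, not a validated model):
```python
# Toy access-selection rule for the use case above: pick WiFi when mostly
# stationary and WiFi is cheaper on the local hotspot "market", otherwise stay
# on cellular to avoid frequent small-cell handovers. Thresholds and prices
# are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Context:
    speed_kmh: float          # user mobility
    wifi_price: float         # price per GB on the local hotspot market
    cellular_price: float     # price per GB on cellular


def choose_access(ctx: Context) -> str:
    if ctx.speed_kmh > 10:                       # on the road: handovers too costly
        return "cellular"
    return "wifi" if ctx.wifi_price < ctx.cellular_price else "cellular"


if __name__ == "__main__":
    print(choose_access(Context(speed_kmh=2, wifi_price=0.5, cellular_price=2.0)))   # cafe
    print(choose_access(Context(speed_kmh=50, wifi_price=0.5, cellular_price=2.0)))  # driving
```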
Research: toward 5G
Reading
- tools.ietf.org/html/draft-kutscher-icnrg-challenges-00
- SoftRAN: software defined radio access network
Topic: Mobile Cloud
- Inter-cloud platform: http://www.cloudcomputing-news.net/news/2014/jul/31/calling-common-cloud-architecture/