Table of Contents

ONAP Projects

1. Headline

1.1 cassandra

1.2 mariadb-galera

1.3 aaf

Proposed name for the project: Application Authorization Framework

Proposed name for the repository: AAF

Project description: The goal of the project is to provide consistent authentication, authorization and security to various ONAP components. AAF organizes software authorizations so that applications, tools and services can match the access needed to perform job functions. AAF is designed to cover Fine-Grained Authorization, meaning that the Authorizations provided are able to use an Application's detailed authorizations, such as whether a user may be on a particular page, or has access to a particular Pub-Sub topic controlled within the App. This is a critical function for Cloud environments, as Services need to be able to be installed and running in a very short time, and should not be encumbered with local configurations of Users, Permissions and Passwords. The sister framework CADI (Code Access Data Identity) allows Java Applications to utilize Identity Authentication methods as plugins. Certificate Manager delivers X.509 certificates in support of two-way X.509 TLS.
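To make the fine-grained authorization idea concrete, here is a minimal conceptual sketch that models permissions as (type, instance, action) triples which an application can check at runtime. It is an illustration of the model only; the class and helper names are hypothetical and are not AAF's CADI API.

    # Conceptual sketch: fine-grained permissions as (type, instance, action)
    # triples, with "*" as a wildcard. Hypothetical names, not AAF's CADI API.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Permission:
        perm_type: str   # e.g. "org.onap.myapp.pubsub"
        instance: str    # e.g. a specific topic, or "*" for all instances
        action: str      # e.g. "publish", "subscribe", or "*"

    def is_granted(granted: set, requested: Permission) -> bool:
        """Return True if any granted permission covers the requested one."""
        for g in granted:
            if (g.perm_type == requested.perm_type
                    and g.instance in ("*", requested.instance)
                    and g.action in ("*", requested.action)):
                return True
        return False

    # A caller holding publish rights on one topic may publish, but not subscribe.
    caller = {Permission("org.onap.myapp.pubsub", "alarms", "publish")}
    print(is_granted(caller, Permission("org.onap.myapp.pubsub", "alarms", "publish")))    # True
    print(is_granted(caller, Permission("org.onap.myapp.pubsub", "alarms", "subscribe")))  # False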

1.4 aai

Active and Available Inventory Project

Active and Available Inventory (AAI) is the ONAP subsystem that provides real-time views of Resources and Services and their relationships. AAI not only forms a registry of active, available, and assigned assets, it also maintains up-to-date views of the multidimensional relationships among these assets, including their relevance to different components of ONAP.

This project targets a logically centralized reference point for service and resource details serving other ONAP components and non-ONAP systems to enable fulfillment, closed loop, reporting, and other operational use cases. A&AI is critical to ONAP as the existing sources of truth do not provide a cross domain view and are not designed to serve this information to multiple clients.
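As an illustration of how a client might read inventory, the hedged sketch below queries a VNF by ID over AAI's REST API; the host, API version (v11), headers and demo credentials are assumptions typical of lab deployments and should be verified against your installation.

    # Hedged sketch: look up a generic-vnf in AAI by ID. Host, API version,
    # credentials and TLS settings are assumptions; adjust for your deployment.
    import requests

    AAI_URL = "https://aai.onap.example:8443"      # hypothetical host
    HEADERS = {
        "X-FromAppId": "my-client",                # identifies the calling component
        "X-TransactionId": "demo-0001",            # transaction id used for tracing
        "Accept": "application/json",
    }

    def get_generic_vnf(vnf_id: str) -> dict:
        resp = requests.get(
            f"{AAI_URL}/aai/v11/network/generic-vnfs/generic-vnf/{vnf_id}",
            headers=HEADERS,
            auth=("AAI", "AAI"),    # demo credentials; replace in practice
            verify=False,           # labs often use self-signed certificates
        )
        resp.raise_for_status()
        return resp.json()          # attributes plus the relationship-list

The relationship-list in the response is what lets clients walk the multidimensional relationships described above.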

1.5 appc

The Application Controller (APPC) performs functions to manage the lifecycle of VNFs and their components, providing model-driven configuration, abstracting cloud/VNF interfaces for repeatable actions, using vendor-agnostic mechanisms (NETCONF, Chef via Chef Server, and Ansible), and enabling automation.

APPC is a model- and policy-driven application controller with intrinsic VNF management capabilities. It supports multi-vendor systems of VNFs with interdependencies between them, and provides the ability to upload a standard data model that describes the management, configuration and inter-dependencies of the VNF. The APPC model will be based on ONAP TOSCA and YANG, containing a dependency model, LCM recipes, configuration templates, policies, etc. APPC provides multi-protocol southbound plugins, including support for NETCONF, Chef via a Chef Server, and Ansible, and the ability to operate through vendor-specific VNFM/EMS via adaptation through a plugin. APPC provides a VNF configuration repository with the latest working configuration for each managed VNF instance it is responsible for.
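For illustration, a hedged sketch of triggering a life-cycle action: a client posts a request carrying a common header, the action name, and action identifiers. The RESTCONF path, credentials and exact field names below follow the general shape of APPC's LCM API but are assumptions to verify against your release.

    # Hedged sketch: ask an APPC-style LCM API to Restart a VNF.
    # Path, credentials and field names are assumptions, not a confirmed contract.
    import datetime
    import uuid
    import requests

    APPC_URL = "https://appc.onap.example:8443"    # hypothetical host

    def restart_vnf(vnf_id: str) -> dict:
        body = {
            "input": {
                "common-header": {
                    "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
                    "api-ver": "2.00",
                    "originator-id": "demo-client",
                    "request-id": str(uuid.uuid4()),   # propagated for tracing
                    "sub-request-id": "1",
                    "flags": {},
                },
                "action": "Restart",
                "action-identifiers": {"vnf-id": vnf_id},
            }
        }
        resp = requests.post(
            f"{APPC_URL}/restconf/operations/appc-provider-lcm:restart",
            json=body,
            auth=("admin", "admin"),   # placeholder credentials
            verify=False,
        )
        resp.raise_for_status()
        return resp.json()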

1.6 clamp

CLAMP is a platform for designing and managing control loops. It is used to design a closed loop, configure it with specific parameters for a particular network service, and then deploy and undeploy it. Once deployed, the user can also update the loop with new parameters during runtime, as well as suspend and restart it.

It interacts with other systems to deploy and execute the closed loop. For example, it pushes the control loop design to the SDC catalog, associating it with the VF resource. It requests from DCAE the instantiation of microservices to manage the closed loop flow. Further, it creates and updates multiple policies in the Policy Engine that define the closed loop flow.
The OpenCLAMP platform abstracts the details of these systems under the concept of a control loop model. The design of a control loop and its management is represented by a workflow in which all relevant system interactions take place. This is essential for a self-service model of creating and managing control loops, where no low-level user interaction with other components is required.

At a higher level, CLAMP is about supporting and managing the broad operational life cycle of VNFs/VMs and ultimately of ONAP components themselves. It will offer the ability to design, test, deploy and update control loop automation - both closed and open. Automating these functions would represent significant savings on operational costs compared to traditional methods.

1.7 cli

Both telco and enterprise customers often prefer commands over a GUI in situations such as automation, CI, etc. The ONAP CLI provides the required Command-Line Interface (CLI) commands to operate ONAP functionality from a Linux operating system shell.

consul

Health check

1.8 contrib

1.9 dcaegen2

Data Collection Analytics and Events Project

Proposed name for the top level repository: dcaegen2. Note: For the 4Q17 release, one of the major goals for DCAE is to evolve from the “old” controller that is currently in gerrit.onap.org to a new controller that follows the Common Controller Framework. The switch will maintain external compatibility (towards other ONAP components), but will NOT be backwards compatible internally. That is, subcomponents built for the old controller will not work with the new controller, or vice versa. We are proposing to use a new top-level naming, “dcaegen2”, for repos of subcomponents that are compatible with the new controller, because it appears to be the cleanest approach to avoid confusion. The existing “dcae” top level still hosts repos of subcomponent projects that are compatible with the old controller. Eventually we will phase out the “dcae” tree.

Project description: DCAE is the umbrella name for a number of components collectively fulfilling the role of Data Collection, Analytics, and Events generation for ONAP. The architecture of DCAE targets flexible, pluggable, microservice-oriented, model-based component deployment and service composition. DCAE also supports multi-site collection and analytics operations, which are essential for large ONAP deployments.

DCAE components generally fall into two categories: DCAE Platform Components and DCAE Services Components. DCAE Platform consists of components that are needed for any deployments. They form the foundation for deploying and managing DCAE Service components, which are deployed on-demand based on specific collection and analytics needs.

1.10 dmaap

DMaaP – Data Movement as a Platform

DMaaP is a premier platform for high performing and cost effective data movement services that transports and processes data from any source to any target with the format, quality, security, and concurrency required to serve the business and customer needs.

DMaaP consists of three major functional areas:

  1. Data Filtering - the data preprocessing at the edge via data analytics and compression, to reduce the volume of data that needs to be processed.
  2. Data Transport - the transport of data within and between data centers, supporting both file-based and message-based data movement. The Data Transport process needs to provide the ability to move data from any system to any system with minimal latency and guaranteed delivery, as a highly available solution that supports a self-subscription model, lowering initial cost and improving time to market (a publish/consume sketch follows this list).
  3. Data Processing - the low-latency and high-throughput data transformation, aggregation, and analysis. The processing will be elastically scalable and fault-tolerant across data centers. Data processing needs to provide the ability to process both batch and near-real-time data.
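The sketch below shows the common publish/consume pattern against the DMaaP Message Router REST API; the host, port and topic are placeholders, and the /events/{topic} endpoints reflect the commonly documented Message Router interface, which should be verified for your release.

    # Hedged sketch: message-based data movement through DMaaP Message Router.
    # Host, port and topic are placeholders; verify the endpoints for your release.
    import requests

    MR_URL = "http://message-router.onap.example:3904"
    TOPIC = "unauthenticated.DEMO_TOPIC"

    def publish(event: dict) -> None:
        resp = requests.post(f"{MR_URL}/events/{TOPIC}", json=event)
        resp.raise_for_status()

    def consume(group: str = "demo-group", consumer: str = "c1", timeout_ms: int = 15000) -> list:
        # Long-poll for new messages as one member of a consumer group.
        resp = requests.get(
            f"{MR_URL}/events/{TOPIC}/{group}/{consumer}",
            params={"timeout": timeout_ms},
        )
        resp.raise_for_status()
        return resp.json()    # a list of message payloads

    publish({"eventName": "demo", "severity": "NORMAL"})
    print(consume())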

1.11 external api

External API Framework (5/15/17)

Project Name:

Proposed name for the project: External API Framework

Proposed name for the repository: externalapi

Project description:

This project will describe and define the APIs between ONAP and External Systems, including ONAP interfaces targeted on BSS/OSS, peering, B2B, etc. Proposed initial focus may be on the Common APIs between ONAP and BSS/OSS. Common APIs between ONAP and BSS/OSS allow Service Providers to utilize the capabilities of ONAP using their existing BSS/OSS environment with minimal customization.

1.12 esr

Proposed name for the project: External System Register

Proposed name for the repository: esr

Project description:

ONAP components need to talk with external systems such as VIM/VNFM/vendor SDNC/EMS to orchestrate a network service; for example, VF-C needs to talk with a VNFM to deploy a VNF. They should therefore get the information about available external systems from a registry before calling the interfaces of these external systems. ESR provides a service for centralized management of the information (name, vendor, version, access end point, etc.) of external systems, so that ONAP components can get the system information through a unified API from a logical single point.

Note: This project is proposed to be a sub-project of A&AI.
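To illustrate the register-once, discover-everywhere idea, the sketch below registers a VNFM entry carrying the kind of information listed above (name, vendor, version, access end point); the endpoint path and field names are hypothetical, not ESR's confirmed API.

    # Hedged sketch: registering an external system (a VNFM) so that ONAP
    # components can later discover it through a unified API. The URL path and
    # payload fields are hypothetical and only mirror the description above.
    import requests

    ESR_URL = "http://esr.onap.example:9518"       # hypothetical host/port

    vnfm_entry = {
        "name": "vendor-x-vnfm",
        "type": "vnfm",
        "vendor": "vendor-x",
        "version": "v1.0",
        "url": "https://vnfm.vendor-x.example:8443",   # access end point
        "userName": "admin",
        "password": "secret",
    }

    resp = requests.post(f"{ESR_URL}/api/esr/v1/external-systems", json=vnfm_entry)
    resp.raise_for_status()
    print("registered:", resp.json())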

1.13 Holmes

Project Name:

Proposed name for the project: Holmes

Proposed name for the repository: holmes

Project description:

The Holmes project provides alarm correlation and analysis for Telecom cloud infrastructure and services, including hosts, VIMs, VNFs and NSs. Holmes aims to identify the root cause of service failures or degradations by digging into the ocean of events collected from different levels of the Telecom cloud. Holmes provides a Docker-based fault correlation and analysis system with APIs that can be called by external systems. DCAE supports deploying Holmes as an analytics application in the form of Docker containers. Deployment options are flexible and decided by the user and use cases: Holmes can either be deployed as a standalone alarm correlation application or be integrated into DCAE as an analytics application.
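In production Holmes expresses correlation logic as rules, but the conceptual sketch below shows the core idea: keep only probable root-cause alarms and suppress alarms that are known consequences of them. The caused-by table is an invented example, not a real Holmes rule.

    # Conceptual sketch of alarm correlation: drop alarms that are explained by
    # another alarm on the same resource. The CAUSED_BY rule is invented.
    from dataclasses import dataclass

    @dataclass
    class Alarm:
        name: str
        resource: str    # the host, VIM, VNF or NS the alarm refers to

    # Example knowledge: a host-down alarm explains VNF-unreachable alarms there.
    CAUSED_BY = {"VNF_UNREACHABLE": "HOST_DOWN"}

    def correlate(alarms: list) -> list:
        """Return only root-cause alarms, dropping those explained by another."""
        present = {(a.name, a.resource) for a in alarms}
        roots = []
        for a in alarms:
            parent = CAUSED_BY.get(a.name)
            if parent and (parent, a.resource) in present:
                continue    # derived alarm: its root cause is already reported
            roots.append(a)
        return roots

    alarms = [Alarm("HOST_DOWN", "compute-1"), Alarm("VNF_UNREACHABLE", "compute-1")]
    print([a.name for a in correlate(alarms)])    # ['HOST_DOWN']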

1.14 log

Logging

ONAP consists of many components and containers, and consequently writes to many logfiles. The volume of logger output may be enormous, especially when debugging. Large, disparate logfiles are difficult to monitor and analyze, and tracing requests across many files, file systems and containers is untenable without tooling.

The problem of decentralized logger output is addressed by analytics pipelines such as Elastic Stack (ELK). Elastic Stack consumes logs, indexes their contents in Elasticsearch, and makes them accessible, queryable and navigable via a sophisticated UI, Kibana Discover. This elevates the importance of standardization and machine-readability. Logfiles can remain browsable, but output can be simplified.

Logger configurations in ONAP are diverse and idiosyncratic. Addressing these issues will prevent costs from being externalized to consumers such as analytics. It also affords the opportunity to remedy any issues with the handling and propagation of contextual information such as transaction identifiers (presently passed as X-ONAP-RequestID). This propagation is critical to tracing requests as they traverse ONAP and related systems, and is the basis for many analytics functions.
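As a minimal sketch of what machine-readable output with a propagated transaction identifier can look like, the example below injects the X-ONAP-RequestID value into every log record via a logging filter; the field layout is illustrative and not ONAP's mandated log format.

    # Minimal sketch: carry the X-ONAP-RequestID value into every log line so an
    # analytics pipeline such as Elastic Stack can trace a request end to end.
    # The output layout is illustrative, not ONAP's mandated format.
    import logging

    class RequestIdFilter(logging.Filter):
        """Attach the current request id to each log record."""
        def __init__(self, request_id: str):
            super().__init__()
            self.request_id = request_id

        def filter(self, record: logging.LogRecord) -> bool:
            record.request_id = self.request_id
            return True

    def make_logger(request_id: str) -> logging.Logger:
        logger = logging.getLogger("onap.component")
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            "%(asctime)s|%(name)s|%(levelname)s|RequestID=%(request_id)s|%(message)s"))
        logger.addHandler(handler)
        logger.addFilter(RequestIdFilter(request_id))
        logger.setLevel(logging.INFO)
        return logger

    # The request id would normally be read from the incoming X-ONAP-RequestID header.
    log = make_logger("3f9a2b7c-demo")
    log.info("processing service instantiation request")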

Rationalized logger configuration and output also paves the way for other high-performance logger transports, including publishing directly to analytics via SYSLOG (RFC3164, RFC5425, RFC5426) and streams, and mechanisms for durability.

Each change is believed to be individually beneficial.

The intention is to consolidate the required setup within this project; however, some changes and bug fixes might have to be applied in the relevant component project, requiring each component's cooperation on the contribution. There is an economy of scale if everything can happen under a single remit. Standardization benefits all, including those who want to deviate from defaults.

1.15 music

MUSIC - Multi-site State Coordination Service

Project Name: MUSIC - Multi-site State Coordination Service

Project description:

To achieve 5 9s of availability on 3 9s or lower software and infrastructure in a cost-effective manner, ONAP components need to work in a reliable, active-active manner across multiple sites (platform-maturity resiliency level 3). A fundamental aspect of this is state management across geo-distributed sites in a reliable, scalable, highly available and efficient manner. This is an important and challenging problem because of three fundamental reasons:

Current solutions for state management of ONAP components, such as MariaDB clustering, work very effectively within a site but may not scale across geo-distributed sites (e.g., Beijing, Amsterdam and Irvine) or allow partitioned operation (thereby compromising availability). This is mainly because WAN latencies are much higher across sites and frequent network partitions can occur.

ONAP components often have a diverse range of requirements in terms of state replication. While some components need to synchronously manage state across replicas, others may tolerate asynchronous replication. This diversity needs to be leveraged to provide better performance and higher availability across sites.

ONAP components often need to partition state across different replicas, perform consistent operations on them and ensure that on failover, the new owner has access to the latest state. The distributed protocols to achieve such consistent ownership are complex and replete with corner cases, especially in the face of network partitions. Currently, each component is building its own handcrafted solution, which is wasteful and, worse, can be erroneous.

In this project, we identify common state management concerns across ONAP components and provide a multi-site state coordination/management service (MUSIC) with a rich suite of recipes that each ONAP component can simply configure and use for their state-management needs.

1.16 sniro-emulator

SNIRO Emulator

Purpose

The purpose of SNIRO Emulator is to mock SNIRO homing functionality, which can be used to demo use cases.

It consists of an HTTP server that clients can connect to as they would to the actual web service.

Setup: the SNIRO emulator is built using a microservices architecture to support portability and evolution. It is based on the WireMock testing tool.

It is deployed as a dedicated Docker container, either using the OOM-based Kubernetes deployment or as an additional container inside the Robot VM using the Heat template deployment technique.

1.17 oof

About the ONAP Optimization Framework (OOF)

The OOF provides a policy-driven and model-driven framework for creating optimization applications for a broad range of use cases.

It is being developed based on the following core ideas:

Most optimization problems can be solved in a declarative manner using a high-level modeling language. Recent advances in open source optimization platforms allow the solution process to be mostly solver-independent.

By leveraging the library of standard/global constraints, optimization models can be rapidly developed.

By developing a focused set of platform components, we can realize a policy-driven, declarative system that allows ONAP optimization applications to be composed rapidly and managed easily:

Policy and data adapters

Execution and management environment

Curated “knowledge base” and recipes to provide information on typical optimization examples and how to use the OOF

More importantly, by providing a way to support both “traditional” optimization applications and model-driven applications, we can provide a choice for users to adapt the platform based on their business needs and skills/expertise.

Initial deliverables of OOF for Beijing Release (Q1 2018) are:

Homing and Allocation Service (HAS): OOF-HAS is a policy-driven placement optimizing service (or homing service) that allows ONAP to deploy services automatically across multiple sites and multiple clouds. More details are at the main page of Homing and Allocation Service (HAS)

ONAP Optimization Service Design Framework (OSDF) is a model- and policy-driven optimization framework that makes it easier to build, deploy, and manage optimization applications for different use cases. More details about the OOF-OSDF are at the main page of Optimization Service Design Framework

1.18 msb

MSB (Microservices Bus) is an ONAP project that provides key infrastructure functionalities to support the ONAP Microservice Architecture (OMSA), which includes:

Provides a RESTful API for service registration/discovery (a registration sketch follows this list)

Provides a Java SDK for service registration, discovery and inter-service communication

Provides a transparent service registration proxy with OOM-Kube2MSB

Provides a transparent service communication proxy which handles service discovery, load balancing, routing, failure handling, and visibility via the Internal API Gateway (current implementation) and a service mesh sidecar (WIP)

Provides an External API Gateway to expose ONAP services to the outside world

Provides the Swagger SDK, which helps to auto-generate language-specific clients for a given ONAP microservice and release them into the Nexus repository.
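The hedged sketch below registers a microservice endpoint with MSB so that it becomes discoverable and reachable through the gateway; the port and payload fields follow MSB's commonly documented registration API but should be verified against your release.

    # Hedged sketch: register a service endpoint with MSB. The discovery port
    # (10081) and payload fields are taken from memory of MSB documentation;
    # verify them before relying on this.
    import requests

    MSB_URL = "http://msb-discovery.onap.example:10081"   # hypothetical host

    service = {
        "serviceName": "demo-service",
        "version": "v1",
        "url": "/api/demo/v1",       # base path exposed through the API gateway
        "protocol": "REST",
        "visualRange": "1",          # visible inside ONAP only
        "nodes": [{"ip": "10.0.0.5", "port": "8080", "ttl": 0}],
    }

    resp = requests.post(f"{MSB_URL}/api/microservices/v1/services", json=service)
    resp.raise_for_status()
    print("registered:", resp.json())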

1.19 multicloud

Motivation

ONAP needs underlying virtualized infrastructure to deploy, run, and manage network services and VNFs. Service providers always look for flexibility and choice in selecting virtual and cloud infrastructure implementations, for example, on-premise private cloud, public cloud, or hybrid cloud implementations, and related network backends. ONAP needs to maintain platform backward compatibility with every new release.

Goal

The Multi-VIM/Cloud project aims to enable ONAP to deploy and run on multiple infrastructure environments, for example, OpenStack and its different distributions (e.g. vanilla OpenStack, Wind River, etc.), public and private clouds (e.g. VMware, Azure), microservice containers, etc. The Multi-VIM/Cloud project will provide a Cloud Mediation Layer supporting multiple infrastructures and network backends so as to effectively prevent vendor lock-in. The Multi-VIM/Cloud project decouples the evolution of the ONAP platform from the evolution of the underlying cloud infrastructure, and minimizes the impact on the deployed ONAP while the underlying cloud infrastructures are upgraded independently.

1.20 nbi

NBI stands for NorthBound Interface. It brings to ONAP a set of APIs that can be used by external systems such as a BSS, for example. These APIs are based on TMF APIs.
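As an illustration, a BSS-style client could create a service order through NBI's TMF-based serviceOrder API as sketched below; the base path, API version and payload are assumptions modelled on the TMF Service Ordering API and should be checked against your NBI release.

    # Hedged sketch: create a service order through NBI (TMF-style API).
    # Base path, version and payload fields are assumptions to verify.
    import requests

    NBI_URL = "http://nbi.onap.example:8080/nbi/api/v4"   # version is an assumption

    order = {
        "externalId": "BSS-order-001",
        "orderItem": [{
            "id": "1",
            "action": "add",
            "service": {
                "name": "my-service-instance",
                # the serviceSpecification id would reference a model distributed by SDC
                "serviceSpecification": {"id": "sdc-service-model-uuid"},
            },
        }],
    }

    resp = requests.post(f"{NBI_URL}/serviceOrder", json=order)
    resp.raise_for_status()
    print("order id:", resp.json().get("id"))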

1.21 policy

Policy Framework

The Policy subsystem of ONAP maintains, distributes, and operates on the set of rules that underlie ONAP’s control, orchestration, and management functions. Policy provides a centralized environment for the creation and management of easily-updatable conditional rules. It enables users to validate policies and rules, identify and resolve overlaps and conflicts, and derive additional policies where needed. Policies can support infrastructure, products and services, operation automation, and security. Users, who can be a variety of stakeholders such as network and service designers, operations engineers, and security experts, can easily create, change, and manage policy rules from the Policy Manager in the ONAP Portal.

1.22 pomba

POMBA (Post Orchestration Model Based Audit) is an event-driven auditing platform that tests data integrity across the NFV orchestration environment and NFV infrastructure using a model-driven approach. In short, POMBA provides an ONAP audit and reports on any data integrity issues found. See also Introduction to POMBA.

1.23 portal

Proposed name for the project: ONAP Portal Platform

Proposed name for the repository: “portal”, “ecompsdkos”

Project description:

The ONAP Portal is a platform that provides the ability to integrate different ONAP applications into a centralized Portal Core.

The intention is to allow decentralized applications to run within their own infrastructure while providing common management services and connectivity.

The Portal core provides capabilities including application onboarding & management, centralized access management, and hosted application widgets.

Using the provided SDK, application developers can leverage the built-in capabilities (Services / API / UI controls) along with bundled tools and technologies.

1.24 robot

Step 1: Run the init_robot test to enable remote access to Robot through a web browser

root@oom-rancher:~# ./oom/kubernetes/robot/demo-k8s.sh onap init_robot

Step 2: Find the mapping port:

root@oom-rancher:~# kubectl -n onap get service | grep robot
robot   NodePort   10.43.136.36   <none>   88:30209/TCP   7h

Step 3: Access Robot from a browser using any Kubernetes cluster node IP plus the mapped port found above (30209 in this example).

1.25 sdc

Service Design & Creation (5/17/17)

Project Name:

Proposed name for the project: SDC

Proposed name for the repository: sdc, sdc/sdc-distribution-client, SDC, SDC/SDC-Destribution-Client, TBD

Project description: Provides a well-structured organization of visual design & testing tools, templates and catalogs to model and create resources and services. The output of SDC is a set of models which drive the orchestration. SDC in ONAP WIKI

1.26 sdnc

Proposed name for the project: SDN Controller (SDN-C)

Proposed name for the repository: sdnc, SDNC

Project description: The SDN-C project provides a global network controller, built on the Common Controller Framework, which manages, assigns and provisions network resources. As a “global” controller, the SDN-C project is intended to run as one logical instance per enterprise, with potentially multiple geographically diverse virtual machines / docker containers in clusters to provide high availability. The project also will support the ability to invoke other local SDN controllers, including third party SDN controllers.

1.27 so

Service Orchestrator (5/14/17)

Project Name:

Proposed name for the project: SO

Proposed name for the repository: so

Project description: The SO provides the highest level of service orchestration in the ONAP architecture. Currently SO is implemented via BPMN flows that operate on Models distributed from SDC that describe the Services and associated VNFs and other Resource components. Cloud orchestration is currently based on HEAT templates.

In order to support Use Cases 1 and 2 in such a way to promote re-usability within ONAP, the goal of this project is to enhance ONAP’s overall orchestration capabilities by aligning and integrating its imperative workflows with a TOSCA-based declarative execution environment.

Service Orchestrator in ONAP WIKI

1.28 uui

Usecase UI Project Proposal (6/5/17)


Project Name:

Proposed name for the project: Usecase UI

Proposed name for the repository: usecase-ui

Project description:

Usecase UI is the ONAP subsystem that provides a Graphical User Interface (GUI) for operators and end-users from the point of view of use cases. As a whole, the Usecase UI requirements include not only service design and run-time management (resource, performance, fault, security, configuration, etc.) for operators, but also self-service management for end-users. This project targets identifying all GUI requirements that operators and end-users need ONAP to support, coordinating the GUI parts of each ONAP subsystem, and filling the gaps to improve GUI functionality for use cases. All GUI functionalities of the ONAP system can then be presented to satisfy the requirements of different customers.

1.29 vfc

VF-C: Virtual Function Controller

As part of the integration between OpenECOMP and OPEN-O, the proposed VF-C project leverages the ETSI NFV MANO architecture and information model as a reference, and implements full life cycle management and FCAPS of VNFs and NSs.

Support NS and VNF lifecycle management based on the ONAP TOSCA and YANG data models and workflows

Support integration with multiple VNFMs via drivers, including vendor VNFMs and a generic VNFM

Support integration with multiple VNFs via the generic VNFM, for VNFs that do not provide VNFM functions

Support integration with multiple VIMs via Multi-VIM, including open-source and commercial VIMs

Support microservice architecture and model-driven resource orchestration and management

1.30 vid

Virtual Infrastructure Deployment Project

Provides a well-structured organization of infrastructure deployment, instantiation and change-management operations used by Operations to derive orchestrations and change management.

1.31 vnfsdk

Proposed name for the project: VNF SDK & tooling

Proposed name for the repository: vnfsdk

Project description:

VNF onboarding is a challenge across the industry because of the lack of a standard format for VNFs.

This project will build an ecosystem for ONAP-compatible VNFs by:

developing tools for vendor CI/CD toolchains

developing validation and testing tools (a conceptual check is sketched below)
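To illustrate what a validation tool checks, here is a conceptual sketch of a minimal structural sanity check for a VNF CSAR package (a zip archive expected to contain TOSCA-Metadata/TOSCA.meta); it is an illustration only, not the actual VNF SDK tooling.

    # Conceptual sketch: a tiny structural check for a VNF CSAR package.
    # Not the real VNF SDK validation tools; for illustration only.
    import zipfile

    def basic_csar_check(path: str) -> list:
        """Return a list of problems found; an empty list means the checks pass."""
        problems = []
        try:
            with zipfile.ZipFile(path) as csar:
                names = csar.namelist()
                if "TOSCA-Metadata/TOSCA.meta" not in names:
                    problems.append("missing TOSCA-Metadata/TOSCA.meta")
                if not any(n.endswith((".yaml", ".yml")) for n in names):
                    problems.append("no TOSCA YAML templates found")
        except zipfile.BadZipFile:
            problems.append("file is not a valid zip/CSAR archive")
        return problems

    print(basic_csar_check("my_vnf.csar"))   # e.g. [] when the package looks sane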