"
Aarna.ml

Resources

resources

Blog

Rajendra Mishra

AMCOP Deployment using Kubernetes Operator

The Aarna.ml Multi-Cluster Orchestration Platform (AMCOP) is an open-source orchestration, lifecycle management, and closed-loop automation platform for 5G network services and edge computing applications.

AMCOP consists of:

●   Linux Foundation Edge Multi-Cluster Orchestrator (EMCO) for intent-based orchestration and Day-0 configuration

●   Linux Foundation Open Network Automation Platform (ONAP) components such as CDS for Day 1 & 2 configuration and LCM

●   Select ONAP components for analytics and closed-loop automation

●   Numerous CNCF and related projects (Istio, Prometheus, FluentD, Jaeger…)

●   Proprietary value-adds such as 5G network slicing, the O-RAN Non-RT RIC, and NWDAF

In this blog, we will talk specifically about the deployment of AMCOP. AMCOP is a fully cloud-native application that can be deployed in a Kubernetes (K8s) environment. In general, the various deployment options for a cloud-native application are the following (illustrative commands for each are shown after Fig 1):

●        Using the kubectl command line with YAML manifests

●        Using Ansible scripts that automate these deployments

●        Using Helm charts

●        Using Operators and Custom Resources

Fig 1 : Various Deployment Options of a Cloud Application
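For illustration, the first three options boil down to one-line commands along the following lines (the file and chart names here are placeholders, not AMCOP artifacts):

# Plain YAML manifests applied directly
kubectl apply -f app.yaml

# Deployment automated through an Ansible playbook
ansible-playbook deploy-app.yml

# Packaged deployment via a Helm chart
helm install my-app ./my-app-chart

The Operator-based option, covered in the rest of this blog, instead applies a Custom Resource that a purpose-built controller reconciles on your behalf.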

AMCOP versions 2.0 and earlier were deployed using Helm charts and Ansible scripts.

The challenges with earlier methods of cloud deployment of AMCOP were:

●        The scheme was static and not capable of dealing with complex setups.

●        Life Cycle Management (LCM) with Helm charts and Ansible was not fully automated and required manual intervention.

●        It was not possible to store the state of the deployment for later retrieval or restoration.

●        The upgrade process was difficult and manual.

With AMCOP 2.1, we have switched to a Kubernetes Operator. What is a Kubernetes Operator?

Fig 2 : Explaining Operators

A Kubernetes Operator is a method of packaging, deploying, and managing a Kubernetes application. An Operator has a Controller and Custom Resources. It provides a single package that one can deploy to the K8s cluster, and that package takes care of the entire LCM of the application being deployed. An Operator thus creates, configures, and manages instances of complex applications. Its Controller tracks the current state and the details of the desired state; it monitors the current state and ensures that the application is always in the desired state. It can manage StatefulSets, ConfigMaps, Services, and the individual Pods. Once deployed, the Operator continues to monitor the application. The Operator can be automated to back up data and, if there are failures, to handle the recovery process. It can also auto-upgrade the application when a newer version is available. The Operator does not require full security privileges, as it uses a limited set of resources and shared resources.
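As a minimal sketch of the pattern (the resource kind and fields below are hypothetical and generic, not the AMCOP API), a Custom Resource declares the desired state and the Operator's Controller reconciles the cluster towards it:

# Hypothetical Custom Resource: the spec captures the desired state
apiVersion: example.com/v1alpha1
kind: MyApp
metadata:
  name: demo
spec:
  replicas: 3        # the controller keeps three Pods of the app running
  version: "2.1.0"   # the controller upgrades the app when this value changes

The Controller watches resources of this kind, compares the observed state (running Pods, deployed version, backups taken) against the spec, and takes corrective action whenever the two diverge.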

Advantages of deployment of cloud-native applications using Operators are:

●        Integration with various solutions

●        Helm is supported

●        Can be deployed on OpenShift clusters

●        Ansible Playbooks can be integrated to perform various tasks

●        Reduces engineering cost and makes maintenance much easier

Operators are purpose-built rather than generic; they are not designed to work with any arbitrary application. While building the Operator for a specific application, one needs to understand how the application works, what services it creates, and so on. The Operator is thus tailored to the specific needs of the application, and complex automation can be achieved with it.

Specifically, the features of the AMCOP Operator are:

●       Kubernetes Controller built using Operator SDK

●       Built-in logic to address Life Cycle Management requirements of AMCOP

●       Unified handling of Life Cycle Management for AMCOP

●       Functionalities of Helm, Juju Charms, Ansible, and Operator LCM

The AMCOP Operator has a container image, which contains all the logic for deploying AMCOP. To initiate the process, a YAML file is provided for deploying the Operator on the cluster. Tags are used for downloading the various images, and there are YAML files for the EMCO microservices. Once the Operator is deployed, it can be enabled to allocate all the resources, create the necessary service account, trigger the deployment, and finally bring up the K8s Pods.
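Concretely, rolling out the Operator is a single kubectl apply of the published manifest; the exact command and URL, reproduced here from the operator-based installation post below, are:

kubectl apply -f https://aarna-networks.gitlab.io/amcop-deployment/amcop-k8s-operator/v2.1.0/operator.yaml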

To see the deployment process please follow the steps in the video.

Try out the latest release of AMCOP for a free trial. For further questions please contact us.

Aarna

Network Slicing: NSSMF Architecture

Introduction

5G Network Slicing is a new concept that allows differentiated treatment of network traffic depending on each customer's or use case's requirements. With slicing, it is now possible for Mobile Network Operators (MNOs) to treat customers or use cases as different context types, each with different service requirements. These requirements are governed in terms of which slice types each context is eligible to use, based on Service Level Agreements and subscriptions. To orchestrate and manage 5G network slicing, the Linux Foundation Open Network Automation Platform (ONAP) project has come up with an E2E slice management architecture offering different choices based on various 3GPP-defined components with specific roles (e.g., CSMF, NSMF, NSSMF). In this blog, we shall uncover the architecture of the NSSMF and its interaction with other components.

Keywords

3GPP - 3rd Generation Partnership Project

BSS - Business Support System

E2E - End to End

CSMF - Communication Service Management Function

NSMF - Network Slice Management Function

NSSMF - Network Slice Subnet Management Function

NSI - Network Slice Instance

NSSI - Network Slice Subnet Instance

ONAP - Open Network Automation Platform

OSS - Operations Support System

SO - Service Orchestrator

5G ONAP Network Slicing

In ONAP, options 1 & 4 from the various slice management architecture choices are supported — see Figure-1.

Figure-1: Slice Management Architecture Choice

Essentially, the choices have evolved from the requirements of service providers and open source community contributors. In Architecture Choice#1, CSMF, NSMF, and NSSMF are internal components of ONAP in the reference design and implementation. In this option, OSS/BSS can integrate with CSMF, hosted in the runtime component SO, for business and operational needs. In Choice#4, CSMF and NSMF are internal components of ONAP hosted in the runtime component called SO. In Figure-1, CSMF takes the business requirements from OSS/BSS and transforms communication service requirements into network slice requirements, which are consumed by NSMF. NSMF is responsible for the management and orchestration of the NSI and derives the network slice subnet requirements. In Choice#4, NSSMF is an external component and is responsible for the management and orchestration of the NSSI. ONAP has implemented the standard APIs defined by 3GPP towards NSSMF, which are listed below in Table-1.

Table-1: NBI Interface exposed from NSSMF towards NSMF

NSSMF Architecture

As we uncover architecture Choice#4, NSSMF communicates with the ONAP NSMF component, and that integration is done using the standard REST API interface per Table-1 above. Our NSSMF architecture, shown below in Figure-2, consists of the following elements:

  • NBI Interface: This is the reception point for all the incoming requests from NSMF, using the REST APIs detailed in Table-1 (an illustrative request is sketched after Figure-2).
  • Model Validator: This component validates the incoming NSMF requests against predefined models for the various lifecycle events of a slice.
  • NSS Manager: The Network Slice Subnet manager decides whether the incoming request is processed in asynchronous or synchronous flow mode.
  • Egress Manager/Ingress Manager: NSSMF is defined with two internal roles, the first being a forwarder and the second a receiver. This helps in a distributed model for processing slice events.
  • CnCfgManager: The core network configuration manager processes the slice events received from the Ingress Manager for the event types AllocateNSSI, ActivateNSSI, DeactivateNSSI, and DeAllocateNSSI. It was built with a NETCONF client and a RESTCONF interface to cater to the deployment of network functions (NFs) as PNFs and CNFs respectively.

Figure-2: NSSMF as external to ONAP (Choice#4)
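To give a feel for the information carried over the NBI, the sketch below shows the shape of an AllocateNSSI-style request. It is purely illustrative: the endpoint, field names, and values are placeholders, and the normative operations and attributes are those defined by the 3GPP management interfaces listed in Table-1.

# Illustrative only: the endpoint path and payload fields are placeholders
curl -X POST "http://<nssmf-host>:<port>/<allocate-nssi-endpoint>" \
  -H "Content-Type: application/json" \
  -d '{
        "sliceProfile": {
          "snssaiList": ["01-000001"],
          "maxNumberofUEs": 1000,
          "latency": 20,
          "coverageAreaTAList": ["12345"]
        }
      }'

On receipt, the NBI Interface hands the request to the Model Validator and NSS Manager described above, and the CnCfgManager eventually translates it into NETCONF or RESTCONF operations towards the network functions.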

 

We have done a webinar deep dive on the NSSMF architecture, in which we discussed how the slice event lifecycle is processed by NSSMF and demonstrated provisioning of commercial 5GC network functions. You can refer to the video for details. Also, if you want to try out this NSSMF in your setup, reach out to us at info@aarna.ml


Aarna


The Aarna.ml Multi-Cluster Orchestration Platform (AMCOP) performs orchestration, lifecycle management, and analytics/closed-loop automation for cloud-native 5G network services and edge computing applications. The last part of this functionality is provided by a component called the Aarna Analytics Platform (AAP). The AAP uses open source software from ONAP: DCAE (Data Collection, Analytics and Events) and DMaaP (Data Movement as a Platform), which is based on Kafka. The Aarna Analytics Platform includes both the 3GPP NWDAF and the O-RAN Non-Real-Time Radio Intelligent Controller (Non-RT RIC) functionality.

NWDAF, or the Network Data Analytics Function, provides analytics to 5G Core Network Functions (NFs) and the OAM platform. NWDAF exposes 3GPP-defined interfaces through which NFs can request analytics information. There can be multiple instances of NWDAF deployed in the 5G Core; each NWDAF instance is identified by its NF-ID and Analytics ID. Like other NFs, NWDAF registers itself with the NRF (Network Repository Function) of the 5G Core, enabling other NFs to reach it.
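As an illustration of that registration step, the Nnrf_NFManagement service (3GPP TS 29.510) lets an NF register its profile with the NRF via an HTTP PUT; the sketch below is simplified, and the host and instance ID are placeholders:

# Simplified, illustrative registration of an NWDAF instance with the NRF
curl -X PUT "http://<nrf-host>/nnrf-nfm/v1/nf-instances/3fa85f64-5717-4562-b3fc-2c963f66afa6" \
  -H "Content-Type: application/json" \
  -d '{
        "nfInstanceId": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
        "nfType": "NWDAF",
        "nfStatus": "REGISTERED"
      }'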

Figure 1 - NWDAF Architecture

NWDAF provides both statistics and predictions. For predictions, it can make use of ML models embedded into the NWDAF; these models can address specific 5G use cases.

The Non-RT RIC is a component of the Service Management and Orchestration (SMO) framework specified by the O-RAN Alliance. It enhances the functionality of the RAN by communicating with the Near Real-Time RIC (Near-RT RIC), which resides in the edge/remote cluster. The Non-RT RIC provides A1 policies and enrichment information for the RAN to the Near-RT RIC over the A1 interface. The Non-RT RIC is an extensible platform and includes modular applications called rApps, which can be added based on specific use cases. Along with rApps, the Non-RT RIC contains the A1 Policy function, A1 Enrichment function, and A1 Termination function. Using rapid closed-loop automation, the Non-RT RIC can also enable additional advanced SON use cases, and it includes AI/ML functions.
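To make the A1 interaction concrete, the sketch below shows the general shape of an A1 policy being created on a Near-RT RIC. It is illustrative only: the resource path depends on the A1-P interface version, and the policy type ID, policy ID, and policy body are placeholders.

# Illustrative only: policy type 20000, policy 5000, and the body are placeholders
curl -X PUT "http://<near-rt-ric>/A1-P/v2/policytypes/20000/policies/5000" \
  -H "Content-Type: application/json" \
  -d '{"scope": {"ueId": "ue-1234"}, "qosObjectives": {"priorityLevel": 10}}'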

Figure 2 - Non-RT RIC Architecture

Thus, AMCOP can orchestrate NWDAF in the 5G Core, and it also provides the SMO functionality for O-RAN, which includes the Non-RT RIC. AMCOP utilizes ML models but, as mentioned above, is not capable of training these models or managing their lifecycle. To address this gap, Aarna.ml has partnered with Predera. AMCOP along with Predera's MLOps AIQ platform serves end-to-end use cases of NWDAF and the Non-RT RIC.

AMCOP provides raw training data to Predera's MLOps AIQ platform by sending telemetry data (metrics, logs, alarms, events) to an external data lake. AIQ uses this raw data to train ML models, which are stored in a catalog such as the LF AI Acumos project. AMCOP then picks up these models and uses them for use cases such as the Non-RT RIC, NWDAF, closed-loop automation, and more.

To learn more about the Aarna.ml AMCOP, Non-RT RIC, or NWDAF offerings, please contact us. Alternatively, if you are an rApp vendor that wants to collaborate with us, please do not hesitate to contact us.

To learn more about the AIQ platform, please contact Predera.

Figure 3 - End-to-end NWDAF Implementation

Figure 4 - End-to-end Non RT RIC Implementation

Aarna

Operator Based Installation for AMCOP 2.1

This blog focuses on the installation of AMCOP Release 2.1 on any existing Kubernetes deployment using the AMCOP Operator. See the demo here.

Pre-requisites:

A Kubernetes cluster, which will host the AMCOP deployment. AMCOP is validated to work with a minimal single-node all-in-one cluster with the following hardware configuration:

  • 8 vCPUs, 32 GB RAM and 80 GB SSD

Storage Class:

AMCOP uses persistent volumes to manage stateful information. These persistent volumes must be provided by a default storage class configured with a persistent volume provisioner; more details are available in the Kubernetes storage class documentation.

Check if there is already a default storage class available in your Kubernetes cluster:

kubectl get storageclass

If you're using a public cloud, chances are you already have a default storage class.

Otherwise, you can proceed with the following to create your own storage class with a persistent volume provisioner, where persistent volumes will be backed by local SSD:

kubectl apply -f https://aarna-networks.gitlab.io/amcop-deployment/amcop-k8s-operator/storage.yaml
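For reference, a default storage class is simply a StorageClass carrying the is-default-class annotation. The generic sketch below shows the shape of such a manifest; it is not necessarily the content of the storage.yaml above, and the provisioner is a placeholder:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this class as the cluster default
provisioner: <your-provisioner>   # e.g. a local-volume or cloud provisioner
volumeBindingMode: WaitForFirstConsumer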

AMCOP Operator:

Once the availability of a storage class is ensured, we will roll out the AMCOP Operator itself:

kubectl apply -f https://aarna-networks.gitlab.io/amcop-deployment/amcop-k8s-operator/v2.1.0/operator.yaml

This will create the AMCOP Operator deployment along with the necessary RBAC and CRD constructs. The AMCOP Operator deployment carries the intelligence about the components required for AMCOP Release 2.1, the dependencies between them, and the ordered roll-out of these components.

Additionally, it introduces the Custom Resource Definition (CRD) for defining properties of the AMCOP deployment, such as:

  • Enabling or disabling certain components
  • Choosing specific storage class instead of the default

AMCOP Custom Resource:

Once the AMCOP Operator deployment is available, you can use a Custom Resource of type Installer to customize the AMCOP deployment:

apiVersion: amcop.aarnanetworks.com/v1alpha1
kind: Installer
metadata:
  name: default
spec:
  db:
    persistent:
      storageClass: ""
  debug: enable
  cds: enable
In most cases, you will not need to change these parameters and can safely create the Custom Resource using the following:

kubectl apply -f https://aarna-networks.gitlab.io/amcop-deployment/amcop-k8s-operator/v2.1.0/default.yaml

Upon creation of this Custom Resource, the AMCOP Operator starts deploying the various AMCOP components in a staged manner. Progress can be monitored by watching the Kubernetes pods in the amcop-system namespace or by watching the status of the Custom Resource itself.

kubectl get installer.amcop
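For example, to follow the pods as they come up (standard kubectl usage, assuming the amcop-system namespace mentioned above):

kubectl get pods -n amcop-system -w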

Over time, when all the components of AMCOP have been rolled out successfully, the status of the deployment will change from “RollingOut” to “Deployed”. Beyond this point, you can start using AMCOP.

See a recording of the technical meetup that covered this topic in more depth along with a hands-on demo. If you would like to replicate any of this work in your environment, please contact us.

Interested in an AMCOP presentation and demo meeting? Let us know at info@aarna.ml and we can schedule it.

Aarna


This blog describes a 5G network slicing demo we did with ONAP as the network slice management component and Capgemini Engineering 5G Core along with the Kaloom UPF.

In an end-to-end 5G network, there are typically three components that come into play:

  • Radio Access Network
  • Transport Network
  • Core Network

These same three components act together and participate in forming an End-to-End 5G network slice. In addition, a slice management component is required, and the Linux Foundation Open Network Automation Platform (ONAP) is one comprehensive solution for this. To learn more, see our prior blog on End-to-End 5G network slicing with ONAP. See a conceptual diagram of end-to-end 5G network slicing below.

Figure 1: End-to-end 5G Network Slicing Concept

Next, the diagram below shows a high-level 3rd Generation Partnership Project (3GPP) view of how a network slice looks and lists the different components:

Figure 2: End-to-End 5G Network Slicing Management Components

3GPP has the notion of a Communication Service Management Function (CSMF), through which the BSS layer can order a slice with its associated characteristics. Southbound of the CSMF is the Network Slice Management Function (NSMF), which then talks to domain-specific Network Slice Subnet Management Functions (NSSMFs).

ONAP has been continuously updating its slice management functionality; currently, Options 1 and 4 below are supported. We at Aarna.ml recently demonstrated Option#4 for 5G Core slicing, using an external NSSMF.

Figure 3: ONAP Network Slicing Implementation Options

Helicopter View of ONAP Slice Management Functionality:

There is a design-time and runtime delineation in ONAP; hence, with respect to slicing, we also have to design certain models in ONAP which act as the design-time aspect of network slicing. The design part is hosted in a component known as SDC (Service Design & Creation). Those models are then distributed to the runtime, which consists of the CSMF, NSMF, and NSSMF functions. See the diagram below.

Figure 4: ONAP Design & Run Time

Process flow for 5G Network Slicing with ONAP:

  • User orders a slice using CSMF
  • ONAP NSMF processes the request and identifies the correct slice template
  • The slice allocation request is submitted to internal or external NSSMFs; in this demo we use an external NSSMF for the 5G Core
  • The external core NSSMF calls the Capgemini Engineering 5GC REST APIs to configure the slice parameters (an illustrative fragment is shown below Figure 5)
  • The 5G Core components configure themselves and set the slice values

See the flow below.

Figure 5: ONAP with Capgemini Engineering 5G Core and Kaloom UPF
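For context, the slice parameters configured in the last two steps above ultimately revolve around an S-NSSAI, i.e. a Slice/Service Type (SST) plus an optional Slice Differentiator (SD), together with profile values. The fragment below is purely illustrative; the field names are placeholders and do not reflect the Capgemini Engineering API:

{
  "sNssai": { "sst": 1, "sd": "000001" },
  "maxNumberOfUEs": 500,
  "dnn": "internet"
}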

To see a thorough explanation of these concepts along with a hands-on demo, view the recording of our 5G Core network slicing technical meetup with Capgemini Engineering 5GC and Kaloom UPF (45 minutes at 1x speed). If you don't have the time, you can watch just the demo portion of the meetup (20 minutes at 1x speed).

A surprisingly large number of companies want to try ONAP network slicing in their labs. If you are one of these companies and need some help, feel free to contact us.

Sriram Rupanagunta

Announcing Aarna.ml Multi Cluster Orchestration Platform (AMCOP) Release 2.1

I am pleased to announce version 2.1 of the Aarna.ml Multi Cluster Orchestration Platform (AMCOP). As a quick recap, AMCOP addresses the orchestration, life-cycle management, and automation of cloud-native B2B 5G network services and edge computing applications. We support only the Kubernetes NFVI, and we focus on B2B applications, e.g., Industry 4.0, healthcare, precision agriculture, and more.

AMCOP 2.1 Free Trial here.

AMCOP supports intent-based orchestration of 5G network services and composite edge computing applications, manages their lifecycle, and enables the creation of policy-driven closed-loop automation. In the latest version 2.1, AMCOP adds support for editing/modifying Network Services that have already been deployed, comprehensive O-RAN SMO functionality, and early-access versions of the network slicing manager, the Non-Real-Time RIC (Non-RT RIC), and the Network Data Analytics Function (NWDAF)/Management Data Analytics Function (MDAF).

AMCOP is a fully cloud-native application that can run on a variety of Kubernetes clusters (open source K8s, LFN Anuket, as well as all the public cloud variants: GKE, AKS, and EKS). It has been integrated with Prometheus, Istio, and Keycloak, and will support Jaeger integration soon. AMCOP is installed using the AMCOP Operator, which manages the complete life-cycle of AMCOP and enables seamless upgrades to future versions. Support for Red Hat OpenShift is coming shortly!

See the AMCOP product page for more information. The product is open source (based on LF Networking and CNCF projects), so you can try it out for free as well.

Also, check out AMCOP demos:

  • AMCOP demo of Altran/Kaloom 5G Core orchestration on Red Hat OpenShift (keynote or booth)
  • AMCOP demo of Free5GC (5G Core) orchestration on K8s

Alternatively, check out the demos on our YouTube channel.

Finally, check out our white papers. They are purely educational and 100% product-free.

Interested in an AMCOP presentation and demo meeting? Let us know at info@aarna.ml and we can schedule it.