
Aarna

In this part 2 of a 3-part series, the author has introduced the non-RT RIC - Data Driven RAN Optimizer.

In this part 2 of a 3-part series (part 1 here), I am going to introduce the non-RT RIC. Before jumping into its capabilities, let's look at the limitations of current RAN infrastructures and at future needs.

Rapid traffic growth and the multiple frequency bands utilized in a commercial network make it challenging to steer traffic in a balanced distribution.

Current controls are limited to:

  1. Adjusting cell re-selection and handover parameters
  2. Modifying load calculations and cell priorities
  3. RRM (Radio Resource Management) features in existing cellular networks, which are all cell-centric
  4. Traditional base-station control strategies, which treat all UEs in a similar way and focus on average cell-centric performance rather than being UE-centric

The current semi-static QoS framework does not efficiently satisfy diversified quality of experience (QoE) requirements such as those of cloud VR or video applications. It is limited to looking at previous movement patterns to reduce the number of handovers.

The non-RT RIC, which is part of the O-RAN Service Management & Orchestration (SMO) layer, addresses these limitations and supports various use cases. It allows operators to flexibly configure the desired optimization policies, utilize the right performance criteria, and leverage machine learning to enable intelligent and proactive traffic control.

The following figure shows a functional view of the non-RT RIC platform.

As described in the O-RAN architecture, the non-RT RIC communicates with the near-RT RIC element via the A1 interface. Over the A1 interface, the non-RT RIC performs actions such as:

  1. Policy Management: facilitate the provisioning of policies for individual UEs or groups of UEs
  2. Monitoring: monitor and provide basic feedback on the policy state from near-RT RICs
  3. Enrichment Information: provide enrichment information as required by near-RT RICs
  4. AI/ML updates: facilitate ML model training, distribution, and inference in cooperation with the near-RT RICs
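The first two of these A1 actions can be illustrated with a minimal sketch. This is a hypothetical, in-memory stand-in for the near-RT RIC, not the real A1-P REST interface; the policy field names below are illustrative only.

```python
# Minimal sketch of the A1 policy-management flow. A real non-RT RIC would
# talk to the near-RT RIC over the O-RAN A1-P REST interface instead.
class NearRtRicStub:
    """Stands in for a near-RT RIC that stores policies and reports status."""
    def __init__(self):
        self.policies = {}

    def put_policy(self, policy_id, body):
        # Policy Management: create or update a policy instance
        self.policies[policy_id] = {"body": body, "status": "ENFORCED"}

    def get_policy_status(self, policy_id):
        # Monitoring: basic feedback on the policy state
        return self.policies[policy_id]["status"]

ric = NearRtRicStub()

# Provision a traffic-steering policy for a group of UEs.
ric.put_policy("ts-001", {
    "scope": {"ueGroup": "group-7"},
    "statement": {"preference": "AVOID", "cellIdList": ["cell-12"]},
})

# Read back the policy state from the near-RT RIC.
print(ric.get_policy_status("ts-001"))  # ENFORCED
```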

Data Collection

The non-RT RIC platform within the SMO collects FCAPS (fault, configuration, accounting, performance, security) data related to RAN elements over the O1 interface. Examples of collected items:

  1. Events
  2. Performance data
  3. Cell load statistics
  4. UE-level radio information, connection, and mobility/handover statistics
  5. Various KPI metrics from CU/DU/RU nodes

Derived Analytics Data

The collected data is used to derive analytics data, including measurement counters and KPIs aggregated by cell, QoS type, slice, etc. Examples of derived analytics data:

  1. Measurement reports with RSRP/RSRQ/CQI information for serving and neighboring cells
  2. UE connection and mobility/handover statistics with indication of successful and failed handovers
  3. Cell load statistics such as information in the form of number of active users or connections, number of scheduled active users, utilization of PRB and CCE
  4. Per user performance statistics such as PDCP throughput, RLC or MAC layer latency
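The derivation step above can be sketched in a few lines. This is a hypothetical example; the counter names and values are illustrative, not taken from an O-RAN schema.

```python
# Hypothetical sketch: deriving per-cell analytics (handover success rate,
# PRB utilization) from raw counters collected over O1.
raw_counters = [
    {"cell": "cell-1", "ho_attempts": 200, "ho_success": 188, "prb_used": 70, "prb_total": 100},
    {"cell": "cell-2", "ho_attempts": 50,  "ho_success": 49,  "prb_used": 20, "prb_total": 100},
]

def derive_kpis(counters):
    """Aggregate raw counters into per-cell KPIs."""
    kpis = {}
    for c in counters:
        kpis[c["cell"]] = {
            "ho_success_rate": c["ho_success"] / c["ho_attempts"],
            "prb_utilization": c["prb_used"] / c["prb_total"],
        }
    return kpis

kpis = derive_kpis(raw_counters)
print(kpis["cell-1"]["ho_success_rate"])  # 0.94
```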

The non-RT RIC brings data-driven operations to various RAN optimizations. AI-enabled policies and ML-based models in the non-RT RIC generate messages that are conveyed to the near-RT RIC. The core algorithms of the non-RT RIC are owned and deployed by network operators.

These algorithms provide the capability to modify RAN behavior through the deployment of different models optimized for individual operator policies and optimization objectives.

Following are a few such examples and applicable scenarios:

  1. Adapt RRM control for diversified scenarios and optimization objectives.
  2. Capabilities to predict network and UE performance.
  3. Optimal traffic management with improved response times by selecting the right set of UEs for control action. This results in an optimal system and UE performance with increased throughput and reduced handover failures.
  4. The O1/EI data collection is used for offline training of models, as well as for generating A1 policies for V2X handover optimization like handover prediction and detection.
  5. The non-RT RIC can influence how RAN resources are allocated to different users through a QoS target statement in an A1 policy.
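As an illustration of item 5, here is a sketch of an A1 policy carrying a QoS target for a UE. The structure and field names are a simplified assumption, not the exact A1 policy-type schema.

```python
import json

# Illustrative A1 policy with a QoS target statement for one UE.
# Field names (scope, qosObjectives, gfbr, etc.) are assumptions for the sketch.
policy = {
    "scope": {"ueId": "ue-350", "qosId": "qci-7"},
    "qosObjectives": {
        "gfbr": 5000,        # target guaranteed flow bit rate (kbps)
        "priorityLevel": 2,  # relative scheduling priority
    },
}

# Serialize as it might be sent over the A1 interface.
print(json.dumps(policy, sort_keys=True))
```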

In the next post we will look at use cases where the non-RT RIC can play a pivotal role in optimizing RAN resources.

Bhanu Chandra

This blog explains the architecture of O-RAN.

In this three-part blog series, I am going to talk about the O-RAN architecture with a little extra emphasis on the Service Management & Orchestration component, the NONRTRIC, and finally some NONRTRIC use cases. Today we'll cover the first topic.

The O-RAN specification allows service providers to speed up 5G network development through its standardized architecture. It minimizes proprietary hardware dependency and makes the network more accessible to a broader range of designers.

O-RAN Architecture

The O-RAN alliance architecture principles are:

  • Radio access network (RAN) virtualization, open interfaces, and AI-capable RAN
  • Disaggregation of RAN element deployment and leveraging multi vendor solutions
  • Minimization of proprietary hardware and a push toward merchant silicon and off-the-shelf hardware
  • Interfaces and APIs driven by standards, adopting them where appropriate, and exploring open source where appropriate

O-RAN Components

  • Service Management & Orchestration (SMO): The component that oversees all orchestration aspects and the management and automation of RAN elements. It supports the O1, A1, and O2 interfaces.
  • Non-RT RIC (non-real-time RAN Intelligent Controller): A logical function that enables non-real-time control and optimization of RAN elements and resources, AI/ML workflow including model training and updates, and policy-based guidance of applications/features in near-RT RIC.
  • Near-RT RIC (near-real-time RAN Intelligent Controller): A logical function that enables near-real-time control and optimization of O-RAN elements and resources via fine-grained data collection and actions over the E2 interface. It includes interpretation and enforcement of policies from the non-RT RIC and supports enrichment information to optimize control functions.
  • O-CU (O-RAN Central Unit): A logical node hosting RRC, SDAP and PDCP protocols. O-CU includes two sub-components O-CU-CP (O-RAN Central Unit – Control Plane) and O-CU-UP (O-RAN Central Unit – User Plane).
  • O-DU (O-RAN Distributed Unit): A logical node hosting RLC/MAC/High-PHY layers based on a lower layer functional split.
  • O-RU (O-RAN Radio Unit): A logical node hosting Low-PHY layer and RF processing based on a lower layer functional split.

O-RAN SMO Interfaces

The key O-RAN SMO interfaces are:

  • O1: Interface between management entities in the Service Management and Orchestration Framework and O-RAN managed elements, for operation and management, through which FCAPS management, software management, and file management are achieved.
  • O2 (previously O1*): Interface between the Service Management and Orchestration Framework and the Infrastructure Management Framework supporting O-RAN virtual network functions.
  • A1: Interface between the non-RT RIC and the near-RT RIC. Over this interface, the non-RT RIC performs policy management, provides enrichment information, and delivers AI/ML model updates to the near-RT RIC.

In the next post we will see the various requirements and current limitations, and how the O-RAN architecture — especially the non-RT RIC — optimizes the 5G network.

Aarna


With the help of several ONAP community members, such as Aniello and Krishna, we have been able to successfully create a 5G Slice on the ONAP Guilin+ release. You can follow the steps below or join our End-to-End 5G Network Slicing technical meetup on Monday, March 29 at 7 AM PDT.

Below are the wiki links that we followed for creating the setup.

https://wiki.onap.org/display/DW/Template+Design+for+Option2

https://wiki.onap.org/display/DW/Setup+related+issues

https://wiki.onap.org/display/DW/Install+Minimum+Scope+for+Option+1

https://wiki.onap.org/display/DW/External+Core+NSSMF+Simulator+Use+Guide

https://wiki.onap.org/display/DW/External+RAN+NSSMF

https://wiki.onap.org/pages/viewpage.action?pageId=92996521

https://wiki.onap.org/display/DW/Manual+Configuration+for+5G+Network+Slicing

In addition, a few things to note:

- The above documentation does not work as-is on the Guilin branch, nor on master. Besides the image changes described in the documentation, we had to make the following changes.

    a. We used the master branch from 19th March 2021

    b. Downgraded the following images:

           nexus3.onap.org:10001/onap/optf-osdf:3.0.2

           nexus3.onap.org:10001/onap/optf-has:2.1.2

- The above documentation mentions this but is not very clear: we had to add all the ARs to AAI, otherwise distribution will fail.

curl --user AAI:AAI -X PUT -H "X-FromAppId:AAI" -H "X-TransactionId:get_aai_subscr" -H "Accept:application/json" -H "Content-Type:application/json" -k -d '{ "model-invariant-id": "bb9c30d4-552b-4231-a172-c24967e8ee24", "model-type": "resource", "model-vers": { "model-ver": [ { "model-version-id": "f10a33da-114e-41b6-89b2-851c31a1e0dc", "model-name": "Slice_AR", "model-version": "1.0" } ] } }' "https://10.43.189.167:8443/aai/v21/service-design-and-creation/models/model/bb9c30d4-552b-4231-a172-c24967e8ee24"

curl --user AAI:AAI -X PUT -H "X-FromAppId:AAI" -H  "X-TransactionId:get_aai_subscr" -H "Accept:application/json" -H "Content-Type:application/json" -k -d '{ "model-invariant-id": "43515a2b-5aa7-4544-9c40-f2ce693b99bd", "model-type": "service", "model-vers": { "model-ver": [ { "model-version-id": "81d13710-811f-47f7-9871-affc666ba11a", "model-name": "EmbbNst_O2", "model-version": "1.0" } ] } }' "https://10.43.189.167:8443/aai/v21/service-design-and-creation/models/model/43515a2b-5aa7-4544-9c40-f2ce693b99bd" | python -m json.tool

curl --user AAI:AAI -X PUT -H "X-FromAppId:AAI" -H  "X-TransactionId:get_aai_subscr" -H "Accept:application/json" -H "Content-Type:application/json" -k -d '{ "model-invariant-id": "7a1ffb3c-7bd0-415c-b3d8-cf6170de805e", "model-type": "resource", "model-vers": { "model-ver": [ { "model-version-id": "e551c7cd-9270-4d15-b1f8-9634e0ab0ffe", "model-name": "EmbbAn_NF_AR", "model-version": "1.0" } ] } }' "https://10.43.189.167:8443/aai/v21/service-design-and-creation/models/model/7a1ffb3c-7bd0-415c-b3d8-cf6170de805e"   | python -m json.tool

curl --user AAI:AAI -X PUT -H "X-FromAppId:AAI" -H  "X-TransactionId:get_aai_subscr" -H "Accept:application/json" -H "Content-Type:application/json" -k -d '{ "model-invariant-id": "29620dfe-cffc-4959-93cc-314279d17f96", "model-type": "resource", "model-vers": { "model-ver": [ { "model-version-id": "529f5a61-7a8c-4b8b-b591-0bba72f9fd2e", "model-name": "Tn_BH_AR", "model-version": "1.0" } ] } }' "https://10.43.189.167:8443/aai/v21/service-design-and-creation/models/model/29620dfe-cffc-4959-93cc-314279d17f96"   | python -m json.tool

- We needed to follow a specific order to push policies, here is how we did it.

python3 policy_utils.py create_policy_types policy_types

python3 policy_utils.py generate_nsi_policies EmbbNst_O2

python3 policy_utils.py create_and_push_policies gen_nsi_policies

python3 policy_utils.py generate_nssi_policies EmbbAn_NF minimize latency

python3 policy_utils.py create_and_push_policies gen_nssi_policies

python3 policy_utils.py generate_nssi_policies Tn_ONAP_internal_BH  minimize latency

python3 policy_utils.py create_and_push_policies gen_nssi_policies

python3 policy_utils.py generate_nssi_policies EmbbCn_External minimize latency

python3 policy_utils.py create_and_push_policies gen_nssi_policies
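Because the order above matters (policy types first, then NSI policies, then a generate/push pair per NSSI service), a small driver script can make it repeatable. This sketch only builds the ordered command list; to actually run it against a live ONAP setup you would pass each entry to `subprocess.run()`.

```python
# Build the policy-push commands in the exact order that worked for us:
# 1) create policy types, 2) NSI policies, 3) one generate/push pair per NSSI.
nssi_services = ["EmbbAn_NF", "Tn_ONAP_internal_BH", "EmbbCn_External"]

commands = [
    ["python3", "policy_utils.py", "create_policy_types", "policy_types"],
    ["python3", "policy_utils.py", "generate_nsi_policies", "EmbbNst_O2"],
    ["python3", "policy_utils.py", "create_and_push_policies", "gen_nsi_policies"],
]
for svc in nssi_services:
    commands.append(["python3", "policy_utils.py", "generate_nssi_policies", svc, "minimize", "latency"])
    commands.append(["python3", "policy_utils.py", "create_and_push_policies", "gen_nssi_policies"])

print(len(commands))  # 9
```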

- For core simulator, the vendor name should match the one passed in the SDC templates, here is what worked for us.

"esr-system-info": [
   {
     "esr-system-info-id": "nssmf-an-01",
     "type": "an",
     "vendor": "huawei",
     "user-name": "admin",
     "password": "123456",
     "system-type": "thirdparty-sdnc",
     "ssl-cacert": "test.ca",
     "ip-address": "192.168.122.198",
     "port": "8443",
     "resource-version": "1616417615150"
   }
 ]
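Since a vendor mismatch between the simulator registration and the SDC templates fails silently, a quick sanity check helps. This is our own hypothetical helper, not part of ONAP.

```python
# Sanity check: the vendor in esr-system-info must match the vendor name
# passed in the SDC templates, or NSSMF selection will not work.
esr_system_info = {
    "esr-system-info-id": "nssmf-an-01",
    "type": "an",
    "vendor": "huawei",
}
sdc_template_vendor = "huawei"  # the vendor used when designing the templates

assert esr_system_info["vendor"] == sdc_template_vendor, "vendor mismatch with SDC template"
print("vendor ok")
```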

Please join the technical meetup mentioned above or contact us to get tips on how to replicate this setup in your lab.

Aarna

Fully Automated 5G Core Services Management and Orchestration: A Joint Amantya & Aarna Solution.

Unlike 4G, which mainly focused on the Telco space, 5G has a broader focus across numerous industry vertical use cases. The figure below shows the primary capabilities of 5G and the use cases around reliability, capacity, latency, and connectivity.

Figure 1: Benefits of 5G Core Orchestration (courtesy Amantya Technologies)

Another change with 5G is that we are moving to a completely software-driven infrastructure. This means that there is now a need for more nuanced control of the network infrastructure centered around automation. This is where intent-based orchestration of 5G Core with closed-loop automation comes into play.

Amantya and Aarna Joint Solution:

In our technical meetup last week, Amantya Technologies and Aarna.ml presented a joint 5G Core solution. The solution has two key components:

  • Amantya’s Cloud-Native 5G Core
  • The Aarna.ml Multi-Cluster Orchestration Platform (AMCOP)

In this demo, AMCOP is installed on one Kubernetes cloud, while the Amantya 5GC runs on a separate Kubernetes cloud in a private cloud. AMCOP provides an easy-to-use, intuitive, end-to-end 5G service creation and management environment that abstracts the inherent complexity of the underlying network behind a layer of GUI-based workflows and automation interfaces.

Figure 2: Aarna's and Amantya's Joint 5G Core Orchestration Solution

Amantya’s solution is an integrated 5G and 4G core. Its implementation is completely cloud-native, which makes deploying this core on public and private clouds very simple.

Key Features of Amantya’s 5G Integrated Core:

  • Multi-tech Integrated Core
  • Standard Deployment Models – ENDC + SA
  • Release 15 Compliant
  • Cloud-Native – COTS HW
  • High-Performance User Plane
  • Full Slicing Support
  • Node Discovery
  • IPv4 / IPv6 support

A high-level block diagram of Amantya's Integrated 5G Core is given below:

Figure 3: A high-level block diagram of Amantya's Integrated 5G Core

Key Features of AMCOP 2.0:

1) Integrates OpenNESS-EMCO and ONAP-CDS with a unified run-time GUI

2) Early-access Aarna Analytics Platform (AAP) functionality

3) Support for GKE, AKS, EKS, OpenShift, Anuket, and open-source K8s NFVI

4) AMCOP is now available on the Azure marketplace for easy deployment onto Azure

5) AMCOP also includes ONAP Guilin network slicing

A high-level block diagram of AMCOP 2.0 is shown below:

Figure 4: A high-level block diagram of AMCOP 2.0

See a recording of the technical meetup that covered this topic in more depth, along with a hands-on demo. If you would like to replicate any of this work in your environment, please contact us.

Aarna


Public cloud vendors will play a big role in the upcoming 5G + edge computing space. Customers will either use the public cloud directly for services that are not latency, bandwidth, or location sensitive, or use edge computing offerings from these same public cloud providers. The Aarna.ml Multi Cluster Orchestration Platform (AMCOP) can be used to orchestrate both cloud native network functions (CNF) and cloud native applications (CNA) on public clouds or edge cloud offerings from public cloud vendors.

As we know, there are different types of clouds:

• Public Clouds (such as Microsoft Azure AKS, GCP/GKE, AWS EKS)
• Edge Cloud Offerings from Public Cloud Vendors (such as Microsoft Azure Edge Zones, Google Anthos, AWS Outposts/Wavelength)
• Private Clouds (such as Kubernetes or OpenStack on Bare Metal Servers)
• Hybrid Clouds (such as IBM Hybrid Clouds)

Out of these, we’ll only focus on the first two categories.

What is CNF/CNA Orchestration?

CNFs are networking elements packaged as a set of cloud native microservices. CNAs are arbitrary applications (in our case, edge computing applications) packaged as a set of cloud native microservices. These CNFs/CNAs need a “package” that describes how to orchestrate the application.

Before we orchestrate these CNFs/CNAs to edge clouds, AMCOP first needs to register each of the Kubernetes clusters. Next, each CNF/CNA package needs to be “onboarded”. Once this is complete, a user can construct network services (in the case of CNFs) or composite applications (in the case of CNAs) and orchestrate them to the right Kubernetes cluster based on placement intent. These clusters could be private, public, or even hybrid, and can run in the same or different locations.
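Placement-intent matching can be sketched very simply: pick the registered clusters whose labels satisfy the intent. This is a hypothetical simplification; AMCOP/EMCO's real intent model is considerably richer.

```python
# Registered clusters and their labels (illustrative names and labels).
clusters = {
    "edge-us-west": {"region": "us-west", "public": False},
    "eks-prod":     {"region": "us-east", "public": True},
}

def place(intent, clusters):
    """Return the clusters whose labels satisfy every key of the intent."""
    return [name for name, labels in clusters.items()
            if all(labels.get(k) == v for k, v in intent.items())]

# Intent: deploy the CNF only on private clusters in us-west.
print(place({"region": "us-west", "public": False}, clusters))  # ['edge-us-west']
```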

The way to interact with an orchestrator is through REST APIs, a CLI, or a web interface.

Figure 1: Block diagram of AMCOP

What Does a CNF Orchestration Setup Look Like?

Figure 2: High-Level Overview of a CNF Orchestration Setup

We have two independent Kubernetes (K8s) clusters: one cluster is used for deploying AMCOP, and on the right-hand side we have another set of clusters for CNFs (also called edge/target clusters). For simplicity the diagram only shows one target cluster. Here are some common challenges faced on public clouds:

• CNFs require multiple network interfaces for operation; most public K8s clouds do not offer this, nor do they offer a way to install additional plugins such as Multus
• We have to make sure that the clusters are not publicly accessible
• Orchestration and the edge cluster should be on the same subnet

The table below shows what interoperability testing has been done between AMCOP and public clouds as of the date of writing this blog (Feb 2021).

Figure 3: Current state of compatibility of AMCOP with various public clouds

Aarna

Three Ways to Manage Edge Computing Applications With Ease using AMCOP

5G and edge computing are expected to be large new market opportunities; ABI Research predicts a $1.5T market size by 2030. However, the advent of 5G and edge computing has led to exponential stress on application management. Gone are the days when every network component was a piece of hardware with fixed functionality; in 5G and edge computing, everything is a piece of software. Looking at this problem quantitatively:

Stress on application management = Number of edge/core sites x Application instances x Application changes per unit of time

In 5G and edge computing, there are 100,000s of edge sites, 10,000s of application instances (created by a combination of a large number of applications and network slicing, which causes multiple instances of an application to be created), and 10s of application changes per hour. So the stress on application management is a million times greater than anything we do today.
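Plugging the rough orders of magnitude above into the formula makes the scale concrete; the specific numbers below are just the low ends of the ranges quoted in the text.

```python
# Stress = edge sites x application instances x changes per unit time.
edge_sites   = 100_000   # "100,000s of edge sites"
instances    = 10_000    # "10,000s of application instances"
changes_hour = 10        # "10s of application changes per hour"

stress = edge_sites * instances * changes_hour
print(stress)  # 10000000000, i.e. on the order of 10 billion operations/hour
```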

Let’s shift gears a little bit and explore an analogy before continuing. The popular pets vs. cattle analogy was earlier applied to infrastructure, where each server was treated as a pet: admins would upgrade and maintain servers individually before onboarding applications onto them. With cloud computing, a group of servers is now treated as a unit. However, applications are still treated as pets, and because of the huge stress on application management, the pets approach will not work. We need to adopt a cattle methodology to simplify application management. See our prior blog on the pets vs. cattle analogy. With this new approach, the impact on initial orchestration will be:

• Register K8s clouds with the orchestrator (manual/automatic)
• Onboard a Helm chart for each application
• Orchestrate onto 1 to N clouds, with multiple instances, with the click of a button

For ongoing lifecycle management, too, the cattle methodology works best, as it is not feasible to use millions of application management endpoints (GUI or API). LCM is handled best by the cattle methodology as follows:

• For app-independent LCM actions, use a unified endpoint for all app instances
• For category-dependent LCM actions (e.g., O-RAN, 5G Core, SD-WAN, firewall), use a unified dashboard for that application category that manages all instances from any vendor
• For app-dependent LCM actions (e.g., AR/VR, drone control), use management endpoints retrofitted to connect to multiple instances of that application

For service assurance, as well, the cattle methodology is helpful. With the pets methodology you would have to log into many management endpoints to troubleshoot and raise tickets, whereas in the cattle methodology, application (and optionally infra) telemetry is sent to a closed-loop automation system (big data or AI/ML) that takes corrective actions automatically.

With the recently announced 2.0 version of the Aarna.ml Multi Cluster Orchestration Platform (AMCOP), we are solving all three aspects of network service and application management:

• Initial Orchestration
• Ongoing Lifecycle Management (LCM)
• Service Assurance, or Real-Time Policy-Driven Closed-Loop Automation

AMCOP 2.0 has three new capabilities:

1. It fully integrates the Intel OpenNESS EMCO (our orchestration engine) with the ONAP CDS project for full day 1 & 2 configuration and lifecycle management of cloud native network functions (CNF) and cloud native applications (CNA).
2. There is an early-access version of the Aarna Analytics Platform, based on Google’s CDAP project, that can be used for real-time policy-driven closed-loop automation. The analytics platform will also be the foundation of additional technologies coming from us, such as the Non-Real-Time RIC (NONRTRIC) for O-RAN and the Network Data Analytics Function (NWDAF).
3. AMCOP 2.0 has full support for end-to-end 5G network slicing.

A high-level block diagram of AMCOP 2.0 is shown below:

AMCOP 2.0 is available for a free trial. Give it a shot. You can onboard a free 5GC and orchestrate it onto a Kubernetes cloud.

Also, don’t forget to join our “Cloud Native Application (CNA) Orchestration on Multiple Kubernetes Edge Clouds” meetup on Monday, Feb 22 at 7 AM PT. In this hands-on technical meetup, we will show you how to onboard and orchestrate edge computing applications on multiple K8s edge clouds.