"
Aarna.ml

Resources

resources

Blog

Amar Kapadia


Nephio is a new open source project seeded by Google and hosted at the Linux Foundation that is getting substantial attention in the industry. I attended the first Nephio Developer Summit last week in Sunnyvale, June 22-23, and wanted to share my key takeaways. Aarna is a member of the Nephio project, and Sandeep Sharma from our team holds a Technical Steering Committee (TSC) seat, so it is no surprise that we are big fans of the project. Here are my observations of Nephio along with the pros and cons as I see them.

Scope

The stated goal of Nephio is to “simplify the deployment and management of multi-vendor cloud infrastructure and network functions across large scale edge deployments.” This is a clear and self-explanatory definition. At the meeting, Google stressed that Domain Orchestration, as opposed to Service Orchestration, is the focus of the project. Of course, the lines blur. Is a 5G service consisting of UPF+AMF+SMF a domain or a service? I think from Nephio’s point of view, this would be considered a domain. In other words, Nephio can deploy and manage a 5G service with a variety of NFs. So, that leaves very little (if anything) for the “Service Orchestration” layer to do.

What I found fascinating about Nephio is that it considers cloud infrastructure within its scope as well. Other projects, such as the Linux Foundation Networking ONAP project, have only worked on the service/NFVO/VNFM layers. I think considering both infra+NFs together is a huge plus for the 5G + MEC (multi-access edge computing) era. We at Aarna are seeing evidence of this trend from groups such as the O-RAN Alliance, where FOCOM (Federated O-Cloud Orchestration and Management), NFO (Network Function Orchestration), and NF (Network Function) Configuration Management, Performance Management, and Fault Management are all within the scope of the O-RAN Service Management and Orchestration (SMO) entity.

Nephio Technical Overview

Very simply put, Nephio uses Kubernetes (K8s) automation for cloud infrastructure and NFs. I had not appreciated this point before, but Kubernetes is general purpose. It happens to be best known for container orchestration, but it is not limited to that use case. With that understanding, we can see that Nephio is applying Kubernetes to a new use case.

Needless to say, Kubernetes comes with tremendous benefits. It is mature. It is declarative and intent driven (an intent driven system monitors the end state and continuously reconciles it with the intended state). Kubernetes can be extended through mechanisms such as CRDs (Custom Resource Definitions) and Operators. Custom Resources are extensions of the Kubernetes API that can declaratively express user intent for a particular domain. Operators or Custom Controllers (apologies if they are not exact synonyms, I am using them as such) listen to the APIs and perform actions to fulfill the declarative intent. Ultimately, declarative intent has to be converted into imperative actions. That is the job of the Custom Controller.
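To make the declarative-to-imperative point concrete, here is a toy Go sketch of a reconcile loop (no real Kubernetes API involved): it observes actual state, diffs it against the declared intent, and issues imperative actions until the two match. The types and fields are purely illustrative, not Nephio code.

```go
package main

import (
	"fmt"
	"time"
)

// Desired captures declarative intent, analogous to a Custom Resource spec.
type Desired struct{ Replicas int }

// Observed is the actual state the controller measures in the cluster.
type Observed struct{ Replicas int }

// reconcile compares intent with reality and issues imperative actions to
// close the gap -- the essence of a Kubernetes Custom Controller.
func reconcile(want Desired, got *Observed) {
	switch {
	case got.Replicas < want.Replicas:
		fmt.Println("scale up by", want.Replicas-got.Replicas)
		got.Replicas++ // imperative action
	case got.Replicas > want.Replicas:
		fmt.Println("scale down by", got.Replicas-want.Replicas)
		got.Replicas--
	default:
		fmt.Println("in sync with intent")
	}
}

func main() {
	want := Desired{Replicas: 3}
	got := &Observed{Replicas: 1}
	for i := 0; i < 4; i++ { // control loop: observe, diff, act, repeat
		reconcile(want, got)
		time.Sleep(10 * time.Millisecond)
	}
}
```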


So is that it? Then why do we need Nephio? Clearly there’s more…


Distributed State

Nephio introduces the concept of a centralized Nephio K8s cluster with platform controllers that reconcile the high level user intents expressed in KRM (Kubernetes Resource Model) files. The Nephio cluster runs the user intent through a series of Custom Controllers to produce state that can be consumed by the edge cluster(s).

The state is transmitted to the edge clusters through a “pull” mechanism based on an open source project called Config Sync (which may eventually be replaced by alternatives such as Argo CD or Flux v2). A pull mechanism is significantly more scalable than a push approach. It also moves the burden of maintaining the state to the edge cluster as opposed to the Nephio cluster, which again is much more scalable.
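As a toy illustration of the pull pattern (a stand-in for what Config Sync or Flux actually do, not their implementation), here is a Go sketch of an edge agent that periodically pulls desired state from a Git repo and applies it locally; the repo path and manifest directory are assumptions.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

const repoDir = "/var/lib/sync/repo" // assumed local clone of the config repo

// syncOnce pulls the latest rendered configuration and applies it to the
// local cluster. kubectl apply is idempotent, so repeated runs simply
// reconcile toward whatever state is currently in Git.
func syncOnce() error {
	if out, err := exec.Command("git", "-C", repoDir, "pull", "--ff-only").CombinedOutput(); err != nil {
		return fmt.Errorf("git pull: %v: %s", err, out)
	}
	if out, err := exec.Command("kubectl", "apply", "-f", repoDir+"/manifests/").CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}

func main() {
	for {
		if err := syncOnce(); err != nil {
			log.Println(err) // keep retrying; the edge owns its own state
		}
		time.Sleep(30 * time.Second) // poll interval; real agents also watch
	}
}
```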

The edge clusters in turn use the input provided by the Nephio cluster for their own Operators and K8s cluster configuration, which may include edge cluster infrastructure and NF automation.

GitOps

That’s not all. Nephio also bakes GitOps into the project. The user provides KRM files in a kpt package that is checked into a Git repo. kpt uses the principle of configuration as data (APIs) rather than configuration as code (templates or Domain Specific Languages). The Custom Controllers on the Nephio cluster successively refine the kpt package in the Git repo. Finally, the edge cluster pulls the state from the Git repo and applies it to the local K8s cluster. This architecture is both pragmatic and clever. It’s like infusing fluoride into water: the user gets the benefit of GitOps without explicitly knowing or worrying about it.
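To illustrate the configuration-as-data idea, here is a minimal Go sketch of the KRM function contract that kpt uses: a function reads a ResourceList on stdin, transforms the items as plain data, and writes the refined list to stdout. The label it injects is purely illustrative; real Nephio functions perform domain-specific specialization.

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Reads a kpt ResourceList from stdin, adds an (illustrative) label to every
// item, and writes the refined ResourceList to stdout -- configuration
// treated as data, not as a template.
func main() {
	var rl map[string]interface{}
	if err := yaml.NewDecoder(os.Stdin).Decode(&rl); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	items, _ := rl["items"].([]interface{})
	for _, it := range items {
		obj, ok := it.(map[string]interface{})
		if !ok {
			continue
		}
		meta, ok := obj["metadata"].(map[string]interface{})
		if !ok {
			meta = map[string]interface{}{}
			obj["metadata"] = meta
		}
		labels, ok := meta["labels"].(map[string]interface{})
		if !ok {
			labels = map[string]interface{}{}
			meta["labels"] = labels
		}
		labels["example.com/env"] = "edge" // the "refinement" step (hypothetical)
	}
	if err := yaml.NewEncoder(os.Stdout).Encode(rl); err != nil {
		fmt.Fprintln(os.Stderr, "encode:", err)
		os.Exit(1)
	}
}
```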

Pros

Nephio has a number of key benefits.

  • Simplicity: Like Kubernetes did for cloud infrastructure, I think Nephio will disrupt open source networking and greatly simplify network service delivery.
  • Google backing: Google is not only behind the open source project, they also seem to be committed to Nephio-based cloud service(s). This is ideal backing for an ambitious open source project.
  • Common across Infra, Platform, Workloads: The same descriptors and project can be applied for setting up the infrastructure (on-prem or cloud), the CaaS platform (K8s with different plugins and software components such as Multus, SR-IOV, DPU, Istio, Prometheus etc.), and workloads (NFs and MEC applications).
  • GitOps Built-in: Users don’t have to bolt on a DevOps framework on top of Nephio. It’s inbuilt. I love this feature.
  • Distributed: Nephio is inherently built for a world with a large number of edge clouds. This again distinguishes it from prior projects where a centralized entity can struggle to scale.
  • Data-model first: By defining CRDs first, the data model is essentially agreed upon even before the first line of code is written. This is the right way to do things. Other projects either approach the data model in parallel with writing code or, often, as an afterthought.
  • Community excitement: If the attendance at the event is any indication, the community is truly energized by Nephio. It also includes several active end users. This is a positive sign.

Cons

In my opinion, Nephio comes with some architectural assumptions that might slow down its adoption.

  • Developer effort: Nephio is just a framework. Without CRDs and Custom Controllers, Nephio doesn’t actually do anything. This means that the developer burden, as compared to prior solutions or open source projects, is definitely higher. In addition, the ops personnel at telcos will need to be comfortable with KRM files and kpt packages, which requires sophistication. Of course, there could be a GUI to front-end and simplify this mechanism.
  • Moving the burden to NF vendors: Philosophically, Nephio moves the control to NF vendors (aka the sVNFM model). In the past, systems such as ONAP SO+CDS+SDN-C had tried to wrest control away from NF vendors and seek common approaches via a gVNFM. I don’t think the Nephio approach is either good or bad. After all, the NF vendor is the expert on how to manipulate their NF. Why not give the control to them? But this does mean waiting for NF vendors to create Operators.
  • KRM vs. Helm: With one exception, every NF vendor I have talked with packages CNFs (cloud native network functions) as Helm Charts. It seems Nephio doesn’t hold Helm Charts in a positive light at this time, since Helm mixes declarative and imperative approaches. However, this position might slow Nephio adoption.

Conclusion

I am a Nephio believer. Having seen prior approaches, I believe that software simplicity is the number one factor that determines success, and Nephio fully embodies simplicity. I think Nephio will have a big impact on 5G in general and on O-RAN and MEC specifically (Nephio has the O-RAN O2 interface as one of its stated use cases). We at Aarna are onboard. We will announce our Nephio strategy later in Q3’22 and will publish blogs and videos on the Nephio architecture. Want to learn more? Check out Aarna’s Nephio Executive Brief. Feel free to reach out to us if you have any Nephio needs or questions.


Aarna

Join Aarna.ml at the LF Networking DTF Next Week In Porto!


The Linux Foundation Networking Developer & Testing Forum is being held June 13-16, 2022, in Porto, Portugal. At this event, the various LFN project technical communities will present their project architecture, direction, and integration points, and will explore future possibilities across the open source networking stack. This is the primary technical event for the LFN project communities, where community members converge via sessions, workshops, and tutorials. You can register for the event here and explore the schedule here.

Aarna is excited to be participating in 10 sessions! Please join us live in Porto, online via the Zoom Bridge, or post-event in the event recordings. Send any questions to info@aarna.ml

1. Plenary: The LFX Dashboard: Tool Suite and Community Review

Mon Jun 13 2022

Speakers - Henry Quaye, Linux Foundation; Brandon Wick, Aarna.ml

Description - The LFX tool suite has been designed to support the various project communities of the Linux Foundation. Through this demo, learn how to set up your individual dashboard and how to view, parse, and analyze your project community metrics. A brief overview of the tool will be given, with emphasis on how to update and leverage LFX. Explore the LFX Tool

Recorded Session.

2. Plenary: Marketing for LFN Projects

Mon Jun 13 2022

Speakers - Heather Kirksey, Linux Foundation; Bob Monkman, Intel; Brandon Wick, Aarna.ml.

Description - A brief overview and tutorial for how to market LFN Projects

  • Internal Marketing – how do we convince our bosses
  • Marketing for projects, including operational aspects – easy to use, easy to discover
  • How to make projects more friendly to developers – what needs to be in place (ties to tooling)
  • How to make projects more consumable to end users – what needs to be in place (ties in to documentation)
  • How to get the word out

3. ONAP: An O-RAN SMO Use Case with Netconf Notifications

Tue Jun 14 2022

Speakers - Sriram Rupanagunta, Aarna.ml; Bhanu Chandra, Aarna.ml

Description -

Part 1: Demonstration of O-RAN SMO use case built with ONAP.

Part 2: Session covering extending Netconf notification support for ONAP SDN-C/SDN-R.

Part 1: Topic Overview:

  1. Aarna.ml built an O-RAN SMO using open source components from ONAP projects SDNR, DCAE, etc.
  2. CapGemini offers O-RAN compliant CU/DU that follows the O-RAN WG1 O1 spec and supports various features like Provisioning Management, Fault Management, File management, and more.
  3. Aarna and CapGemini are working together on a private 5G O-RAN deployment.
  4. Aarna and CapGemini are doing interoperability testing between the SMO and CU/DU.

We will show the following Demo:

  1. Bring Up the CU/DU and SMO
  2. Connect CU/DU to SMO (manually/plug-and-play)
  3. Configuration Management (config push from SMO to CU/DU)
  4. Show Fault Management

Part 2: Topic Overview:

  1. Support for netconf notifications is limited in the ONAP SDN-C/SDN-R.
  2. This presentation shows how to extend the support based on your need.
  3. Code walkthrough of adding a new netconf notification and the corresponding netconf-notification-to-VES conversion

Why do we need this? As per the O-RAN specs, the SMO has to handle various netconf notifications on the O1 side, such as fileready, software activate, inprogress, etc.

We will show the following Demo:

  1. Generate a netconf notification on the simulator
  2. Receive the netconf notification on SDNR (via the karaf log)
  3. VES collector logs showing the netconf notification converted to a VES event
  4. Show the event in the DB.

Recorded Sessions -

Topic 1 and Topic 2

4. EMCO: Backup and Restore

Tue Jun 14 2022

Speaker - Sriram Rupanagunta, Aarna.ml

Description - This presentation will showcase how an EMCO deployment can be configured for disaster recovery. We use the Velero tool to take a backup of an active EMCO deployment and store it in the cloud. We simulate disruption by destroying the cluster namespace. Next, we use the recovery function in Velero and restore the entire deployment to its original state.
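For concreteness, here is a rough Go sketch (driving the CLIs) of the backup/destroy/restore sequence the session describes. The namespace and backup names are illustrative assumptions, not taken from the session, and it assumes the velero CLI is installed and configured with a backup storage location.

```go
package main

import (
	"log"
	"os/exec"
)

// run executes a CLI command and aborts the demo sequence on failure.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	log.Printf("%s", out)
}

func main() {
	// 1. Back up the live EMCO deployment (assumed namespace "emco") to the
	//    configured object store.
	run("velero", "backup", "create", "emco-backup", "--include-namespaces", "emco", "--wait")
	// 2. Simulate a disaster by destroying the namespace.
	run("kubectl", "delete", "namespace", "emco")
	// 3. Restore the entire deployment from the backup.
	run("velero", "restore", "create", "--from-backup", "emco-backup", "--wait")
}
```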

Recorded Session.

5. EMCO: Deploying on a ROSA Cluster

Tue Jun 14 2022

Speaker - Sriram Rupanagunta, Aarna.ml

Description -

In this presentation, we will cover the following.

  1. A brief description of the ROSA platform and how it simplifies deployment of complex infrastructure on a Kubernetes cluster.
  2. We will show how EMCO deployed on ROSA is able to orchestrate the Free5GC core on a target cluster.
  3. A description of how Free5GC deployed on a K8s cluster works.

Recorded Session.

6. EMCO: Open Policy Agent Service Assurance at the Telecom Edge

Tue Jun 14 2022

Speaker - Sriram Rupanagunta, Aarna.ml

Description - Policy-driven closed-loop automation is vital for enabling AI and machine learning in 5G and edge systems. Open Policy Agent (OPA) is a simple, cloud-native, domain-agnostic policy engine. OPA has a small memory footprint and better performance compared to traditional policy engines, and its policy language, Rego, is an intuitive and natural declarative policy language. In this talk, we will look at the use of OPA in a service assurance use case, specifically at the telecom edge.

We propose a general-purpose policy evaluation framework for EMCO that can monitor application-specific activities on the edge clusters and trigger actions based on policy evaluation. We are working on integrating EMCO’s Temporal workflow manager as the Policy Enforcement Point (PEP). The Temporal workflow engine allows users to define, deploy, and track custom workflows, and EMCO’s Workflow Manager provides interfaces for defining workflow intents and managing the life cycle of workflows. The Workflow Manager combined with the proposed OPA-based policy controller gives a mechanism for defining policy-based service assurance applications in EMCO. With standardized workflows, composite applications can be assigned ‘policy intents’ for policy-driven life cycle management.
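To make the OPA part concrete, here is a minimal sketch of embedding OPA’s Go API to evaluate a Rego rule of the kind a service-assurance policy might use. The package name, rule, threshold, and input shape are illustrative assumptions, not the policies from the talk.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/open-policy-agent/opa/rego"
)

// A hypothetical service-assurance rule (v0 Rego syntax): scale out when the
// reported CPU utilization exceeds 80%.
const module = `
package assurance

default scale_out = false

scale_out {
	input.metrics.cpu > 80
}
`

func main() {
	ctx := context.Background()
	query, err := rego.New(
		rego.Query("data.assurance.scale_out"),
		rego.Module("assurance.rego", module),
	).PrepareForEval(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// Evaluate the policy against a measurement from an edge cluster.
	input := map[string]interface{}{"metrics": map[string]interface{}{"cpu": 93}}
	rs, err := query.Eval(ctx, rego.EvalInput(input))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("scale_out =", rs[0].Expressions[0].Value) // true -> trigger the workflow
}
```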

Recorded Session.

7. ONAP: PCEI Edge to Cloud connectivity and application deployment

Wed Jun 15 2022

Speakers - Sriram Rupanagunta, Aarna.ml; Vivekanandan Muthukrishnan, Aarna.ml; Oleg Berzin, Equinix

Description -

Complex Multi-domain orchestration across Edge and Public Clouds, using ONAP CDS and Terraform plans

The purpose of the Public Cloud Edge Interface (PCEI) Blueprint is to develop a set of open APIs, orchestration functionalities, and edge capabilities for enabling multi-domain interworking across the operator network edge, the public cloud core and edge, and the 3rd-party edge, as well as the underlying infrastructure such as data centers, compute hardware, and networks.

In this presentation/demo, we will show how the ONAP CDS module can be used (along with Terraform plans) to provision infrastructure on bare metal (Equinix Metal cloud), install K8s on the bare metal, and configure the Azure cloud (Express Route, peering, VNET, VM, IoT Hub). We will then interconnect the edge cloud with the public cloud (Equinix Fabric) and deploy the edge application (PCE), which includes dynamic K8s cluster registration to EMCO, dynamic onboarding of app Helm Charts to EMCO, and finally the design, instantiation, and end-to-end operation of a composite cloud native app deployment.

Recorded Session.

8. EMCO: Enhancing the EMCO GUI with RBAC

Wed Jun 15 2022

Speaker - Sriram Rupanagunta, Aarna.ml

Description - We will show the implementation of Role Based Access Control in the EMCO GUI. Currently it supports ADMIN and TENANT roles. Once the user logs in, different views are presented based on the role. The Admin is like a super user with all privileges; the Admin can add users (with the tenant role) and assign them to a project.

When a tenant user logs in, the user is redirected to the project view to which the tenant belongs. Currently, one tenant can be the owner of only one user. RBAC in the UI is translated to EMCO with the help of logical clouds: when a tenant or admin creates a logical cloud, we pass the user’s email ID as the user for that particular logical cloud.

Recorded Session.

9. EMCO: Orchestration and Demo of LCM of AnyLog using EMCO

Wed Jun 15 2022

Speakers - Sriram Rupanagunta, Aarna.ml; Raghuram Gopalshetty, Aarna.ml; Ori Shadmon, AnyLog

Description - As of today, there are no data services at the edge similar to what the cloud is able to offer. The integration of the AnyLog Network with EMCO delivers a "cloud-like" solution at the edge and provides a new, unique option for the industry. Using AnyLog, developers are able to connect cloud and edge applications with the data at the edge without intermediaries (the public clouds). It is an opportunity to replace the cloud providers in servicing the data, provide real-time insight into edge data, and lower the cost.

In this demo, we’ll show that using EMCO, developers can deploy and manage AnyLog instances at the edge from a single point and using the AnyLog Network, manage and view the distributed edge data from a single point.

EMCO Demo:

  1. Onboard all 4 target clusters to EMCO.
  2. Onboard AnyLog Master, Operator and Query helm charts to EMCO.
  3. Orchestrate AnyLog Master to Cluster-1.
  4. Orchestrate AnyLog Operator to Cluster-2 and Cluster-3 (NEW_CLUSTER: "demo-cluster2").
  5. Orchestrate AnyLog Query to Cluster-4.
  6. Orchestrate Grafana to Cluster-4.

AnyLog Demo:

  1. AnyLog - Explain/Show the setup with multiple (2) nodes hosting data
  2. AnyLog - Explain/Show that data processing at the edge is automated (from schema creation to HA).
  3. Explain the current approach - to get a unified view today, data is moved to the cloud
  4. AnyLog - Explain/Show a unified view of all the edge data (from the 2 nodes).
  5. Aarna - Explain/Show deploying a new storage node (I think we need to automate the simulator to push data when the node is up and running)
  6. AnyLog - Explain/Show a unified view of all the edge data (now from the 3 edge nodes).

Recorded Session.

10. EMCO: Orchestration and Service Assurance of 5GC Functions with EMCO

Wed Jun 15 2022

Speakers - Sriram Rupanagunta, Aarna.ml; Ulysses Lu, Quanta; Sandeep Sharma, Aarna.ml

Description - We show the orchestration of QCT's 5GC functions on multiple k8s clusters, using EMCO. We configure Prometheus to scrape for events (CPU utilization) from the target clusters, and create a closed loop using ONAP CDS as the actor that can scale out one of the 5GC network functions. We use a sample NWDAF and an application function (AF) to show this functionality.

Recorded Session.

Sandeep Sharma

What’s New in EMCO 22.03?

The Edge Multi-Cluster Orchestrator (EMCO) open source project, part of the Linux Foundation Networking umbrella, is a software framework for intent-based deployment of cloud-native applications to a set of Kubernetes clusters, spanning enterprise data centers, multiple cloud service providers, and numerous edge locations. It can be leveraged for Private 5G, O-RAN, and multi-access edge computing (MEC) applications. EMCO has significant industry momentum from companies like Intel, Equinix, Nokia, and Aarna.ml.

A major benefit of EMCO is its extensibility via controllers that perform specific operations. Multiple controllers can be onboarded based on different use cases. Here’s a sample (a small API sketch follows the list):

  • Cluster Manager - Registers clusters on behalf of cluster owners, enabling users to onboard target Kubernetes clusters to the platform.
  • Network Manager - If secondary interfaces are required for orchestrating services and applications through EMCO, this controller creates and manages these secondary networks, such as exposing existing physical/provider networks into K8s.
  • Distributed Cloud Manager - Presents a single logical cloud from multiple edges. It is used for stitching the clusters onboarded to the platform.
  • Application Config Manager - Enables distribution of application/CNF configuration across Edges & Clouds.
  • Cert Distribution Manager - Enrolls CA certificates using tenant specific parent CAs and distributes them across tenant specified K8s clusters.
  • Distributed Application Manager - Orchestrates the complex applications (or network services) with the help of various placement controllers. Works with various action controllers to enable secure communication among the microservices.
    - Hardware Platform Aware Controller - Enables selection of K8s clusters based on microservices hardware requirements.
    - 5GFF EDS Placement Controller - Enables selection of K8s clusters based on latency requirements of application microservices, UE capabilities, 5G Network requirements.
    - Generic Action Controller - Allows the customization of K8s resources of applications. Some customization examples include CPU/Memory limits based on destination cluster type.
    - Secure Mesh Controller - Auto-configures service mesh (ISTIO/Envoy) of multiple clusters to enable secure L7 connectivity among microservices in various clusters. Also, it can configure ingress/egress proxies to allow external L7 connectivity to/from microservices.
    - Secure WAN Controller - Automates firewall & NAT policies of cloud native OpenWRT gateways to enable L3/L4 connectivity among microservices and also with external entities.
    - Temporal Controller - Provides a way for third parties to develop workflows that need to be executed as part of a complex application (or network service) life cycle.
    - SFC Controller - Allows the automation of service function chaining of multiple CNFs.
  • Resource Synchronizer & Status Monitoring - Manages instantiation of resources to clusters using various plugins to address clusters from various providers.
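As a flavor of what driving these controllers looks like, here is a hedged Go sketch of creating a cluster provider through the Cluster Manager’s REST API. The base URL, port, endpoint path, and payload shape are assumptions based on EMCO’s v2 REST conventions; consult the EMCO docs or emcoctl examples for the authoritative API.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Onboard a cluster provider into EMCO. Endpoint and payload are assumed
// for illustration, not quoted from the EMCO API reference.
func main() {
	payload := map[string]interface{}{
		"metadata": map[string]string{
			"name":        "edge-provider", // hypothetical provider name
			"description": "example cluster provider",
		},
	}
	body, _ := json.Marshal(payload)
	resp, err := http.Post(
		"http://emco.example.com:30415/v2/cluster-providers", // assumed address/port
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
	// Registering the clusters themselves additionally uploads a kubeconfig
	// (a multipart request in EMCO), omitted here for brevity.
}
```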

EMCO 22.03 Highlights

The EMCO 22.03 release brings several improvements, including:

  • EMCO GitOps Integration

Resources (services and Kubernetes objects) created via EMCO are now GitOps-enabled when deployed to target clusters. Additional controllers push all resources to a Git repo in a specific directory structure, enabling any pull request to these resources to be reconciled by agents like Flux V2. This allows for a complete continuous deployment (CD) cycle.

  • Modify instantiated logical cloud

There are two kinds of logical clouds – Standard and Admin. An Admin logical cloud gives cluster-wide authorization to the user. In a Standard logical cloud, the user has to specify a namespace, resource quotas, and permissions for the kinds of Kubernetes resources that specific user can access. Until this release, once a logical cloud was created, it could not be modified. In this version, an instantiated logical cloud can be modified.

  • Enhanced Status Querying/Notifications

In EMCO, a monitoring agent provides the status of resources (e.g., Kubernetes objects) deployed in the target cluster, along with the orchestrated applications. Newly added are a subscription mechanism for notifications and enhancements to the status API itself.

  • New Features Introduced on the Web-Based UI
    - RBAC or Role Based Access on the GUI - A way of granting users granular access to Kubernetes-based API resources. It implements a security design that provides restricted access to Kubernetes resources based on the role of the user.
    - Standard Logical Cloud - Until now, only the admin logical cloud was supported on the GUI; now the standard logical cloud has also been integrated.
    - Service Discovery on the GUI - Now integrated is a specific sub-controller, underneath the Distributed Traffic Controller, called the Istio traffic subcontroller. When orchestrating applications across multiple clusters, service discovery helps applications in one cluster reach the others. EMCO creates all the Istio resources needed to make service discovery possible when deploying applications across multiple clusters.
  • Temporal Workflow Engine

This is a new controller added in EMCO that orchestrates and manages Temporal workflows based on use cases.

UI Enhancements: The EMCO 22.03 demo seen here presents a subset of the features of EMCO 22.03 and focuses on the enhancements in the UI. Here you can see how to log in as an admin, onboard the controller, and create a user along with a tenant. EMCO is shown orchestrating two apps (client and server) across two Kubernetes clusters.

Want to learn more about EMCO? We encourage you to explore the EMCO Wiki, EMCO Repos on GitLab, join the EMCO Mailing list, and attend calls. Send any project related questions to Louis Illuzzi: lilluzzi [at] linuxfoundation [dot] org.

Aarna

Service Orchestration using BPMN with Domain Orchestration using CDS

BPMN

Business Process Model and Notation (BPMN) is a process modeling standard owned by the Object Management Group (OMG), a standards development organization. BPMN has become the de facto standard for understanding business process diagrams. The stakeholders of BPMN are people who design, manage, and realize business processes; BPMN diagrams are then translated into software process components. It provides businesses with the capability to understand their internal business procedures using a graphical notation, and it uses an XML schema-based specification.

Camunda

Camunda is an open-source, Java-based BPMN engine, compliant with the BPMN 2.0 specification, that provides intelligent workflows. Aarna.ml uses the Camunda Modeler, a UI for designing workflows.

There are different Camunda components:

●     Open source BPMN engine compliant with BPMN 2.0 specifications

●     A Java-based framework providing intelligent workflows

●     A Camunda Modeler UI to design workflows

●     A Java-based process engine library responsible for executing BPMN 2.0 processes:

  • Case Management Model and Notation (CMMN 1.1) - this is a standard for modeling cases, while BPMN is a standard for modeling processes
  • Decision Model and Notation (DMN 1.1) decisions - used to configure any rules required for the given use cases, or to store configuration for a given use case.
  • Default support for JavaScript & Groovy based tasks

●     Uses a relational database for persistence - this stores all state-related information.

The Camunda Modeler is a UI for designing workflows with drag-and-drop notation.

                                Fig 1 - Camunda Modeler

Camunda Process Engine Architecture has multiple components:

  1. Modeler - The Camunda UI or desktop application where we design the processes. This comes under the design phase.
  2. Task List - An out-of-the-box web application tightly integrated with Camunda. When we deploy any process as a user task, it can be seen in the Task List.
  3. Cockpit - A web application used for process monitoring, e.g., if we have deployed a process and it is still running, then through the Cockpit we can check its status.
  4. REST API - Used to interface with external components. Its goal is to provide access to the engine interface.
  5. Custom application - Using this, we can have our own applications and, through the REST API, access all the engine interfaces.

                                          Fig 2 - Camunda Process Engine Architecture

Engine interfaces connect to the database which stores all the process states.

ONAP CDS Definitions and Concepts

CDS (Controller Design Studio) is a framework used to automate the resolution of resources for instantiation and any configuration provisioning operation. There are different concepts in CDS:

  1. Data Dictionary - A list of parameters that need to be resolved during runtime.
  2. Resolution - Provides the value of a configuration parameter during runtime. The source of these values can be user input, a default, or a value fetched through REST, SQL, and many more.
  3. Configuration Provisioning - Used for complex interaction with southbound APIs. CDS provides workflows (in ONAP directed graph format) and scripts (Kotlin, Python).
  4. Modeling - Used for defining data dictionaries, resolution mechanisms, workflows, and scripts, which are part of provisioning. CDS uses TOSCA and JSON for modeling, and all the model information is stored as a CBA (CDS Blueprint Archive).

Camunda and CDS Interaction Diagram (Synchronous)

In this synchronous flow, the actor calls the workflow using the REST API or through the UI. This triggers CBA workflow1, which is deployed in CDS; thus at runtime we trigger the CBA. The CBA returns a value back to the Camunda workflow. We then call CBA workflow2, which returns its response. In the Camunda workflow we can consolidate all the responses received from the CBAs and return them to the consumer in the particular format required. Each process has a unique ID, and in each response we will find the response ID and the relevant response data.

                            Fig 3 - Camunda and CDS Interaction Synchronous
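A minimal Go sketch of the synchronous pattern: the consumer starts the Camunda process over the standard REST API with withVariablesInReturn set, so the consolidated responses come back in the same HTTP response. The base URL, process key, and variable names are illustrative assumptions.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	reqBody := map[string]interface{}{
		"variables": map[string]interface{}{
			"nfName": map[string]interface{}{"value": "upf-1", "type": "String"}, // hypothetical input
		},
		// Ask Camunda to include process variables (the consolidated CBA
		// responses) in the start-response body.
		"withVariablesInReturn": true,
	}
	b, _ := json.Marshal(reqBody)
	resp, err := http.Post(
		"http://camunda.example.com:8080/engine-rest/process-definition/key/cds-sync-flow/start",
		"application/json",
		bytes.NewReader(b),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var out map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println("process id:", out["id"], "variables:", out["variables"])
}
```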

Camunda and CDS Interaction Diagram (Asynchronous)

The consumer calls a workflow that is defined as asynchronous. It immediately returns a response to the user with the process UUID, and in the background it calls the CBAs configured in the workflow and stores the responses in the Camunda workflow cache; at the end they are persisted in the relational database. If the user has exposed a REST API for receiving the response, we can send the response back through it. Alternatively, users can use the standard Camunda REST API to get the status of the workflow and retrieve the response data. If we have CBAs that are long running, we use the asynchronous interaction.

                            Fig 4 - Camunda and CDS Interaction Asynchronous
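And a matching sketch of the asynchronous pattern: the consumer holds only the process UUID from the start call and polls Camunda's history REST API until the instance completes, then fetches the persisted variables holding the responses. The base URL, instance ID, and variable handling are illustrative placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	base := "http://camunda.example.com:8080/engine-rest"
	id := "aProcessInstanceUUID" // returned by the asynchronous start call

	// Poll the history API until the long-running CBAs finish.
	for {
		var inst struct {
			State string `json:"state"` // e.g. ACTIVE, COMPLETED
		}
		resp, err := http.Get(base + "/history/process-instance/" + id)
		if err != nil {
			log.Fatal(err)
		}
		json.NewDecoder(resp.Body).Decode(&inst)
		resp.Body.Close()
		if inst.State == "COMPLETED" {
			break
		}
		time.Sleep(2 * time.Second)
	}

	// Fetch the persisted workflow variables holding the response data.
	resp, err := http.Get(base + "/history/variable-instance?processInstanceId=" + id)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var vars []map[string]interface{}
	json.NewDecoder(resp.Body).Decode(&vars)
	fmt.Println("response variables:", vars)
}
```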

For more information -

https://docs.camunda.org/manual/latest/

https://camunda.com/best-practices/invoking-services-from-the-process/

https://docs.camunda.org/manual/latest/reference/bpmn20/

Below is the link to our webinar on the same topic, which includes the demo as well -

https://www.youtube.com/watch?v=tSMan-0ENy0&t=1099s

Aarna

What’s new in AMCOP 2.3.0 and 2.4.0?

Aarna Multi-Cluster Orchestration Platform, or AMCOP, is an open-source platform for the orchestration, lifecycle management, and automation of network services consisting of CNFs, PNFs, and VNFs, as well as MEC applications consisting of Cloud Native Applications (CNAs). AMCOP can be used for use cases like 5GC orchestration, MEC orchestration, O-RAN SMO, and many more.

The new features introduced in AMCOP 2.3.0 are given below:

  • Kubernetes object management (Create/Modify) using GAC controller
  • Multi cluster orchestration support using service mesh
  • High Availability support
  • NWDAF support in AMCOP
  • Open Policy Agent (OPA) support in AMCOP
  • PNF device management support using AMCOP

Kubernetes object management using the Generic Action Controller (GAC) helps with Day 0 and Day N management of Kubernetes objects. During Day 0 configuration, if the user wants to deploy an application across multiple clusters, but an environment variable has to differ across the target clusters, GAC lets the user specify these details so that the application can be deployed across the target clusters with the appropriate value on each one. GAC also helps with Day N configuration: if the application is already running and the user wants to edit a value in a ConfigMap, she can do that from the single-pane dashboard of AMCOP, avoiding manual modification across hundreds of clusters.

Multi-cluster orchestration support using a service mesh is achieved through the Distributed Traffic Controller (DTC). If an application running on one target cluster wants to discover and talk to an application running on a different target cluster, discovery takes place first and then network traffic steering. Service entries help the application running on one target cluster discover the application running on the other; this is taken care of by the DTC of AMCOP.

As part of high availability support, AMCOP now supports multi-master, multi-worker deployments. If a master or worker node goes down, the workload gets redistributed automatically across the available worker nodes, with the master node taking care of rescheduling.

AMCOP now has NWDAF support, which introduces the analytics part of a 5G network. If a Network Function requires analytics information, it connects to the NRF; since NWDAF is also registered with the NRF, the NRF points the requesting NF to NWDAF, which serves the analytics. With NWDAF, AMCOP can execute closed-loop automation: for example, if CPU usage is predicted to be high, AMCOP can trigger a horizontal scale-out. Thus an appropriate action can be triggered by an incident (a small sketch of such an action follows Fig 1).

Fig 1: NWDAF in AMCOP
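As a hedged sketch of what such a closed-loop action can boil down to, here is a small Go program that bumps a Deployment's replica count via the Kubernetes scale API when a (here hard-coded) CPU prediction crosses a threshold. The namespace, deployment name, and threshold are illustrative assumptions; in AMCOP the prediction comes from NWDAF and the action is driven by the policy engine, not a toy program like this.

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig from the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	predictedCPU := 92.0 // stand-in for an NWDAF analytics result
	if predictedCPU > 80 {
		ctx := context.Background()
		// Read the current scale of a hypothetical NF deployment.
		scale, err := cs.AppsV1().Deployments("core5g").GetScale(ctx, "smf", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		scale.Spec.Replicas++ // horizontal scale-out by one replica
		if _, err := cs.AppsV1().Deployments("core5g").UpdateScale(ctx, "smf", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
		log.Printf("scaled smf to %d replicas", scale.Spec.Replicas)
	}
}
```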

The Open Policy Agent (OPA) is a lightweight and powerful CNCF policy engine project that is included in AMCOP. Using it, the admin can push user-defined policies that alter the actions taken in closed-loop automation; the policy engine evaluates these policies to alter the behavior of the closed loop.

AMCOP can be used to orchestrate both PNF and CNF based applications (and VNFs through KubeVirt). It has a component called CDS (Controller Design Studio) and a Camunda based workflow engine that help with Day 0 and Day N configuration of PNF devices. Day 0 involves powering up devices, starting workflows, and discovering devices; once the Day 0 configuration is successfully pushed using CDS, Day N configuration can also be executed on PNF devices using AMCOP.

AMCOP 2.4.0 was recently announced; the enhancements are as follows:

●     RBAC support (Early access)

●     Log4j vulnerability fixes for AMCOP components

●     Upgrade support from a prior AMCOP release

●     Target cluster monitoring agent (Early access)

●     Performance improvements

Currently, two roles are possible in AMCOP: admin and tenant. The admin role is a superset of the tenant role. Default admin credentials are created during deployment of AMCOP; using these credentials, the admin can log in, change the password, and then start creating tenants. Tenants are one level lower than admins in terms of privileges. The admin can see all the tenants, but one tenant cannot see another tenant created by the admin. For example, each business unit in an organization can be given a separate tenant with access to only the applications that the tenant should have access to.

This latest version of AMCOP addresses the vulnerability in Log4j (ver 2.8) arising from Log4Shell. Log4Shell is a remote code execution vulnerability through which remote attackers can take control of any device on the internet if the device is running an application based on a vulnerable Log4j version. Some of the components of AMCOP use the Log4j library; in version 2.4.0 of AMCOP, this vulnerability has been fixed to make sure AMCOP is safe and secure.

The current version of AMCOP provides one-click upgrade support from the prior release, thus avoiding a reinstallation of AMCOP and the associated database migration. Now any production server running AMCOP 2.3.0 can be upgraded to AMCOP 2.4.0 without any data loss.

AMCOP 2.4.0 has a Target Cluster Monitoring Agent. When a user creates a logical cloud, a monitoring agent gets deployed across all the onboarded target clusters that are part of the logical cloud. In the earlier version, AMCOP did not have information about the exact state of the applications running in the target clusters. With this monitoring agent, AMCOP knows the real state of the application or composite application running on the target cluster, and the AMCOP dashboard now displays the live status of the applications.

AMCOP 2.4.0 also provides faster deployment and faster upgrades, greatly improving the overall performance of AMCOP.

Try AMCOP for free at aarna.ml/amcop.

Aarna

Aarna.ml MWC Barcelona 2022 Demos

Aarna.ml is an open-source software company that enables zero-touch management of 5G networks and edge computing applications. Our flagship product is AMCOP, the Aarna.ml Multi-Cluster Orchestration Platform, an open-source orchestration, lifecycle management, and closed-loop automation platform for cloud native network services and edge computing applications. It consists of:

  • Linux Foundation Edge Multi-Cluster Orchestrator (EMCO) for intent-based orchestration and Day-0 configuration
  • Linux Foundation Open Network Automation Platform (ONAP) CDS component for Day 1 & 2 configuration and LCM; for the O-RAN SMO use case, this component is supplemented with OpenDaylight MDSAL
  • Kafka, CNCF Open Policy Agent (OPA), along with select ONAP DCAE microservices for analytics and closed-loop automation
  • Numerous CNCF and related projects (Istio, Prometheus, …)
  • Proprietary value adds such as 5G network slicing, O-RAN NONRTRIC, NWDAF

At the recently concluded MWC in Barcelona, Aarna.ml had the following live demonstrations:

1. Joint Demo with AWS and RedHat at AWS Experience Booth in MWC Barcelona

In this demo, we showed how the Aarna.ml Multi-Cluster Orchestration Platform (AMCOP) was installed on a Red Hat OpenShift Service on AWS (ROSA) cluster and how AMCOP deployed a 5G core network service on another Kubernetes cluster running on an AWS EC2 instance. ROSA provided a way to accelerate application development by leveraging familiar OpenShift APIs and tools for deployments on AWS. AMCOP was one such application that could be installed on ROSA with a Kubernetes Operator. AMCOP in turn deployed cloud-native network services and edge computing applications to other ROSA, AWS EKS, or Kubernetes on AWS EC2 clusters.

Read the Press Release to know more - https://bit.ly/3t8r3ww

See the recorded demonstration - https://youtu.be/gqF6oasNyeM

2. Joint Demo with Quanta Cloud Technology at the QCT Booth in MWC Barcelona

In this demo, the scale-out of the QCT 5G Core was demonstrated across two Kubernetes clusters using AMCOP. The demo focused on the following:

  • Installation of AMCOP on Kubernetes cluster #1
  • Creation and registration of Kubernetes clusters #2 & #3 onto AMCOP
  • Onboarding of QCT 5GC CNFs onto AMCOP
  • Creation of a 5GC Network Service with onboarded CNFs
  • Orchestration of the 5GC network service onto target cluster #2 by specifying placement intents
  • Orchestration of Prometheus which scrapes CPU load information from cluster #2
  • Egress of CPU load data from Prometheus to AMCOP
  • Automatic scale-out of AMF network function into cluster #3 triggered by AMCOP when the CPU load exceeds the threshold

See the recorded demonstration - https://youtu.be/8p82GErOkm8

3. Free5GC Core with Multiple Anchor UPF Orchestration via AMCOP at the TelcoDR Cloudcity Booth in MWC Barcelona

In this demonstration, we showed orchestration of Free5GC Core using AMCOP across multiple Edge Clusters with Uplink Classifier (ULCL) mode enabled and then tested it with UERANSIM (Simulator for End Device).

See the recorded demonstration - https://www.youtube.com/watch?v=qXI5P1m592g&list=PLyQ7hs1Psze4zaBPdRUgY5zL6U_PQgB2N&index=1

4. Multidomain orchestration using Terraform & ONAP CDS at the TelcoDR Cloudcity Booth in MWC Barcelona

In this session, we showed how EMCO can be integrated with other open-source projects (Terraform, the Camunda workflow engine, and ONAP CDS) to perform multidomain orchestration of cloud and edge services.

See the recorded demonstration - https://www.youtube.com/watch?v=gO_liMAxuRs&list=PLyQ7hs1Psze4zaBPdRUgY5zL6U_PQgB2N&index=2

Aarna.ml had also joined the Oracle for Startups Team at Mobile World Congress in the 4YFN hall. Read the blog to know more - https://blogs.oracle.com/startup/post/startups-mobile-world-congress