
Amar Kapadia

Pets vs. Cattle Part II: Hint, This Time it is the Applications

If you thought the Pets vs. Cattle saga was over in 2016, you'll be thrilled or disappointed, as the case may be, that I'm going to resuscitate the thread but in a different context.

Pets vs. Cattle Part I was all about the infrastructure. In the good old days, we used to give each server tender loving care (TLC), just like we would a pet. Operating system provisioning and updates, BIOS updates, BMC firmware updates, peripheral (e.g. NIC) installation and configuration, RAID configuration, logical volume management, remote KVM access, IPMI management, server debug, and more were all performed manually on a per-server basis. Applications would then be installed manually on a given server or set of servers. During the 2009-2016 timeframe, the cloud architecture completely standardized and automated the entire server management task. Server management became akin to managing cattle, hence the term. Applications were no longer installed on a particular server; instead, they were deployed on the "cloud," and the cloud layer, whether public cloud, OpenStack, or Kubernetes (K8s), would take care of placing the specific VMs or containers onto individual server nodes (amongst other things).

However, applications have continued to be treated as pets. Each application receives TLC. Even with cattle-style infrastructure (i.e., a cloud framework), applications are installed onto an individual cloud using a declarative template such as Terraform, Helm Charts, or OpenStack Heat, and configured manually or with tools such as Ansible. Service assurance, or fixing problems, has revolved around humans looking at application dashboards (also called an Element Management System or EMS) and alerts, and closing tickets manually.
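To make the "pets" workflow concrete, here is a minimal sketch of the per-cluster, per-application deployment an operator drives by hand today. The chart names, cluster contexts, and Day 0 parameters are hypothetical; the commands are only constructed, not executed.

```python
# Sketch of the manual, per-cluster "pets" workflow described above.
# All chart/cluster names and parameters are illustrative placeholders.

def helm_install_cmd(site: str, app: str, day0_overrides: dict) -> list[str]:
    """Build the `helm install` command an operator would run by hand
    against one cluster, with per-site Day 0 parameter overrides."""
    cmd = ["helm", "install", f"{app}-{site}", f"charts/{app}",
           "--kube-context", site]
    for key, value in day0_overrides.items():
        cmd += ["--set", f"{key}={value}"]
    return cmd

# One command per (site, application) pair -- this is what fails to scale.
sites = [f"edge-{i:04d}" for i in range(3)]        # imagine 1,000 of these
apps = {"upf": {"n4.address": "10.0.0.1"}, "amf": {"plmn.mcc": "001"}}

commands = [helm_install_cmd(s, a, p) for s in sites for a, p in apps.items()]
print(len(commands))        # 3 sites x 2 apps = 6 manual invocations
print(" ".join(commands[0]))
```

Even this toy loop makes the problem visible: the invocation count is the product of sites and applications, and every invocation carries site-specific parameters.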

Let's do a thought experiment to see how well a pets approach works for application management in the edge computing context. Assume 1,000 edge locations and 200 K8s applications per edge location. An application could be a network function such as a UPF, AMF, SMF, vFirewall, or SD-WAN; a cloud native application such as AR/VR, drone control, ad insertion, 360 video, or cloud gaming; AI/ML at the edge, such as video surveillance or radiology anomaly detection; or an IoT/PaaS framework such as EdgeX Foundry, Azure IoT Edge, AWS Greengrass, or XGVela. Furthermore, assume that the number of application instances goes up to 1,000 per edge site with 5G network slicing. This means there would be 1,000,000 application instances across all the edge sites in this example.

Here is the impact on application management:

Initial orchestration: The Ops team would have to edit 1,000,000 Helm Charts (to change Day 0 parameters), log into 1,000 K8s masters, and run a few million CLI commands. Clearly, this is not possible.

Ongoing lifecycle management: Log into 1,000,000 dashboards and manage the associated application instances (since very few application management dashboards manage multiple instances), OR run 200 Ansible scripts 5,000 times each with different parameters, which means executing the scripts 1,000,000 times. This is not practical either.

Service assurance: Monitor 1,000,000 dashboards and fix issues based on tickets opened. This is also not feasible.
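The back-of-the-envelope arithmetic behind the three impact areas above can be written out directly:

```python
# Scale arithmetic for the thought experiment: 1,000 edge sites,
# 200 K8s applications per site, 5x instances with network slicing.
edge_sites = 1_000
apps_per_site = 200
slices_per_app = 5                                     # 200 apps -> 1,000 instances/site
instances_per_site = apps_per_site * slices_per_app    # 1,000
total_instances = edge_sites * instances_per_site      # 1,000,000

# Pets-style management cost, per the three impact areas:
helm_edits = total_instances            # one Day 0 Helm Chart edit per instance
dashboards = total_instances            # one dashboard/EMS per instance
ansible_runs = 200 * 5_000              # 200 scripts x 5,000 parameterized runs

print(total_instances, helm_edits, dashboards, ansible_runs)
```

Every line of that arithmetic scales linearly with sites and instances, which is why a manual approach collapses long before these numbers are reached.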

Keep in mind that actual edge environments could scale even further: there could be 100,000 edge sites and 1,000 applications per site, mushrooming to 10,000 application instances per edge site with 5G network slicing. If you think this scale is a pipe dream, I'd remind you of Thomas Watson's comment from 1943: "I think there is a world market for maybe five computers."

So what's the solution to this seemingly impossible problem? Join us for the "What's New in AMCOP 2.0" meetup on Monday, February 15, 2021 at 7 AM Pacific Time for the answer, or see the next installment of this blog series next week.

Bhanu Chandra


The O-RAN alliance is defining the various components and interfaces required to disaggregate 5G RAN. According to the O-RAN site, the "O-RAN ALLIANCE is transforming the Radio Access Networks industry towards open, intelligent, virtualized and fully interoperable RAN." One of the components defined by O-RAN is the Non-Real-Time Radio Intelligent Controller (NONRTRIC).

According to the O-RAN Software Community (O-RAN-SC), which implements parts of the O-RAN specification, the NONRTRIC performs the following:

The Non-Real-Time RIC (RAN Intelligent Controller) is an Orchestration and Automation function described by the O-RAN Alliance for non-real-time intelligent management of RAN (Radio Access Network) functions. The primary goal of the NONRTRIC is to support non-real-time radio resource management, higher layer procedure optimization, policy optimization in RAN, and providing guidance, parameters, policies and AI/ML models to support the operation of near-RealTime RIC functions in the RAN to achieve higher-level non-real-time objectives.

Earlier this week, I demonstrated the NONRTRIC development environment from the O-RAN-SC Bronze release. As the name suggests, the development environment allows users to develop different functionalities and interfaces for the NONRTRIC. Please check out the webinar recording here.
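One of the things the NONRTRIC does is push A1 policies toward the near-real-time RIC. The sketch below shows the shape of such a policy create/update call. The endpoint path, policy type, and policy body are all hypothetical illustrations; consult the O-RAN-SC release documentation for the actual REST API of the NONRTRIC Policy Management Service in your release.

```python
import json

# Illustrative sketch of creating an A1 policy via a NONRTRIC-style
# policy management REST API. URL, policy type, and body are placeholders.

def build_a1_policy_request(ric_id: str, policy_id: str, policy_type: str,
                            body: dict) -> tuple[str, str, str]:
    """Return (method, url, json_body) for a policy create/update call."""
    url = (f"https://nonrtric.example.com/policy?id={policy_id}"
           f"&ric={ric_id}&type={policy_type}")       # hypothetical path
    return ("PUT", url, json.dumps(body))

method, url, payload = build_a1_policy_request(
    ric_id="ric-1",
    policy_id="qos-100",
    policy_type="qos-policy",                         # hypothetical type
    body={"scope": {"ueId": "ue-42"}, "qosObjectives": {"priorityLevel": 5}},
)
print(method, url)
# An operator tool would now issue the HTTP call, e.g. with `requests`.
```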

Ultimately, we will offer a fully productized commercial-grade NONRTRIC as part of our Aarna.ml Multi Cluster Orchestration Platform (AMCOP) product.

Separately, ONF announced the first release of the SD-RAN v1.0 software platform for Open RAN and mentioned that we are a member of the project. Our goal is to work on interop between our NONRTRIC and the ONF Near-Real-Time RIC.

If you are interested in replicating any of this in your lab, feel free to contact us.

Sandeep Sharma


5G is moving to an all software environment. Users will no longer deploy physical network functions except for the Radio Unit (RU). These software network functions require orchestration, ongoing management, and automation to be deployed at scale. A few instances can be managed manually, but at scale, automation is a must.

In addition, 5G network functions are moving to a cloud native architecture. As a result, the orchestration solution (we will use the term orchestration to include ongoing lifecycle management and real-time policy driven control loop automation as well) must support cloud native network functions (CNFs) that can be deployed on a Kubernetes edge/core/public cloud.

Our Aarna.ml Multi Cluster Orchestration Platform (AMCOP) product can accomplish the above: it can orchestrate both the 5G RAN and the 5G Core. On December 14, I showed a demo of AMCOP orchestrating the open source Free5GC 5G Core onto an open source Kubernetes edge cloud.

The demo goes through cluster registration, CNF onboarding, service design, and finally service orchestration. The service orchestration phase takes care of placement intent and day 0 configuration as well. If you want to watch the demo, you can do so here.
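The four demo phases map naturally to a sequence of orchestrator API calls. The sketch below expresses them against an EMCO-style REST API; every path and resource name here is an illustrative assumption, not AMCOP's actual interface.

```python
# The four demo phases as a sequence of REST calls against an EMCO-style
# orchestrator API. All paths and names are illustrative placeholders.

steps = [
    # 1. Cluster registration: tell the orchestrator about the edge cluster
    ("POST", "/v2/cluster-providers/edge/clusters",
     {"name": "edge-k8s-1"}),
    # 2. CNF onboarding: upload the Free5GC Helm package
    ("POST", "/v2/projects/demo/composite-apps",
     {"name": "free5gc"}),
    # 3. Service design: group the CNFs into a deployable service
    ("POST", "/v2/projects/demo/composite-apps/free5gc/deployment-intent-groups",
     {"name": "core-service"}),
    # 4. Service orchestration: placement intent + Day 0 config, then deploy
    ("POST", "/v2/projects/demo/composite-apps/free5gc/"
             "deployment-intent-groups/core-service/instantiate",
     {}),
]

for method, path, body in steps:
    print(method, path)
```

The key point is that every phase is a declarative API call, so the same sequence can be replayed against any number of registered clusters instead of being typed into each one.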

A number of our customers have recreated, or are in the process of recreating, this demo in their own lab. If you would like to do so as well, please contact us.

Aarna


I'm thrilled to announce the Aarna.ml User Group on Meetup. We plan to have 5G and edge computing-related technical meetups roughly every two weeks on Mondays at 7 AM Pacific Time. These sessions will be hands-on technical demo sessions with presentations. No marketing fluff, just a sneak peek at the technology we are working on. In addition, you will get to meet Aarna.ml engineers, ask them questions, and network with other community members who share the same interests as you.

The first five meetups are as follows:

Please join us! We look forward to seeing you. And, if you want to recreate any of these demos in your lab, contact us at info@aarna.ml.

Sriram Rupanagunta


I'm pleased to announce that we have joined the Open Networking Foundation. Our primary interest is in the SD-RAN project. What makes SD-RAN interesting to us is the fact that this community is building an open source near-real-time RIC (nRT-RIC) and a set of xApps. As a quick backgrounder and the context for our interest, let's review the O-RAN architecture diagram first (from an ONAP point of view).

Our product, the Aarna.ml Multi Cluster Orchestration Platform (AMCOP), serves the role of what O-RAN calls an SMO (Service Management & Orchestration). AMCOP onboards and orchestrates the various O-RAN components, performs configuration and lifecycle management on these components, and then monitors them for analytics and closed control loop automation. AMCOP will also implement the Non-Real-Time RIC functionality (NONRTRIC). We utilize (or plan to utilize) ONAP projects such as EMCO, CDS, DCAE, Policy, and DMaaP to build AMCOP.

AMCOP will have multiple interfaces to other O-RAN components. The main ones are O1, for fault management (FM), performance management (PM), and configuration management (CM), and A1, for providing policies to the nRT-RIC.

By participating in the SD-RAN project, we hope to contribute to and benefit from the community in the following ways:

  • Collaborate on the O1 interface to configure the nRT-RIC
  • Collaborate on the A1 interface to provide policies to nRT-RIC
  • Collaborate on defining rApps (that reside in the NONRTRIC) that might be beneficial to the nRT-RIC
  • Perform interop testing between AMCOP and the ONF nRT-RIC

We believe that this will be a great two-way street: we can hopefully contribute to the community and, at the same time, benefit from interacting with the top minds in the O-RAN space.

If you have any thoughts or questions, please do reach out to us at info@aarna.ml. We already have a video of AMCOP and the O1 interface and we will be publishing future blogs and demo videos on our O-RAN activities.

Aarna


The Linux Foundation announced ONAP certification yesterday to help close the human talent gap that is emerging with the growth of network automation, 5G, and edge computing. As per the Linux Foundation, "COP [Certified ONAP Professional] is a three-hour, performance-based certification exam that provides assurance that a certificant has the ability to onboard Virtual Network Functions (VNFs), design and deploy network services, and configure VNFs."

This is a great step forward to help create a pool of experts with baseline understanding of ONAP and the underlying concepts of xNF onboarding, service design, service orchestration, xNF lifecycle management, and closed control loop automation.

You might be wondering: how can you prepare for this exam? As it turns out, we at Aarna.ml can help. We have a comprehensive 5-day course, the Certified ONAP Professional course, to help you prepare for the exam. The training is instructor-led, and due to COVID, the entire session will be virtual. If you sign up before December 15, 2020, we offer a 5% discount for 5+ participants and a 10% discount for 10+ participants. The minimum number of participants is 3, and the scheduling of the actual training is flexible and can also be pushed to 2021.

If you have some 2020 training budget left over, why not sign up?