
Brandon Wick

The O-RAN Opportunity: Business Advantages

This blog highlights the key points we discussed in our recently concluded webinar on O-RAN and its business opportunity. The proliferation of cloud-native technologies and the adoption of open source in 5G networks are driven by the need for scalability, flexibility, service orchestration, cost efficiency, collaboration, and interoperability. These trends help address the challenges posed by the scale and complexity of 5G deployments and foster innovation. The cloud-native approach allows for the efficient allocation of computing resources, enabling dynamic scaling and on-demand provisioning to meet varying network demands.

O-RAN: The Final Frontier

O-RAN is the "final frontier" for network operators seeking open, interoperable, virtualized, and disaggregated HW+SW solutions. By embracing O-RAN, network operators can break free from traditional closed, proprietary network architectures. They gain the ability to select and integrate components from different vendors, promote innovation, reduce costs, enhance security, and ensure long-term scalability.

The RAN currently represents telecom operators' largest resource allocation (including CapEx and OpEx). With typical radio base stations often operating at only 25% to 50% capacity, there is also significant room to improve its utilization.

Main Benefits of O-RAN:

  • Openness
  • Interoperability 
  • Reduces vendor lock-in
  • Allows for “best of breed” solutions
  • Fosters greater innovation

O-RAN Architecture

Fig 1: O-RAN Architecture

The O-RAN architecture disaggregates RAN components so that they can run on a cloud (on-prem, edge, or public). Each component can come from a different vendor, yet they interoperate because the interfaces are standardized. Below is a high-level explanation of the main components:

  • Service Management & Orchestrator (SMO)
  • Near-RT RIC
  • O-RU (O-RAN Radio Unit)
  • O-DU (O-RAN Distributed Unit)
  • O-CU (O-RAN Centralized Unit)
  • O-Cloud (Open Cloud)

Role of O-RAN SMO 

  • The O-CU, O-DU, and O-RU RAN functions are managed and orchestrated (Day-0/Day-N) by the O-RAN SMO.
  • It carries out FCAPS procedures in support of RAN tasks.
  • The functionality of the O-RAN SMO is a logical fit for AMCOP, which carries out:
      • Orchestration (Day-0)
      • Lifecycle management (Day-N LCM)
      • Service assurance and automation using open and closed loops
  • AMCOP is a fully functional O-RAN SMO that can also orchestrate other network functions.

RAN-in-the-Cloud

"RAN-in-the-Cloud" refers to the concept of deploying Radio Access Network (RAN) components in a cloud-based environment. Here the RAN functions, such as baseband processing and control functions, are virtualized and run on cloud infrastructure instead of dedicated hardware. This virtualization allows for more flexibility, scalability, and cost-efficiency in deploying and managing RAN networks. Instead of deploying and maintaining physical equipment at each cell site, RAN-in-the-Cloud enables centralized control and virtualized RAN functions that can serve multiple cell sites.

O-RAN-based implementations must use an end-to-end cloud architecture and be deployed in a genuine data center cloud or edge environment in order to be more valuable than proprietary RAN. RAN-in-the-Cloud is a 5G radio access network running as a containerized solution in a multi-tenant cloud architecture alongside other applications. These can be dynamically assigned in an E2E stack to improve utilization, lower CapEx and OpEx for telecom carriers, and enable the monetization of innovative edge services and applications like AI. 

During periods of underutilization in a RAN-in-the-Cloud architecture, the cloud can run additional workloads. Power usage and effective CAPEX will both be greatly decreased. With the ability to support 4T4R, 32T32R, 64T64R, or TDD/FDD on the same infrastructure, the RAN will become versatile and programmable. Additionally, MNOs can employ a GPU-accelerated infrastructure for other services like Edge AI, video applications, CDN, and more when the RAN isn't being used to its full potential. 

RAN-in-the-Cloud POC

This Proof of Concept (POC) was commissioned by Aarna.ml along with its partners Nvidia and Radisys. The left-hand side reflects a peak-hour scenario, with an SMO, a CU and DU, and two servers. The Radio Units are connected through a switch, and a 5G Core runs alongside the SMO. During off-peak hours (shown on the right-hand side), when utilization drops, the vendor-neutral, intelligent SMO monitors the metrics of the underlying network and, based on predefined policies, can migrate some of the functions dynamically. For example, if DU2 and CU2 are utilizing only 50% of the server capacity, the SMO migrates DU2 to the first server and CU1 then serves both DUs. Server 2 is freed up and can be used for other applications (for example, an AI edge app).

Fig 2: Enabling Multi-tenancy with Dynamic Orchestration of RAN Workload
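The consolidation policy in this POC can be sketched in a few lines of Python. This is purely illustrative, not Aarna's actual SMO implementation: the server names, utilization figures, the 50% threshold, and the assumption that a server's capacity normalizes to 1.0 are all ours.

```python
# Illustrative sketch of a policy-driven SMO decision: when a server's RAN
# utilization drops below a threshold, consolidate its functions onto another
# server and free it up for other workloads (e.g., an AI edge app).

def plan_migration(servers, threshold=0.5):
    """Return a list of (source, target) moves that frees up servers."""
    ordered = sorted(servers, key=lambda s: s["utilization"])  # least loaded first
    moves = []
    for src in ordered:
        if src["utilization"] >= threshold:
            continue  # busy enough; leave in place
        for dst in reversed(ordered):  # try the most loaded server first
            if dst is src:
                continue
            if dst["utilization"] + src["utilization"] <= 1.0:
                moves.append((src["name"], dst["name"]))
                dst["utilization"] += src["utilization"]
                src["utilization"] = 0.0  # server freed for other applications
                break
    return moves

servers = [
    {"name": "server1", "utilization": 0.45},  # DU1/CU1, off-peak
    {"name": "server2", "utilization": 0.50},  # DU2/CU2, off-peak
]
print(plan_migration(servers))  # → [('server1', 'server2')]
```

In the real POC the SMO would act on live metrics over standard interfaces rather than static numbers, but the shape of the decision is the same.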

AI/ML is a Game Changer

Artificial intelligence is revolutionizing every industry. Not only can a COTS platform with GPU accelerators speed up the 5G RAN, but it can also be used for AI/ML edge applications. This offers a simple method for accelerating 5G RAN connectivity and AI applications on the same GPU resources. Network operators that run AI/ML applications at no additional resource cost, and use the output of those models for critical business decisions, will have a competitive advantage.

Role of AI in O-RAN

The O-RAN architecture supports running applications through standard interfaces.

These applications can be developed by multiple vendors, encouraging a vendor-agnostic ecosystem. For example, rApps/xApps for RAN optimization are developed by various vendors.

Additionally, the RAN or network data may be fed into a data lake and connected to an MLOps engine for further analysis. AI and ML can also, as discussed above, improve the resource utilization of the cloud running the RAN through analytics, and run cutting-edge AI applications on the same resources.

Requirements for AI/ML in O-RAN

  • General purpose compute with a cloud layer such as Kubernetes
  • General purpose acceleration, for example NVidia GPU, can be used by non-O-RAN workloads such as AI/ML, video services, CDNs, Edge IOT, and more
  • Software defined xHaul and networking 
  • Vendor neutral SMO (Service Management and Orchestration) that can perform the dynamic switching of workloads from RAN→non-RAN→RAN
  • The SMO also needs intelligence to understand how the utilization of the wireless network varies over time. 
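The last requirement, understanding how wireless utilization varies over time, can be illustrated with a trivial sketch. The diurnal load profile and the 30% headroom figure below are assumptions for illustration, not measured values.

```python
# Hypothetical sketch of the SMO "intelligence" requirement: given a
# utilization-over-time profile, decide in which hours GPU capacity can be
# lent to non-RAN workloads (AI/ML, CDN) and reclaimed before peak traffic.

def schedule_non_ran(hourly_utilization, headroom=0.3):
    """Hours where spare capacity meets `headroom` can host non-RAN jobs."""
    return [hour for hour, u in enumerate(hourly_utilization) if 1.0 - u >= headroom]

# A stylized diurnal RAN load curve: low overnight, peaking in the evening.
profile = [0.2, 0.15, 0.1, 0.1, 0.2, 0.4, 0.6, 0.8, 0.9, 0.85, 0.7, 0.5]
print(schedule_non_ran(profile))  # → [0, 1, 2, 3, 4, 5, 6, 10, 11]
```

A production SMO would of course forecast the profile from telemetry rather than take it as a given, and would drive the actual RAN→non-RAN→RAN switching.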

Sustainability is no Longer a Luxury

Reducing carbon emissions in network deployment is an important consideration for mitigating the impact of information and communication technology (ICT) infrastructure on the environment. Here are some strategies and technologies that can contribute to carbon reduction in network deployment:

  • Opting for energy-efficient hardware
  • Enabling multiple virtual network functions or services to run on a single physical server
  • Designing and operating data centers with efficient cooling systems, optimized airflow management, energy-efficient servers, and renewable energy sources
  • Transitioning to renewable energy sources such as solar, wind, or hydropower for powering network infrastructure
  • Implementing energy management systems and monitoring tools to track and optimize energy consumption 
  • Using fiber-optic networks as they use less power to transmit data over longer distances
  • Implementing energy-efficient protocols, such as Energy Efficient Ethernet (EEE)
  • Optimizing network routing and traffic management to reduce unnecessary data transmission
  • Encouraging infrastructure sharing and collaboration among network operators
  • Properly managing the lifecycle of network equipment

Here, the SMO can have a dramatic impact on the O-RAN Capital Expenditure (CAPEX).

O-RAN implementations lend themselves to sustainability and energy efficiency by using: 

  • Standard off-the-shelf servers rather than specialized hardware logic
  • Optimal resource utilization, since spare capacity can run other tasks
  • Virtualized RAN components, which reduce power consumption
  • Simpler on-demand scaling through container and VM workloads

For more details, check the recording of the webinar. Subscribe to our YouTube channel for more informative videos on O-RAN and its advantages. Get the RAN-in-the-Cloud Solution Brief.

Amar Kapadia

Cloud Edge Use Cases Part 2: Storage Repatriation

Storage Repatriation

From the AvidThink New Middle Mile report that features Aarna.ml, we see the following three use cases emerging for cloud edge (aka the new middle mile) orchestration:

1. Edge MultiCloud Networking

2. Storage Repatriation

3. Cloud Edge Machine Learning

In part 1 of this blog series, we discussed the Edge MultiCloud networking use case.

In this blog, we’ll explore the storage repatriation use case. In an insightful piece by Andreessen Horowitz titled “The Cost of Cloud, a Trillion Dollar Paradox”, the authors observe how the cloud is a powerful vehicle for a growth-stage company to accelerate its journey, yet the same cloud becomes a drag for later-stage companies by compressing their margins. The article continues with some best practices that include incremental repatriation from the public cloud. In another article, David Heinemeier Hansson discusses why Basecamp abandoned public clouds.

While we are not suggesting anything as radical as completely abandoning public clouds, we are suggesting that repatriating storage (and, in the future, databases) to a datacenter location should be considered. You could also call this design pattern cloud-adjacent storage. By being on a neutral site, the user can enjoy numerous benefits:

  • Run compute in public or private clouds and access storage without paying egress costs (thus cutting OPEX). 
  • Even if you plan to use the public cloud, you can take advantage of cost arbitrage and move your compute to different clouds on an almost daily basis, again slashing OPEX.
  • Get the ideal storage performance you want. Spot pricing makes this option especially attractive, and using spot instances is only possible if the persistent storage is in a neutral site.
  • Get the data closer to the edge, where Gartner predicts 75% of the world’s data will be created by 2025, thus saving on bandwidth costs of moving the data to the public cloud. For example, by using log filtering, most logs could be stored on the cloud edge while just a subset could be sent to services such as Splunk or to the public cloud.
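The egress-cost argument above is easy to put into numbers. The per-GB price and monthly volume below are placeholders (actual cloud egress and cross-connect pricing varies widely); the point is the structure of the comparison, not the figures.

```python
# Back-of-the-envelope comparison: reading the same data set from public-cloud
# storage (paying egress) vs. from cloud-adjacent storage at a neutral site.

def monthly_egress_cost(gb_per_month, price_per_gb):
    return gb_per_month * price_per_gb

data_gb = 500_000                                    # assume 500 TB read per month
public_cloud = monthly_egress_cost(data_gb, 0.08)    # assumed $/GB egress fee
cloud_adjacent = monthly_egress_cost(data_gb, 0.00)  # storage beside compute: no egress fee
print(f"public cloud egress: ${public_cloud:,.0f}/mo, cloud-adjacent: ${cloud_adjacent:,.0f}/mo")
```

At these assumed rates, cloud-adjacent storage eliminates a five-figure monthly egress line item; the same arithmetic also underpins the cost-arbitrage point, since compute can chase spot pricing across clouds while the data stays put.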

An example of cloud adjacent storage is as follows:

Aarna Edge Services (AES)

The AES SaaS offering provides initial orchestration and ongoing management of topologies such as the one shown above. It features an easy-to-use GUI that can slash weeks of work into less than an hour. In case of a failure, AES includes fault isolation and roll-back capabilities. The first version of AES supports the following digital products (with more to come):

  • Equinix Metal
  • Equinix Fabric
  • Equinix Network Edge (Cisco C8000V)
  • Azure Express Route
  • Pure Storage (coming soon)
  • AWS Direct Connect (coming soon)

Please see the demo and feel free to request a trial.

Brandon Wick

Nvidia Superchip Boosts Open RAN

Nvidia, the renowned graphics processing unit (GPU) giant, has now unveiled its groundbreaking "Grace Hopper" chip (Light Reading), signifying their continued commitment to advancing AI and OpenRAN technologies. This represents a significant leap forward in the realm of cloud economics and wireless connectivity.

Per Nvidia, Grace is used for L2+, Hopper (the GPU) is used for inline acceleration at Layer 1, and the BlueField DPU runs timing synchronization for open fronthaul 7.2 -- an interface between baseband and radios developed by the O-RAN Alliance. This results in an impressive 36 Gbit/s on the downlink and 2.5x more power efficiency. Softbank and Fujitsu are some early customers lining up behind the AI-plus-RAN approach of combining powerful AI analytics at the edge with a software-defined 5G RAN.

RAN-in-the-Cloud Demo

In a recent demonstration, Nvidia collaborated with Radisys and Aarna.ml to showcase RAN-in-the-Cloud, a 5G radio access network fully hosted as a service in multi-tenant cloud infrastructure running as a containerized solution alongside other applications. This Proof-of-Concept includes a single pane of glass orchestrator from Aarna and Radisys to dynamically manage the 5G RAN workloads and 5G Core applications and services end-to-end in real time on NVIDIA GPU accelerators and architecture.  

Learn more about this demo by downloading the solution brief.  This RAN-in-the-Cloud stack is ready for customer field trials in 2H 2023. Contact us to learn more.

Aarna

O-RAN Front and Center at Big 5G Event

We want to thank our hosts (Light Reading, Heavy Reading, Informa, Omdia) for a successful Big 5G Event 2023. It was great to have our Sr. Director of Marketing, Brandon Wick, participate in a panel on Open RAN and to hear a lot of industry commitment towards sustainability and an Open RAN marketplace.

How Open RAN Can Boost Innovation and Competition in the Telecom and Enterprise Industry

Open RAN disaggregates hardware from software, enables interoperability between different vendors and components, and is fundamentally shaking up the market. But how can service providers and enterprises adopt Open RAN? Will enterprises be the first to adopt Open RAN, and will it ever go mainstream with service providers? Join us for a panel discussion where experts will share their insights on these questions and more. You will hear from leaders in the field of Open RAN who will share their experiences, best practices, and lessons learned. You will also get to ask your questions and interact with the panelists and other attendees.

We look forward to more O-RAN collaboration in the future.


Bhanu Chandra

Understanding the O-RAN Opportunity: Architecture, Community Contribution, and Advanced Features & Use Cases.

Aarna.ml conducted a live webinar on May 10th, 2023 that covered the state of the art on O-RAN. The discussion comprised sessions from our experts Yogendra Pal, who took the audience through the O-RAN Architecture Overview, Pavan Samudrala, who gave an overview of the O-RAN Community and the different open source, collaborative efforts revolving around O-RAN, and Bhanu Chandra, who took the audience through advanced features and use cases of O-RAN. See the webinar slides. Watch the webinar on-demand.

O-RAN Architecture Overview

Figure 1 - O-RAN Architecture

The orange color-coded components are adapted from the O-RAN Alliance and the blue color-coded components are per the 3GPP specification. The various disaggregated RAN elements are installed on O-Cloud and managed through O-RAN Service Management and Orchestration (SMO). The SMO uses the standard O2 interface to bring up the distributed infrastructure. It also has interfaces (O1/O2/M-Plane) to the various RAN elements and decides the connections between network elements. It uses the O1 interface for configuration management.

The O-RU (Radio Unit) has an abstraction layer, and each of these interfaces has FCAPS defined; the SMO is responsible for managing them. The SMO contains a component called the Non-Real-Time RIC, which abstracts logic that was present in previous generations of deployments such as 3G and 4G. The RAN-element logic of traditional deployments has been abstracted into two components, the Non-Real-Time RIC and the Near-Real-Time RIC. The Near-RT RIC sits closer to the control plane of the disaggregated RAN elements, while the compute-intensive functions reside in the Non-Real-Time RIC.

O-RAN Software Community Contributions

The O-RAN Alliance and The Linux Foundation are developing the de facto standards for the O-RAN community. The O-RAN Alliance defines the specifications, and the O-RAN Software Community (O-RAN SC) follows those specs to implement O-RAN software. The key goals of the O-RAN SC are:

  1. Develop an open and intelligent RAN
  2. Align with the O-RAN Alliance
  3. Drive open interfaces and interoperability (xApps/rApps)
  4. Support collaboration with communities like ONF, TIP, ONAP, etc.
  5. Demonstrate capabilities
  6. Enable model- and data-driven operations and automation

TIP ROMA 

The goal of the TIP ROMA working group under the Telecom Infra Project (TIP) is to facilitate testing of disaggregated Open RAN elements. It is a functional testbed comprising partners' O-RU, O-CU, O-DU, and 5GC running on common hardware, created to foster the maturity of the RAN orchestration and management automation (ROMA) product and solution ecosystem. It outlines the requirements of telecom network operators, and by deciphering the real-world problems and requirements of MNOs, the community has defined test cases. O-RAN vendors (O-CU, O-DU, O-RU, SMO) come together to carry out interoperability tests per these test cases and can earn badging eligibility. Based on the results of these tests, telecom operators can pick their O-RAN vendors.

i14y Lab

The i14y Lab is another O-RAN interoperability test lab. Through the i14y Lab, Aarna.ml took part in the O-RAN Global PlugFest Autumn 2022, collaborating with Capgemini Engineering to demonstrate both the O1 and O2 interfaces of the Aarna AMCOP SMO. This lab is mostly driven by system integrators who run interop tests and recommend solutions to silicon vendors.

Advanced Features and Use Cases of O-RAN

Figure 2: Non-RT RIC and rApps

Non-RT RIC

  • Internal to SMO
  • A1 interface termination
  • Exposes R1 services to rApps

rApps

  • Value-added services
  • Radio resource management, data analytics, EI
  • AI/ML models

With the help of rApps, we can address various use cases:

  • Traffic steering use case 
  • QoE use case 
  • QoS based resource optimization 
  • Context based dynamic handover management for V2X 
  • RAN slice SLA Assurance 
  • Massive MIMO Optimization use case

Traffic Steering Use Case

  • Steering or distributing the traffic in a balanced manner for efficient usage of radio resources
  • Allows mobile network operators to configure the desired optimization policies in a flexible manner
  • To enable intelligent and proactive traffic management, one needs to use the appropriate performance criteria along with machine learning
  • Predict network and UE performance
  • Switch and split the traffic across access technologies in radio and applications

Models Used in Traffic Steering Use Case

  • Cell load prediction/user traffic volume prediction
  • Generate relevant A1 policies to provide guidance on the traffic steering preferences
  • Time-series prediction of individual performance metrics or counters
  • QoE prediction at each neighbor cell for a given targeted user

Input data in Traffic Steering Use Case

  • Load related counters, e.g., UL/DL PRB occupation
  • User traffic data, counters and KPIs
  • Measurement reports with RSRP/RSRQ/CQI information for serving and neighboring cells
  • UE connection and mobility/handover statistics with indication of successful and failed handovers
  • Cell load statistics such as number of active users or connections, number of scheduled active users per TTI, PRB utilization, and CCE utilization
  • Per-user performance statistics such as PDCP throughput, RLC or MAC layer latency, and DL throughput
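The first model listed above, cell load prediction, can be sketched minimally as a linear trend over recent load samples. A production rApp would use richer time-series or ML models on the counters listed above; the sample values here are made up, and this only illustrates the shape of the input and output.

```python
# Minimal sketch of cell load prediction: fit a least-squares line
# y = a + b*t to recent PRB-utilization samples and extrapolate one step.

def predict_next_load(samples):
    """Fit a linear trend to the samples and forecast the next interval."""
    n = len(samples)
    t_mean = (n - 1) / 2
    y_mean = sum(samples) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(samples))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den          # slope: load change per interval
    a = y_mean - b * t_mean
    return a + b * n       # forecast for the next interval

prb_utilization = [0.40, 0.45, 0.52, 0.58, 0.63]  # recent DL PRB occupancy
print(round(predict_next_load(prb_utilization), 3))
```

A forecast like this is what would feed the A1 policy generation step listed above, steering traffic away from cells trending toward congestion.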

QoE Use Case

  • This use case is intended for highly interactive, traffic-sensitive, and demanding 5G-native applications like AR/VR, which require low latency.
  • Application-level QoE estimation and prediction can assist in addressing this uncertainty, increase the effectiveness of radio resources, and ultimately enhance user experience.
  • To assist with traffic recognition, QoE prediction, and QoS enforcement choices, ML algorithms can be used to collect and process multi-dimensional data (user traffic data, QoE measurements, network measurement report).

Models Used in QoE Use Case

  • QoE prediction model
  • QoE policy model
  • Available BW prediction model

Input Data in QoE Use Case

  • Network-level measurement reports, e.g., UE-level radio and mobility metrics
  • Traffic pattern (throughput, latency, pps), RAN (PDCP buffer), cell level (DL/UL PRB)
  • QoE-related measurement metrics collected from SMO
  • User traffic data

QoS-Based Resource Optimization Use Case

This use case is relevant when we must provide priority to a certain group of users, such as first responders in an emergency, by pre-empting normal users or allocating them more bandwidth.

Context-Based Dynamic Handover Management for V2X

In a V2X ecosystem, vehicles and infrastructure elements need to communicate with each other and exchange information in real-time to enable various applications such as traffic management, collision avoidance, autonomous driving, and infotainment services. Context-based dynamic handover management aims to ensure uninterrupted and reliable communication by dynamically managing the handover process when a vehicle moves between different coverage areas or network technologies.

RAN slice SLA Assurance

RAN slicing allows network operators to partition their RAN resources into multiple virtualized slices. Each slice can be allocated to different services, applications, or user groups, enabling customized network capabilities and QoS (Quality of Service) levels. RAN slice SLA (Service Level Agreement) assurance refers to the process of monitoring and ensuring the fulfillment of agreed-upon service-level objectives for RAN slices in a network environment. It involves continuous monitoring, fault detection, performance optimization, and effective communication to maintain the desired quality of service for each RAN slice.
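The monitoring half of SLA assurance reduces to comparing each slice's measured KPIs against its agreed targets. The sketch below is hypothetical: the slice names, KPIs, and targets are illustrative, and a real system would feed the violations into a closed-loop reoptimization action rather than just report them.

```python
# Illustrative SLA-assurance check: flag each slice whose measured KPIs
# miss the agreed service-level objectives.

def check_slice_sla(slices):
    violations = []
    for name, s in slices.items():
        if s["latency_ms"] > s["sla_latency_ms"]:
            violations.append((name, "latency"))
        if s["throughput_mbps"] < s["sla_throughput_mbps"]:
            violations.append((name, "throughput"))
    return violations

slices = {
    "urllc": {"latency_ms": 12, "sla_latency_ms": 10,
              "throughput_mbps": 50, "sla_throughput_mbps": 20},
    "embb":  {"latency_ms": 40, "sla_latency_ms": 50,
              "throughput_mbps": 900, "sla_throughput_mbps": 1000},
}
print(check_slice_sla(slices))  # → [('urllc', 'latency'), ('embb', 'throughput')]
```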

Massive MIMO Optimization Use Case

Massive MIMO optimization enables the efficient utilization of network resources, mitigates interference, improves coverage and capacity, and enhances the overall quality of service in wireless communication networks, particularly in dense urban environments or high-traffic areas.

For the technical details on the AI/ML models used in SMO for setting up policies of the first two use cases please check our webinar.

AI/ML Model Training & Deployment

Figure 3: AI/ML Models in AMCOP O-RAN SMO

  • Data Collection: Relevant data is collected from the network. This data can include performance metrics, network configuration parameters, user behavior, and other relevant information. 

  • Data Preprocessing: Once the data is collected, it needs to be preprocessed to remove noise, handle missing values, and normalize the data for training ML models. Data preprocessing techniques like data cleaning, feature scaling, and data transformation may be applied to ensure the quality and consistency of the input data.

  • Model Training: AI/ML models, such as regression models, decision trees, random forests, or deep learning models, can be trained using the preprocessed data. 

  • Deployment: A trained AI/ML model can then be deployed in a production environment to process real-time data and provide actionable insights or recommendations.

  • Real-time Monitoring and Adaptation: Deployed AI/ML models continuously monitor network conditions, analyze incoming data, and provide real-time insights to the SMO system. 

  • Model Refinement and Retraining: Over time, the performance of AI/ML models can be further improved by periodically retraining them with updated data. Retraining can be performed offline, and the updated models can be seamlessly deployed without disrupting the SMO operations.
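The pipeline steps above can be walked through schematically in code. This is a deliberately trivial sketch, not AMCOP's implementation: the dummy metric values and the mean-based "model" are stand-ins so the flow (collect, preprocess, train, deploy, monitor) stays self-contained; a real SMO would plug in actual telemetry sources and ML frameworks here.

```python
# Schematic walk-through of the AI/ML pipeline: collect -> preprocess ->
# train -> deploy -> monitor, with a toy mean-based anomaly "model".

def collect():
    # e.g., performance metrics pulled from the network; dummy values here
    return [0.52, None, 0.48, 0.55, 0.51]

def preprocess(raw):
    cleaned = [x for x in raw if x is not None]     # handle missing values
    lo, hi = min(cleaned), max(cleaned)
    return [(x - lo) / (hi - lo) for x in cleaned]  # normalize to [0, 1]

def train(data):
    return sum(data) / len(data)                    # stand-in "model": the mean

def predict(model, x):
    return abs(x - model) > 0.5                     # flag anomalous samples

# Day-N loop: deploy the trained model, monitor fresh data in real time,
# and periodically retrain offline with updated data.
model = train(preprocess(collect()))
alerts = [x for x in preprocess(collect()) if predict(model, x)]
print(model, alerts)
```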

By incorporating AI/ML model training and deployment in O-RAN SMO, network operators can benefit from automated decision-making, proactive network management, and improved efficiency. The models can assist in optimizing network resources, enhancing user experience, and enabling intelligent orchestration of the O-RAN environment.

Thanks to AMCOP's comprehensive O-RAN SMO, a cloud-native application for orchestrating and managing O-RAN network activities, network operators can manage multi-vendor RAN environments and select best-of-breed network functions for validation and interoperability testing. Check the Aarna.ml product page to learn more.

See the webinar slides. Watch the webinar on-demand.

Amar Kapadia

Cloud Edge Use Cases Part 1: Edge MultiCloud Networking

Edge MultiCloud Networking

A couple of weeks ago, we published a blog on the AvidThink New Middle Mile report that features Aarna.ml. In the blog, we also talked about how we see the following three use cases emerging for the cloud edge (aka the new middle mile) orchestration:

1. Edge MultiCloud Networking

2. Storage Repatriation

3. Cloud Edge Machine Learning

In this blog, we will talk about the Edge MultiCloud Networking orchestration use case. We will cover the other two in subsequent blogs. 

Macro Trends

The datacenter space is growing at about 7.5% (source: Future Market Insights). Digital, software-driven products within data centers are growing substantially, from 10% of datacenter infrastructure in 2021 to 40% by 2025 (source: Gartner). Here are some examples of digital products available at a datacenter such as Equinix:

  • Compute: Equinix Metal, HPE GreenLake, AWS Outposts, Zenlayer, …
  • Storage: Pure Storage, NTAP, Dell APEX, Seagate Lyve, …
  • Networking: Equinix Network Edge that provides Cisco, Juniper, Palo Alto, Fortinet, Versa, …
  • DC⇔DC or DC⇔public cloud Connectivity: Equinix Fabric, Megaport, …
  • Kubernetes: Red Hat OpenShift, VMWare Tanzu, EKS Anywhere, Google Anthos,...

This is great news for enterprises because they can get the digital attributes of cloud computing (on-demand, elastic, self-service, an OPEX model) in the data center. In my mind, this is the exact definition of a cloud edge. Just having hardware in the datacenter does not amount to a cloud edge. It has to be digital.

The Edge MultiCloud Networking Problem 

Now that we have digital products in a datacenter, how do we compose environments that workloads can use? These workloads can run solely on the cloud edge or span the cloud edge and the public cloud. The answer is quite shocking: environments that require compute, storage, networking, Kubernetes, and public cloud connectivity have to be composed manually. I’ve heard horror stories where it took users weeks to compose these environments. If an application has to be deployed on top, it could even take months. Gartner predicts that 70% of organizations will implement structured infrastructure automation by 2025, up from 20% in 2021. I question the 20%; we have seen domain-level automation (automation of just Equinix Metal servers, for example), but we have not seen any evidence of automation of entire environments that need compute, storage, networking, K8s, and connectivity.

Orchestration as a Solution

Edge MultiCloud Networking orchestration provides zero-touch automation for deploying, configuring, and managing the cloud edge along with public cloud connectivity. This allows workloads and resources to be shared seamlessly across the cloud edge and the public cloud.
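One way to picture such orchestration is a declarative topology provisioned in dependency order. The sketch below is entirely hypothetical: the resource names loosely mirror the digital products discussed in this post, but the structure and API are ours, not AES's.

```python
# Illustrative sketch of zero-touch composition: declare the target
# environment (compute, connectivity, Kubernetes) with dependencies and
# resolve a valid provisioning sequence, instead of building it by hand.

def provision_order(topology):
    """Depth-first resolution of declared dependencies into a sequence."""
    done, order = set(), []
    def visit(name):
        if name in done:
            return
        for dep in topology[name].get("depends_on", []):
            visit(dep)
        done.add(name)
        order.append(name)
    for name in topology:
        visit(name)
    return order

topology = {
    "metal":   {"type": "bare-metal compute"},
    "fabric":  {"type": "dc-to-cloud connectivity", "depends_on": ["metal"]},
    "express": {"type": "public-cloud circuit", "depends_on": ["fabric"]},
    "k8s":     {"type": "kubernetes", "depends_on": ["metal", "express"]},
}
print(provision_order(topology))  # → ['metal', 'fabric', 'express', 'k8s']
```

An orchestrator built around this idea turns the weeks of manual composition described above into a single declarative request, and the same graph supports fault isolation and roll-back by replaying it in reverse.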

Here is an example of how this orchestration would work in real life:

Edge Multicloud Networking

Aarna Edge Services (AES)

The AES SaaS offering provides initial orchestration and ongoing management of topologies such as the one shown above. It features an easy-to-use GUI that can slash weeks of work into less than an hour. In case of a failure, AES includes fault isolation and roll-back capabilities. The first version of AES supports the following digital products (with more to come):

  • Equinix Metal
  • Equinix Fabric
  • Equinix Network Edge (Cisco C8000V)
  • Azure Express Route
  • Pure Storage (coming soon)
  • AWS Direct Connect (coming soon)

Please see the demo and feel free to request a trial.