Aarna

Join Aarna.ml at the Linux Foundation Networking Developer & Testing Forum - Jan 2022

The Linux Foundation Networking Developer & Testing Forum is being held from Jan 10-13, 2022. In this event, various LFN project technical communities will demonstrate and present their progress; discuss project architecture, direction, and integration points; and explore possibilities of further innovation through the open source networking stack (Register and explore the schedule). Aarna.ml is participating in four discussions. Below is a brief description of these sessions:

1. ONAP: CDS Error Handling in Production deployments

Time - 11th Jan, 2022, 19:30 - 20:00 IST

Speakers - Vivekanandan Muthukrishnan and Kavitha P.

Description - Handling various error scenarios in a real-life production deployment of CDS.

Slides & Recording

2. EMCO: Orchestration of Magma

Time - 13th Jan, 2022, 18:30 - 19:00 IST

Speakers - Yogendra Pal and Rajendra Prasad Mishra

Description - In this session, we will show how Magma core (Access Gateway & Magma controller/orchestrator) can be deployed on the Kubernetes cluster.

Slides & Recording

3. EMCO: Service Upgrade/Update using GUI

Time - 13th Jan, 2022, 19:00 - 19:30 IST

Speakers - Sandeep Sharma and Vikas Kumar

Description - In this session, we show how a network service consisting of multiple cloud-native functions can be updated or upgraded using EMCO GUI (which can also be done using the REST API).

Slides & Recording

4. EMCO: Multidomain Orchestration using Terraform & ONAP CDS

Time - 13th Jan, 2022, 19:30 - 20:00 IST

Speakers - Vivekanandan Muthukrishnan and Oleg Berzin

Description - In this session, we will show how EMCO can be integrated with other open-source projects (Terraform, the Camunda workflow engine, and ONAP CDS) to perform multidomain orchestration of cloud-native functions.

Slides & Recording

Sriram Rupanagunta

NWDAF Rel 17 Explained - Architecture, Features and Use Cases

The 5G System is expected to be AI-capable for optimal allocation and usage of network resources. The analytics functionality of the 5G system is separated from the other core functions to ensure better modularisation and reach. The Network Data Analytics Function (NWDAF), with its 3GPP-compliant interfaces, provides data analytics for the 5G Core. NWDAF Rel 15 did not see much adoption because the required data was unavailable and the 3GPP specifications for NWDAF were not fully defined. Now, with 5G deployments kicking off and 3GPP standardizing all the necessary specifications, a complete implementation of NWDAF is possible. The NWDAF architecture is defined in 3GPP TS 23.288, and the detailed specification, including the APIs, is defined in 3GPP TS 29.520.

Release 17 specifies two separate NWDAF logical functions:

  • Analytics Logical Function (AnLF)
  • Model Training Logical Function (MTLF)

NWDAF is responsible for the data collection and storage required for inference, though it can delegate these tasks to other functions. A typical NWDAF use case consists of one or more machine learning models. Building a machine learning model is an iterative process: data scientists experiment with different models and different data sets, and even after deployment the model requires constant monitoring and retraining. A typical use case involves many ML models, often fed with overlapping data.


Fig 1: NWDAF Architecture

NWDAF is different from other NFs in the 5G Core because -

  1. Requirement of retraining: once an NF is deployed, we do not expect its behaviour to change. An ML model is different; it is tightly coupled to the data it was trained on, so if data patterns change from trial runs to the actual environment, the model may behave differently. This is called data drift.
  2. Requirement of historical data: NFs need only the current state of the machine, whereas an ML system analyses historical data to derive future values.
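A minimal data-drift check makes the first point above concrete. Comparing the live feature mean against the training-time mean is a deliberately simple heuristic of our own; production systems typically use statistical tests such as Kolmogorov-Smirnov or PSI.

```python
# Sketch of a data-drift check: flag drift when live data departs from the
# distribution the model was trained on. Thresholds and samples are made up.
from statistics import mean

def drift_detected(train_sample, live_sample, threshold=0.2):
    """Flag drift when the live mean departs too far from the training mean."""
    return abs(mean(live_sample) - mean(train_sample)) > threshold

train = [0.50, 0.52, 0.48, 0.51]
assert not drift_detected(train, [0.49, 0.51, 0.50])  # same pattern: no drift
assert drift_detected(train, [0.90, 0.95, 0.92])      # pattern changed: drift
```

When drift is detected, the model is retrained on fresher data rather than redeployed as-is.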

NWDAF is an API layer that provides a standard interface through which other network elements obtain analytics. The consumer can be an NF, OAM, or AF, and consumers can subscribe to analytics through NWDAF. Each NWDAF is identified by an Analytics ID and an Area of Interest; a single NWDAF can also serve multiple Analytics IDs. Because the data sets are huge and shared across different NWDAF ML models, care must be taken to avoid duplicating the effort of collecting and storing data. The Area of Interest is the geographical area that an NF belongs to; since a UE is mobile, it can move from one Area of Interest to another.
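A subscription to NWDAF analytics might look like the following sketch. The field names loosely follow the Nnwdaf_EventsSubscription service of 3GPP TS 29.520, but treat the exact schema, the repetition period, and the URI as illustrative assumptions rather than a verified payload.

```python
# Illustrative NWDAF analytics subscription payload (field names are an
# assumption modeled on 3GPP TS 29.520, not a verified schema).

def build_subscription(analytics_id, area_of_interest, notification_uri):
    """Build a subscription request for one analytics event."""
    return {
        "eventSubscriptions": [{
            "event": analytics_id,                        # e.g. "NF_LOAD"
            "extraReportReq": {"repetitionPeriod": 60},   # seconds
            "networkArea": {"tais": area_of_interest},    # area of interest
        }],
        "notificationURI": notification_uri,  # where NWDAF posts analytics
    }

sub = build_subscription(
    "NF_LOAD",
    [{"plmnId": {"mcc": "001", "mnc": "01"}, "tac": "0001"}],
    "http://consumer.example/notify",
)
```

The consumer (NF/OAM/AF) would POST this body to the NWDAF serving the matching Analytics ID and Area of Interest.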

Features of NWDAF -

1. Aggregation
Fig 2: Aggregation supported by NWDAF


There are different types of Aggregation that NWDAF can do -

●     Aggregation based on Area of Interest -

Each NWDAF serves its own Area of Interest. In some cases the analytics consumer requires a larger area; in the example above, the consumer requires three Areas of Interest. One NWDAF can act as an aggregator by collecting data from the NWDAFs associated with the other Areas of Interest and sending a single aggregated result to the consumer.

●     Aggregation based on Analytics -

One analytics use case can be built from other use cases. In the example above, the NWDAF with Analytics ID 3 combines the outputs of the NWDAFs with Analytics IDs 1 and 2 by means of aggregation logic. This kind of aggregation is called Analytics Aggregation.

2. Analytics Subscription Transfer

One NWDAF can transfer subscriptions to another NWDAF. For example, suppose an analytics consumer is receiving analytics data about a UE through the NWDAF associated with a particular Area of Interest. If the UE moves to another Area of Interest, the first NWDAF transfers the consumer's subscription to the NWDAF associated with the new Area of Interest, which then continues sending analytics data to the consumer. This also comes in handy when an NWDAF undergoes a graceful shutdown or performs load balancing.
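The hand-over logic can be simulated in a few lines. The class and method names below are hypothetical; they only mirror the transfer of a subscription record from the old serving NWDAF to the new one.

```python
# Minimal simulation of analytics subscription transfer between two NWDAF
# instances (e.g. when a UE moves to a new Area of Interest). All names are
# illustrative, not 3GPP-defined identifiers.

class Nwdaf:
    def __init__(self, area_of_interest):
        self.area = area_of_interest
        self.subscriptions = {}          # subscription_id -> consumer URI

    def subscribe(self, sub_id, consumer_uri):
        self.subscriptions[sub_id] = consumer_uri

    def transfer_subscription(self, sub_id, target):
        """Hand a subscription over to the NWDAF serving the new area."""
        consumer = self.subscriptions.pop(sub_id)
        target.subscribe(sub_id, consumer)

src = Nwdaf("area-1")
dst = Nwdaf("area-2")
src.subscribe("sub-42", "http://consumer.example/notify")
src.transfer_subscription("sub-42", dst)   # UE moved from area-1 to area-2
```

After the transfer, the consumer keeps receiving notifications without having to re-subscribe, which is exactly what makes this useful during graceful shutdown or load balancing.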

MLOps - the complete picture

MLOps comprises the practices for deploying and maintaining machine learning models in production networks. The word "MLOps" is a compound of "machine learning" and "DevOps". It includes the following components, which are also the prerequisites for building an NWDAF platform:

●     Configuration Module

●     Data Collection Module on the Core/Edge (should be as per 3GPP standards)

●     Long-Term Data Collection Module

●     Data Verification Module

●     Machine Resource Management

●     Feature Extension Module

●     Analysis Tools

●     Process Management Tools

●     Data Serving Module (should be as per 3GPP standards)

●     Monitoring Module

●     ML Code


Fig 3: MLOps Introduction

3GPP defines the following group of standard functions to support data analytics in 5G network deployments:

●     NWDAF-AnLF - Analytics Logical Function

●     NWDAF-MTLF - Model Training Logical Function

●     DCCF - Data Collection Coordination (& Delivery) Function

●     ADRF - Analytics Data Repository Function

●     MFAF - Messaging Framework Adaptor Function


Fig 4 : Complete Loop

The NF/OAM/AF acting as the analytics consumer requests analytics from the NWDAF, either directly or through the DCCF. NWDAF is divided into two functions, AnLF and MTLF. The Analytics Logical Function (AnLF) receives the analytics request and sends the response back to the consumer. AnLF requires the model endpoints, which are provided by the Model Training Logical Function (MTLF); the MTLF trains the model and deploys it as a model inference microservice. The AnLF also requires the historical data the model microservice needs for prediction, so it requests it from the DCCF (Data Collection Coordination and Delivery Function). The DCCF is the central point for managing all data requests: if another NF has already requested the same data and it is available, the DCCF sends it to the NWDAF directly; otherwise, the DCCF initiates a data transfer from the data provider. The actual transfer then happens between the MFAF (Messaging Framework Adaptor Function) and the ADRF (Analytics Data Repository Function), which stores the required historical data. The DCCF passes the data on to the NWDAF's AnLF, which requests a prediction from the model microservice, constructs the response in the 3GPP format, and passes the prediction back to the analytics consumer.
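The loop above can be sketched as a toy program. Every function here is a local stand-in stub (no real 3GPP messaging or model serving); the point is only to make the AnLF-to-MTLF-to-DCCF-to-model sequence concrete.

```python
# Toy sketch of the analytics closed loop. All functions, the canned ADRF
# history, and the response fields are illustrative assumptions.

def mtlf_get_model_endpoint(analytics_id):
    # MTLF trains and deploys the model; here the "endpoint" is a naive
    # predictor that averages the history it is given.
    return lambda history: sum(history) / len(history)

_DCCF_CACHE = {}  # data already collected on behalf of some consumer

def dccf_fetch_data(analytics_id):
    # DCCF reuses already-available data; otherwise it triggers collection
    # from the data provider (simulated here by a canned ADRF history).
    if analytics_id not in _DCCF_CACHE:
        _DCCF_CACHE[analytics_id] = [0.4, 0.5, 0.6]
    return _DCCF_CACHE[analytics_id]

def anlf_handle_request(analytics_id):
    model = mtlf_get_model_endpoint(analytics_id)   # model endpoint from MTLF
    history = dccf_fetch_data(analytics_id)         # historical data via DCCF
    prediction = model(history)                     # run inference
    # Construct a 3GPP-style response (field names are illustrative).
    return {"event": analytics_id, "prediction": round(prediction, 2)}

resp = anlf_handle_request("NF_LOAD")
```

A second request for the same Analytics ID would hit the DCCF cache and skip the data-provider transfer, which is the deduplication role DCCF plays in the real architecture.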

Data Collection -

The data collection for analytics by NWDAF happens in 3 levels -

●     For feature engineering, analysis, and offline training, data is collected over the long term and can be stored in a data lake or data warehouse.

●     The data required for online training (which is managed by MTLF) can be collected in the ADRF.

●     The data required by AnLF for model inference may come from the ADRF, NFs, or OAM. This data is shorter term, on the order of a few hours.

Model Serving & MTLF -

To understand MTLF we need to know what kind of model it serves. A model is basically code plus trained parameters, but for applications to use it we need to wrap it in a microservice, so that the analytics result is available to the application as an endpoint. Different frameworks are available for this, such as TF Serving (TensorFlow Model Serving), the TorchServe framework, Triton Inference Server (NVIDIA's framework), and Acumos AI.
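As a concrete illustration, a client on the AnLF side might call a served model over REST. The request body below follows the widely used TF Serving predict format (`{"instances": [...]}`); the host, port, and model name are assumptions for the sketch.

```python
# Sketch of calling a model-serving endpoint in the TF Serving REST style.
# The URL ("mtlf.example") and model name ("nf_load") are hypothetical.
import json

def build_predict_request(model_name, samples):
    """Build the URL and JSON body for a TF-Serving-style predict call."""
    url = f"http://mtlf.example:8501/v1/models/{model_name}:predict"
    body = json.dumps({"instances": samples})
    return url, body

url, body = build_predict_request("nf_load", [[0.4, 0.5, 0.6]])
```

In a real deployment the body would be sent as an HTTP POST, and the response would contain a `predictions` array that the AnLF reformats into the 3GPP analytics response.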

The following are the input formats that MTLF accepts:

●     ML code - for online training

●     Saved models - include both code and trained parameters; this is the most popular way to share pretrained models

●     Container images

Model Monitoring and Feedback -

An ML model's performance may decay over time, which can impact the performance of the system negatively, for example by over-allocating resources or degrading the user experience. Continuous self-monitoring and retraining are therefore required. Retraining with newer data can be managed by MTLF or outside the edge/core. In certain cases the ML model may need to be redesigned; MTLF needs to send a trigger to the model management layer when retraining within MTLF is no longer effective.

ML Pipeline - NWDAF Interaction


Fig 5 : ML Pipeline

The part in the cloud is the ML pipeline. To connect the cloud with the components at the edge, three interfaces are important (the ones drawn in blue). The first is the Model Deploy interface, which pushes the model from the cloud layer to MTLF. The second is the Model Feedback interface, which MTLF uses to send feedback to the upper layer. The third is the Pull Data interface, which sends the data for ML training to be stored in the data warehouse or data lake.

The data in the data warehouse should be easily accessible for experiments by data analysts, hence it is kept in the cloud. ETL/ELT is the step where data is extracted, transformed, and loaded into storage; in some cases, when the endpoint is a data lake, the data is loaded without any transformation. Data collection from the source is traditionally done in batches, though streaming is gaining traction, and NWDAF is designed so that streaming can also be used. Deciding which data to upload, and which data an ML experiment requires, is a complex subject: uploading all data can waste bandwidth, and the data protection regulations of the geography where the edge is located must also be respected. Hence this component should be designed very carefully.

Distributed System Platform -

Fig 6 : Distributed System Platform

The components of NWDAF are treated in a similar way as Network Functions by the underlying platform and they are installed in a distributed manner. The NWDAF should have minimal dependency on the underlying platform of the ecosystem. 

NWDAF SDK -

The platform provides SDK/Framework for developing AnLF & MTLF, this reduces coding effort for MLOps Engineers. The SDK framework should hide the complexity of the 3GPP standard from the developer. The SDK should also support advanced NWDAF features like aggregation, subscription transfer. 

3GPP Release 17 Timeline -

  • Q3 2021 - Architecture freeze expected

  • Q2 2022 - Stage 3 freeze expected, with detailed definitions and APIs

  • Q3 2022 - Protocol code freeze expected

Amar Kapadia

AWS Private 5G — Battle Stations or Meh?

Amazon shook the wireless networking industry last week with their disruptive managed Private 5G offering announcement. Though their statement is short on details and is likely to be heavy on marketing and light on implementation, it is nevertheless a very important milestone for the industry.

WHAT DOES THE ANNOUNCEMENT VALIDATE?

The announcement validates a number of important points:

  • Cloudification is now a given: A no brainer—Private 5G is going to be completely software driven with the 5G radio being the only non-cloud hardware component.
  • Private 5G is real: I talk to some people who question the need for  Private 5G and hypothesize that Private 5G is years out. Apparently not so, given the push by the most dominant cloud provider in the world.
  • 5G pulls edge computing: It appears that Amazon is betting that Private 5G will massively increase the demand for AWS infra for edge computing applications (MEC). AWS is not interested in Private 5G per se; the real prize is the demand for AWS infra.
  • Management plane is the key: The management plane is vital to the solution. The entire 5G + edge computing solution is going to be software (see point#1) and will be spread across 1,000s or 10,000s of edge locations. Also, the environment will change dynamically through DevOps. All of this means that the stress on the management plane will go up exponentially, and Private 5G differentiation will center around the management plane as opposed to the data plane.
  • Wifi like pricing: Perhaps the most important aspect of the announcement is the simplicity of pricing, very much akin to Wifi. This is the way it should be. Why should Private 5G pricing be linked to public mobile network metrics such as the number of connected devices?
  • Managed services is the future: This announcement is further validation that enterprises are not interested in building or managing IT infrastructure. They would rather have someone else take care of it.

WINNERS and LOSERS

I think telcos will be negatively impacted by this announcement. A number of analysts are taking the view that telcos need not be terribly concerned about the AWS announcement because it's too lightweight, or that ultimately AWS will be dependent on telcos to sell their offering. In fact, recent news states "Verizon CEO Hans Vestberg not worried about AWS private 5G." Hmmm… I am not convinced about this bravado.

I think if telcos are not sounding the alarm bells internally, they are going to be sitting ducks. Yes, I think telcos and hyperscalers can collaborate on public networks. But on private networks, I think it’s an either-or situation. Sure, telcos have licensed spectrum and they have the ability to create interesting offerings like PNI-NPN (public network integrated non public network). But I don’t think these features are big enough in the long term to be a substitute for innovation. The only way for telcos to block hyperscalers is to innovate quickly and get interesting Private 5G solutions out in 2022.

Another group that will be impacted is 5G data plane vendors. Offerings like AWS's, along with SDO efforts and open source projects, will increasingly standardize the data plane.

The ultimate winners are enterprise customers. The offering, should AWS pull it off, is a step in the right direction.

WHAT IT MEANS FOR AARNA

The AWS announcement validates our strategy in a fairly substantial way. The management plane is clearly the critical control point in the AWS announcement.

However, I think AWS is underestimating the problem of multi-domain orchestration. Once the solution goes beyond being a “starter-kit”, it will require integration with radios, SD-WAN, firewalls, edge computing applications, transport networks, heterogeneous multi-clouds, and more.

Aarna leverages open source that enables us to incorporate the learning from numerous customers and use cases, and our packaging will allow users to consume the technology with ease.

Solving this difficult problem is our strength, and this is where we can shine. I am hoping the AWS announcement is the gravity assist that we can use to propel us into orbit in 2022! Stay tuned for more...

Aarna

Aarna.ml Introduces 5G and MEC Tech Talk Video Series

5G and Edge Computing enable a wide range of use cases from Industry 4.0, healthcare, precision agriculture, to connected cars. These use cases demand low latency, reliability, mobility, low power, or high bandwidth along with a multi fold increase in the number of connected devices. And 5G is the ideal network to connect end-devices to edge computing applications that will enable these use cases.

However, 5G networks include a number of new innovations that the community may not be familiar with. While there is a ton of marketing material on these technologies, it is very difficult to find insightful technical material short of reading the standards. We at Aarna.ml have launched a series of Tech Talk videos in which our subject matter experts explain these new 5G and edge computing terms in roughly 5 minutes.

We will cover topics like NWDAF, NonRTRIC, CSMF, NSMF, NSSMF, O-RAN SMO, O-RAN interfaces, MEC orchestration, traffic steering, local breakout and many more. We will ask relevant questions on each topic and let our experts explain these topics in video interviews.

Subscribe to our Youtube channel along with the Tech Talk Playlist and stay up-to-date with the latest 5G and MEC terms.

Bhanu Chandra

Demonstration of NWDAF with NF Load Use Case

NWDAF (Network Data Analytics Function) is a network function in the 5G Core. As 5G requirements grow, the huge number of cell sites and connected devices leads to a surge in demand for 5G bandwidth. NWDAF provides analytics to 5GC NFs and to OAM. It uses standard 3GPP interfaces for centralized data collection and offers analytics through subscription or on request. NWDAF collects data from various sources, such as the NFs of the 5GC (AMF, SMF, UPF, etc.), Application Functions (AF), and OAM. After collecting the metrics, it analyses and correlates the data and provides the analytics via 3GPP-specified APIs.

Analytics reporting information provided by NWDAF comes in two forms:

  • Statistics - NFs/OAM request analytics for a target period in the past.
  • Predictions - NFs/OAM request analytics for a target period in the future.
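The statistics-vs-predictions distinction reduces to whether the requested target period lies before or after the current time. The helper below is purely illustrative (the function name and inputs are not 3GPP-defined).

```python
# Tiny helper: the same analytics request is "statistics" when the target
# period ends in the past and "predictions" when it ends in the future.
from datetime import datetime, timedelta, timezone

def reporting_type(target_end, now=None):
    """Classify an analytics request by where its target period ends."""
    now = now or datetime.now(timezone.utc)
    return "predictions" if target_end > now else "statistics"

now = datetime(2022, 1, 1, tzinfo=timezone.utc)
past = now - timedelta(hours=2)      # -> "statistics"
future = now + timedelta(hours=2)    # -> "predictions"
```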

NWDAF retrieves network data from 5G NFs and management data from OAM:

  • AF data (possibly via NEF)
  • UE-related data from NFs
  • OAM global NF data


Fig 1 : NWDAF Architecture

The above diagram shows the logical components of NWDAF from Aarna.ml. The NWDAF Service API, as part of the 3GPP specs, includes the Analytics Info service and the Subscription service; the Subscription service provides data periodically. The Data Collector in NWDAF collects data from the NF/AF and pushes it into the further layers. The Data Transformation layer transforms the data before it reaches the database or data repository. Analytics - Historic is mainly for statistics and consists of the Analytics Application and the Analytics Algorithm. Analytics - Predictions produces the predictions and manages the AI/ML model catalogue, including the deployment model and the training model; our partner Predera helps us with the MLOps part. The outputs from NWDAF are then formatted by the Output Formatter, which finally notifies the subscribers.

Demo Modules consist of -

●       NWDAF : from Aarna.ml

●       NRF: from Free5GC

●       AF: is the consumer of the data from NWDAF

Explaining the NF Load Use Case -

●       NF Load use case: NWDAF provides NF load analytics of specific NFs in the form of statistics as well as predictions.

●       Input data sources are OAM and NRF.

●       Output analytics are the Network Function status (whether it is up and running or down), resource usage (CPU, memory, storage), and the NF load.

●       This helps with infrastructure scaling and gives insights into service experience. Based on NF load analytics, an autonomous network can decide to scale out and can obtain insights into the running status of the NF.
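The scale-out decision mentioned above can be sketched as a simple policy on the predicted NF load. The thresholds, field names, and replica logic below are made up for illustration; a real autonomous network would drive its orchestrator with far richer policy.

```python
# Hypothetical scale-out decision driven by NF load analytics. The 0.8/0.3
# thresholds and the analytics record fields are illustrative assumptions.

def desired_replicas(current, predicted_load, high=0.8, low=0.3):
    """Scale out above `high` load, scale in below `low`, else hold steady."""
    if predicted_load > high:
        return current + 1
    if predicted_load < low and current > 1:
        return current - 1
    return current

analytics = {"nfType": "SMF", "status": "REGISTERED", "nfLoad": 0.85}
replicas = desired_replicas(2, analytics["nfLoad"])   # 0.85 > 0.8: scale out
```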

 

Fig 2 : Demo Flow

In our demo, AMCOP from Aarna.ml is used for the deployment and configuration of all these NFs. It can deploy the network functions in multiple edge clusters; however, in this demo we deploy the NFs on a single edge cluster and monitor the interaction between them. The demo flow is explained below (with reference to Fig 2).

  1. Design and orchestrate NRF. NWDAF registers itself with NRF with the help of an NF register request.
  2. Design and orchestrate AF. AF discovers NRF through Day 0 configuration and obtains the NWDAF endpoints.
  3. AF queries the NWDAF AnalyticsInfo API to get NF_LOAD data.
  4. The NWDAF AnalyticsInfo API implementation internally calls the CPU prediction model to get the CPU predictions.
  5. NWDAF creates the output responses as per the 3GPP standards.
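Step 1, the NF register request, might be built as follows. The profile fields are modeled on the common NRF NFProfile shape (3GPP TS 29.510), but treat the exact schema, URL, and service name as assumptions for this sketch.

```python
# Sketch of the NF register request NWDAF sends to NRF (step 1 above).
# Host, port, and service names are hypothetical.
import json
import uuid

def build_nf_register(nf_type, services):
    """Build the URL and NFProfile body for registering an NF with NRF."""
    nf_instance_id = str(uuid.uuid4())
    profile = {
        "nfInstanceId": nf_instance_id,
        "nfType": nf_type,                 # "NWDAF" in this demo
        "nfStatus": "REGISTERED",
        "nfServices": [{"serviceName": s} for s in services],
    }
    url = f"http://nrf.example:8000/nnrf-nfm/v1/nf-instances/{nf_instance_id}"
    return url, json.dumps(profile)        # sent as an HTTP PUT

url, body = build_nf_register("NWDAF", ["nnwdaf-analyticsinfo"])
```

Once registered, the AF can discover NWDAF through NRF (step 2) and then query the AnalyticsInfo endpoint it obtains.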

To see the demo, please watch the video.

Try out the latest release of AMCOP with a free trial. For further questions, please contact us.

Aarna

Join Aarna.ml at the Linux Foundation Open Networking and Edge Summit

The Linux Foundation Open Networking & Edge Summit, or ONE Summit, virtual event will be held from October 11-12. The event promotes end-to-end connectivity solutions powered by open source and will showcase collaborative efforts that are defining the future of networking and edge computing. We are participating in two panel discussions, three presentations, and three demos (explore the schedule). Below is a brief description of the various sessions Aarna.ml is participating in.

1. Keynote Address on Network Slicing for the 5G Super Blueprint (11th October, Monday 9:00 AM PDT)

Recording here

Slides here

In this keynote address, we will discuss the state of network slicing for the 5G Super Blueprint and show a demo of video streaming through two slices, each with a different bandwidth service level agreement.

2. Complex Infrastructure Design and Multi-Domain Orchestration using EMCO and ONAP CDS (12th October, Tuesday 5:00 PM PDT)

Recording & Slides here

In this panel discussion jointly with Equinix, we will cover the Public Cloud Edge Interface – Akraino Blueprint.

Edge computing requires orchestration, management, and automation across multiple domains. These tasks can quickly become complex, especially in a cloud-native environment with a large number of cloud-native network functions (CNFs) or cloud-native applications (CNAs). The LF Edge Akraino Public Cloud Edge Interface blueprint solves one such use case, where the tasks range from edge cloud infrastructure deployment, edge cloud application orchestration, public cloud application orchestration, and edge-cloud-to-public-cloud network orchestration to the registration of edge cloud applications with the public cloud applications. In this presentation and demo, we will show how two Linux Foundation Networking projects, EMCO (a new addition to LFN) and ONAP CDS, can be used to automate the various tasks required for end-to-end provisioning.

The purpose of the Public Cloud Edge Interface (PCEI) blueprint is to develop a set of open APIs, orchestration functionalities, and edge capabilities for enabling multi-domain interworking across the operator network edge, the public cloud core and edge, and the third-party edge, as well as the underlying infrastructure such as data centres, compute hardware, and networks. In the Infra Design and Multi-Domain Orchestration demo, we will deploy EMCO 2.0, CDS, and CBAs; publish the cluster info, application Helm charts, and Terraform plans to Git; design the edge and public cloud infrastructure; and then provision the infrastructure via CDS/Terraform, be it bare metal (Equinix Metal), Kubernetes on bare metal, or Azure cloud. Thereafter we will interconnect the edge cloud with the public cloud and finally deploy the edge application, with dynamic Kubernetes cluster registration to EMCO and dynamic onboarding of the application Helm charts to EMCO.

3. ONAP for Enterprise Business (11th October, Monday 3:40 PM PDT)

In this joint panel discussion with AT&T and Ericsson, we will cover the role of ONAP for Network Automation, RAN Virtualization and acting as an enabler for Enterprise Market:

Over the last four years, the ONAP community has been collaborating with the industry through cross open-source community engagement and cross-influences with SDOs. We have developed impactful features like the 5G footprint, network slicing, and O-RAN integration. We continuously add more cloud-native, modular capabilities and robust integration, and we are implementing ONAP "best practices" in our source code to offer scalable, reliable, and secure production-ready deployments. All this makes ONAP a true enabler for innovation. While ONAP has primarily been used by network service providers to support their network automation transformation and RAN virtualization journey, the ONAP community also recognizes the value of ONAP in the enterprise/vertical markets. A new TSC task force was therefore created on January 20th, 2021 to define ONAP's added value for enterprise business. This session will share the role of ONAP in the "5G Open Source Stack Initiative" recently launched by Linux Foundation Networking, codenamed the "5G Super Blueprint". We will explain how the ONAP platform will interact with the Magma open-source platform and with other 5G Super Blueprint components.

We will explain the roadmap of ONAP right from defining Magma Controller CNF and AGW VNF using ONAP Integration to the orchestration of Magma Controller and AGW CNFs using ONAP, Network Slicing Optimization, and many more. We will explain the 5G Open Source Stack Initiative and dive into the applicability of ONAP for Orchestration and Life Cycle Management, Cloud-native Modularity, 5G Network Slicing, supporting ORAN SC SMO and Control Loop Automation. We will explain the compliance of ONAP with Anuket which delivers a common model, standardized reference infrastructure specifications. ONAP / Anuket integration leads to the deployment of ONAP using Kuberef. We will explain the architecture and key points of Magma and then delve into the scope of integration of ONAP with Magma.


4. AI/ML, Data Analytics, Closed Loop Control (12th October, Tuesday 5:00 PM PDT)

The O-RAN Alliance and 3GPP specifications call for analytics functionality to be implemented in the Service Management & Orchestration (SMO) and Operations Administration & Management (OA&M) software. Though the functions span different domains, O-RAN and 5G Core, they have a striking resemblance in what is required of them. In this session, we will show how various components of ONAP DCAE can be used to implement the functionality of NONRTRIC and NWDAF in an architecturally unified manner. We will show what external glue logic (such as the LF AI Acumos project) is needed to build these features and what interfaces are expected from the 5G Core and RAN functions (which can be commercial or open-source).

5. Multi Kubernetes Cluster Networking, Mesh and Application Orchestration in Open Source Way (11th October, Monday 4:20 PM PDT)

Traditional hybrid cloud deployments worked fine with several discrete control planes, one for each functionality – SDN controller for switch & transport automation, individual cloud orchestrators for application deployments, few security control planes for threat security, service assurance central platform for analytics and observability, and many more. As Enterprises adopt multi-cloud and multi-edge strategies for their application deployments that are on-demand and sometimes ephemeral, the industry sees the need for a universal control plane that automates infrastructure in an end-to-end fashion in tune with the application lifecycle. The newly introduced Linux Foundation Networking EMCO project is one such open-source initiative that is striving to address the needs of a universal control plane. In this panel, we will reiterate enterprise requirements, introduce EMCO, its features, the need for open-source, and how it addresses the lock-in issues that enterprises care about.

6. Cloud Native 5G Blueprint Network Slicing Demo

The LF Networking community has enhanced the 5G Super Blueprint foundation by adding network slicing to the 5G cloud-native network demo. This POC is based on the ONAP Honolulu release and demonstrates an open-source approach to improving QoS in 5G networks: by optimizing network resources and topologies driven by 5G use cases, MNOs can achieve improved performance and greater flexibility. The demo will also showcase a custom Network Slice Subnet Management Function (NSSMF) that was developed as part of this effort.

See the demo here.

7. LF Edge Akraino Private LTE/5G ICN Blueprint Demo

In this demo, we will showcase the integration of various open-source projects under LF and CNCF to realize an E2E solution for private-5G connectivity. The focus would be automation required to provide 5G connectivity in multiple sites by deploying 5G network services with multiple CNFs, life cycle management of 5G services, and connectivity & security automation. It uses EMCO for automation, Akraino/ICN platform for Edges/Sites with Edge/Telco K8s extensions, and Magma/free5GC for 5GC and simulated 5GRAN.

See the demo here.

8. EMCO + Magma Demo

In this demo, we will show how the new LFN Edge Multi-Cluster Orchestration (EMCO) project can be used to deploy Magma Orchestrator and Access Gateway (AGW) on a Kubernetes cluster with appropriate Day 0 configurations. The benefit of this integration to users is to reduce the manual and error-prone tasks required to deploy Magma. The demo will also talk about the roadmap where Day 1, 2 configurations, lifecycle management, network slicing, and service assurance can all be automated.

See the demo here.

I hope you can join us!