Aarna.ml

Brandon Wick

Transforming Industries with Generative AI

In the fast-paced world of technology, Artificial Intelligence (AI) has emerged as a game-changer for industries seeking a competitive edge. Generative AI in particular – algorithms (such as ChatGPT) that can be used to create new content – has brought the exciting world of AI to the masses and opened up new frontiers of innovation.

No matter how sophisticated the algorithm, however, Generative AI tools are only as good as the data they process. All else being equal, the higher the volume of high-quality data consumed, the more insightful and powerful the results.

To truly make use of Generative AI, companies need to run Large Language Models (LLMs) on their own data, and we believe the cloud edge in a private cloud is an ideal place to collect data and run AI/ML algorithms for business intelligence. This approach ensures faster response times, heightened privacy, and unparalleled efficiency.

But deploying a GenAI foundation model from scratch is cumbersome and time-consuming (the stack must be installed, configured, upgraded, and managed), and requires AI talent that few organizations have. Using a public LLM is also fraught with security and privacy risks that are not yet fully understood.

Generative AI Solution from Aarna Networks and Predera

To meet this pressing need, Aarna.ml has partnered with Predera, a leading provider of AI-driven solutions for businesses, to offer a Generative AI solution to the industry: a secure, private, fully managed LLM in which users choose which model to leverage and customize (LLaMA, Dolly, or NeMo), with Predera’s AIQ MLOps platform toolstack and Aarna.ml’s AMCOP providing zero-touch orchestration, configuration, and management of upgrades and updates.
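As a purely illustrative sketch (none of these names come from Aarna's or Predera's actual APIs), the "choose your model" step of such a managed service can be reduced to a simple, validated selection that drives the rest of the deployment:

```python
# Hypothetical sketch of selecting a customizable open-source LLM backend.
# All class/function names here are invented for illustration only.

SUPPORTED_MODELS = ("llama", "dolly", "nemo")

def select_backend(model_name: str) -> dict:
    """Return a deployment spec for the chosen open-source model."""
    name = model_name.lower()
    if name not in SUPPORTED_MODELS:
        raise ValueError(f"unsupported model: {model_name!r}")
    # In a real deployment, this spec would drive orchestration
    # (install, configure, upgrade) of the private model stack.
    return {"model": name, "private": True, "managed": True}

spec = select_backend("LLaMA")
```

The point of the sketch is only that the model choice is a declarative input; the orchestration layer does the heavy lifting from there.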

We offer a streamlined onboarding process and dedicated support to tailor the solution to meet your unique business needs and goals. 

To explore the transformative impact of Generative AI through the Fully Managed Private LLM service, read the news, see our Solution Document, and contact us for a free consultation.

Brandon Wick

Aligning rapidly emerging industry needs with Aarna.ml software and SaaS solutions

In today's rapidly evolving digital landscape, the demand for orchestration and management of complex heterogeneous environments, edge services, and 5G networks has skyrocketed, prompting network operators to seek new approaches and innovative solutions.

Aarna’s software and SaaS solutions leverage open source, cloud native, and DevOps methodologies to provide zero-touch edge and 5G service orchestration and management for a wide variety of requirements and use cases. We've recently updated each solution area on our website to better reflect how our innovation and leadership meet industry needs. We invite you to learn how our solutions can create value for your organization:


  • Private 5G: Open new horizons by delivering dedicated and customizable 5G networks. With its end-to-end automation and orchestration capabilities, Aarna.ml empowers businesses to build and operate private 5G networks tailored to their specific requirements.
  • Edge Multicloud Networking: Connect applications and data across multiple clouds and the edge. By automating operations and managing network resources in real time, Aarna is simplifying IT operations, improving performance, and reducing costs for businesses.
  • Cloud-Adjacent Storage: Host storage for state information and specific workloads on-prem or at the network edge. By integrating on-prem storage resources with cloud-based storage platforms, Aarna.ml optimizes data accessibility, simplifies management, and reduces storage costs.
  • Cloud Edge Machine Learning: Explore use cases including LLM/GenAI, smart cities, computer vision, AR/VR, and more. By bringing AI and ML capabilities closer to the edge of the network, Aarna.ml facilitates the deployment of machine learning models in real time, improves data analysis and decision-making, and enables enterprises to leverage generative AI.
  • O-RAN SMO: Manage disaggregated, multi-vendor RAN environments. By leveraging open standards and interoperable components, Aarna's O-RAN SMO allows enterprises to choose best-of-breed network functions for validation and interoperability testing.
  • Telco Cloud: Embrace the power of cloud native technologies. By leveraging containerization and microservices architectures, Aarna.ml empowers network operators to orchestrate infrastructure using private cloud, hyperscalers, and hybrid multicloud.

We’d be glad to discuss your specific needs and how we can help you achieve your goals. Contact us today.

Brandon Wick

Learn the 5 Edge Native Application Principles

“Cloud Native” computing represents a paradigm shift for the computing ecosystem that continues to spur innovation and drive new business models. Edge computing, intricately woven into the cloud and driven by use cases like multicloud networking, cloud-adjacent storage, and cloud edge machine learning, has taken this concept to the enterprise edge. But edge computing environments have compute, connectivity, storage, and power constraints, necessitating new approaches and a new set of "Edge Native" principles.

The edge native infographic below highlights the work of the IoT Edge Working Group to explore what it means to be edge native, examine the differences and similarities between “Cloud Native” and “Edge Native”, and offer an initial set of principles for the industry to interpret, apply, and iterate upon. You can get the full whitepaper here. The group is now working on an Edge Native Application Design Behaviors whitepaper. Stay tuned!

Sriram Rupanagunta

Demystifying Orchestrators: Navigating the Landscape of Management and Orchestration Solutions

We often tell our stakeholders that orchestration is a general term, and we need to clarify the different types of orchestrators available to apply in their network architecture.

In this blog, I will give an overview of various aspects of Orchestrators, since not all are created equal. 

This can help decision makers choose what works for their organizational needs. 

So what do orchestrators do? In short, they perform functions such as:

  • Infrastructure Orchestration, which includes physical functions (servers, storage, networks etc.)
  • Cloud Orchestration, which includes tools that can spin up resources on clouds. All public clouds have tools available for this functionality. 
  • Network Service Orchestration, which includes network functions that are virtualized or containerized. These span telco domains such as the 5G Core, Radio Access Network (RAN), and fronthaul/backhaul networks.
  • Application Orchestration, which typically includes MEC (Multi-access Edge Computing) applications that run on edge clusters (e.g., CDN, computer vision, etc.)
  • All or some of the above can also be done using a single orchestrator

Next comes the scope of Orchestration functionality. This can be simplified into Day-0 or Day-N management. Some Orchestrators (typically the ones used on public clouds) usually perform only Day-0, which is to spin up the required resources for the operation of the function (Infrastructure, network functions, etc.). If any changes need to be made during the life of these resources (e.g., they need to be reconfigured, either manually or based on some configurable policies), there is no mechanism for doing so. The latter functionality is known as Day-N orchestration/management, also referred to as LCM (Life Cycle Management). 
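To make the Day-0 / Day-N distinction concrete, here is a minimal, hypothetical sketch (not tied to any real orchestrator; all names are invented). Day-0 spins up a resource from its initial spec; Day-N reconfigures it in place during its lifetime, which a Day-0-only orchestrator has no mechanism for:

```python
from dataclasses import dataclass, field

@dataclass
class ManagedResource:
    """Toy stand-in for an orchestrated resource (VM, CNF, cluster, ...)."""
    name: str
    config: dict = field(default_factory=dict)
    running: bool = False

class Orchestrator:
    """Hypothetical orchestrator illustrating Day-0 vs Day-N operations."""

    def __init__(self):
        self.inventory = {}

    def provision(self, name, spec):
        # Day-0: create the resource from its initial declarative spec.
        res = ManagedResource(name=name, config=dict(spec), running=True)
        self.inventory[name] = res
        return res

    def reconfigure(self, name, changes):
        # Day-N (LCM): change an already-running resource in place.
        res = self.inventory[name]
        if not res.running:
            raise RuntimeError(f"{name} is not running")
        res.config.update(changes)
        return res

orch = Orchestrator()
orch.provision("upf-1", {"replicas": 1, "mtu": 1500})   # Day-0
orch.reconfigure("upf-1", {"replicas": 3})              # Day-N
```

The key design point is that `reconfigure` operates on live inventory; an orchestrator that only implements `provision` would force you to tear down and recreate the resource for any change.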

This itself has two distinct functions -- monitoring and taking actions (open loop or closed loop). Either the Orchestrator performs both of these functions, or it relies on other tools/utilities to monitor, and can simply perform corrective actions, such as the reconfigurations. In the virtualized world (VNF/CNFs), the reconfiguration could also include healing, scaling-in, scaling-out, etc. 
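As an illustration only (thresholds and field names are invented, not any real product's API), one iteration of such a closed loop reduces to: observe a metric, evaluate it against a policy, and apply a corrective action such as scaling out or in:

```python
# Minimal closed-loop sketch: monitor -> policy check -> corrective action.
# All names and thresholds here are illustrative.

def closed_loop_step(metrics, policy, state):
    """Evaluate one control-loop iteration; return (action, updated state)."""
    load = metrics["cpu_load"]
    if load > policy["scale_out_above"]:
        state["replicas"] += 1          # corrective action: scale out
        action = "scale-out"
    elif load < policy["scale_in_below"] and state["replicas"] > 1:
        state["replicas"] -= 1          # corrective action: scale in
        action = "scale-in"
    else:
        action = "none"
    return action, state

policy = {"scale_out_above": 0.8, "scale_in_below": 0.2}
state = {"replicas": 2}

# Simulated metric samples arriving from a separate monitoring tool:
for sample in (0.9, 0.85, 0.1):
    action, state = closed_loop_step({"cpu_load": sample}, policy, state)
```

In an open-loop setup the same step would only report the recommended action for an operator to approve; in a closed loop it is applied automatically.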

Scalability is another important factor in choosing an orchestrator. If your use cases need to run on edge networks (the enterprise edge or cloud edge, also known as the new middle mile), the orchestrator needs to be far more scalable than one confined to a few public cloud locations and clusters. The number of network services and applications (and vendors) is also an order of magnitude larger in edge networks (thousands as opposed to tens).

There is another twist to this tale -- some Orchestrators may be vendor or cloud-specific, which work well for the proprietary vendor or cloud resources, but obviously cannot be used for other vendor products or clouds. So if your environment is multi-cloud/multi-vendor, this option may not be suitable for you.

Independent of the above (but somewhat related in reality) is the openness of the Orchestrator implementations -- whether they are closed source implementations (proprietary) or open source based. In theory, the Orchestrator could be working with proprietary target functions but could be open source based (or vice versa). But in practice, they go hand in hand. 

Lastly, some orchestrators may be specific to a particular domain (e.g., 5G Core, Transport or RAN). O-RAN Service Management and Orchestration (SMO), as the name implies, is an example of an Orchestrator that is specific to the RAN domain. 

There is another category of vendors who offer Cloud Networking, which are also sometimes referred to as Orchestrators. But their functions are typically meant for specific networking use cases such as SD-WAN, MPLS Replacement, etc. 

Regardless of their specific functions (which includes any of those mentioned above), there is another nuance to the way the solution is offered, which could be either as a Technology provider or a Service Provider. The former provides a level of customization that may be necessary for specific use cases or environments, whereas the latter (though well-packaged) is typically a fixed function. 

With so many variations and nuances, users need to think through their specific organizational needs (present and future) before deciding which solution to choose.

I hope this sheds some light on the topic and gives some clarity on how to go about choosing a vendor. At Aarna.ml, we offer AMCOP, an open source, zero-touch orchestrator (also offered as a SaaS, AES), for lifecycle management, real-time policy, and closed loop automation for edge and 5G services. If you’d like to discuss your orchestration needs, please contact us for a free consultation.

Brandon Wick

Aarna's participation in the Linux Foundation Networking Developer and Testing Forum

Aarna.ml participated in the recently concluded Linux Foundation Networking Developer and Testing Forum. The community gathered virtually June 6-8 and had the opportunity to collaborate to advance open source networking initiatives. Just like the previous virtual D&TFs, this was a gathering for community-generated sessions and discussions in order to address the challenges, opportunities, and future roadmaps for the projects and key initiatives. 

Aarna.ml participated in the following sessions:

The 5G Super Blueprint is an initiative to prototype and integrate open source software to address real-world use cases. It gives projects an opportunity to demonstrate their value to end users and gives end users an idea of how they could leverage the open source projects. During the February 2023 event, we had an initial brainstorming session to figure out how Nephio might fit into the existing blueprint work. Since then there has been some progress on the blueprint, and it is time to go to the next level of detail on leveraging Nephio's as well as ONAP's capabilities to enrich the blueprint.

Check the slides and video.

Aarna Speaker - Yogendra Pal

As part of 5G SBP efforts, Aarna has worked in the community to develop E2E network slicing of the core network and RAN using open source components. In this presentation, we demonstrated how to create an E2E network slice with cloud native network functions (CNFs) using EMCO. The Core Network (CN) was the open source Free5GC (v3.2.1), and the RAN was the open source UERANSIM (v3.2.6) with an integrated gNB.

Check the slides and demo video.

Aarna Speaker - Sandeep Sharma

As Nephio develops, the community is investigating its suitability for new network transformation use cases. For the forthcoming O-RAN SC plugfest, a brand-new project is being developed that investigates how O2 IMS can expose a cloud-native declarative interface and integrate with FOCOM in the Service Management and Orchestration (SMO) layer for administering the O-Cloud. This is applicable to both wireless and wired core IP networks using the IMS transport plane. In this talk, we discussed ideas for implementing and driving a declarative O2 IMS interface using Nephio design principles, and then the integration of FOCOM in the SMO layer with an IMS implemented using Nephio.

Check the slides and demo video.

We look forward to seeing you at the next LFN D&TF in fall 2023.

Cloud Edge Use Cases Part 3: Cloud Edge Machine Learning

In part 1 and part 2 of this blog series, we mentioned the three use cases for the cloud edge (or the new middle mile) – edge multicloud networking, storage repatriation, and cloud edge machine learning. We discussed both the edge multicloud networking and storage repatriation use cases in more detail as well.

In this blog, we will cover the remaining and perhaps the most exciting use case – cloud edge machine learning.

Today, by and large, users run machine learning inferencing or light training, such as for large language models (LLMs), on-prem or in the public cloud. Both approaches have pros and cons.

Cloud Edge ML: On-Prem vs. Public Cloud

What if there was a third approach, where the machine learning processing occurs at the cloud edge? See the blog titled “AI and 5G Are Better at the Edge” by Oleg Berzin and Kaladhar Voruganti, which addresses this scenario. While the blog links machine learning at the cloud edge to 5G, most of it applies to other access technologies as well. The authors cover three use cases (smart parking, AR/VR for predictive maintenance, and V2X) and make a compelling case for the cloud edge.

In my view, the benefits of performing machine learning processing at the cloud edge include:

  • The ability to process data close to where it gets produced; this location is not as ideal as on-prem from a data proximity point of view, but it is a lot better than the public cloud. For the use cases listed above and for computer vision applications, the cloud edge can be a boon.
  • Ease-of-use features on par with the public cloud.
  • OPEX model.
  • On-demand, pay for what you use.

The benefits are further magnified when we talk about generative AI:

  • By using open source models such as Llama or Dolly, the user retains full control over the model.
  • Zero probability of data leakage: one of the biggest fears with commercial LLMs is that a company’s intellectual property might leak into public models. By using a private model, sensitive IP cannot leak into the public domain.
  • Given that the cloud edge can be easily connected to a company’s private data by using a private link to their datacenter cage or through SD-WAN breakout (see figure below), a cloud edge LLM might have much easier access to sensitive data for training purposes than a private or public LLM running in a public cloud.
Cloud Edge ML Implementation

The above figure shows a Cloud Edge ML implementation with connectivity to the company’s on-prem locations over SD-WAN. The ML workloads could be LLMs like Llama or Dolly or computer vision ones such as NVidia Metropolis.

Aarna Edge Services (AES)

The AES SaaS offering provides machine learning at the cloud edge. It features an easy-to-use GUI that can slash weeks of orchestration work to less than an hour. In case of a failure, AES includes fault isolation and roll-back capabilities. A private beta is coming soon with support for:

  • Equinix Metal Servers with GPUs
  • Equinix Fabric & Network Edge with Azure ExpressRoute/AWS Direct Connect
  • Pure Storage
  • ML workloads:
      • NVidia Fleet Command + Metropolis, OR
      • Open source Llama LLM, OR
      • Open source Dolly LLM

Our partner Predera provides support and professional services for the ML workloads. Reserve your spot today to get on the private beta list as we will initially be enabling just a few users!