"
Aarna.ml

Resources

resources

Blog

Amar Kapadia

The Magic of Model Driven Design in ONAP.

In the previous ONAP blog in this series, we looked at a high-level view of what the new Linux Foundation Open Network Automation Platform (ONAP) project is all about, and what you can do with it. We also looked briefly at what are, in my view, its three key architectural principles: Model Driven, Cloud Native and DevOps. In this blog, we will discuss the principle of Model Driven design in more detail.

As the name suggests, ONAP is all about automation. Instead of creating network services and the OAM (operations, administration and management) mechanisms manually, which wastes effort and produces different results each time, a model driven approach uses templates or models to quickly assemble network services, OAM recipes and the underlying infrastructure. A model driven approach produces the same outcome every single time! There is another major benefit -- models do not require programmers, so a CSP does not have to hire an army of programmers to enable this automation.

The diagram below shows an analogy to better explain model driven design.

Instead of the manual, custom and non-repeatable design on the left, a model driven approach on the right uses pre-designed and tested components to quickly assemble a design that is automatically deployed in exactly the same manner each time. Additionally, models are service and vendor agnostic (unless, of course, they describe a specific VNF), making reuse even easier.

ONAP supports a model driven approach at three different levels -- 1) standard definition of models across all components of ONAP, 2) building blocks created by using these definitions and 3) a guided design studio to assemble these building blocks into a higher level design.  

Model definitions in ONAP include structural models such as products (created from one or more network services along with business data, e.g. pricing), network services (e.g. VoLTE, vCPE etc.), VNFs (e.g. a virtual firewall), modules (that make up a VNF), virtual function components (or VFCs, that make up a module), virtual links (VLs) that provide connectivity, ports and so on. Of course, new models can be defined, so this list is by no means exhaustive. ONAP also defines management data models such as VNF descriptors, licenses and engineering rules (e.g. only place VNF-a on a machine that supports DPDK), and recipe models such as policies, analytic microservices (e.g. string matching) or workflows (e.g. a network service upgrade workflow).
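
To make the structural hierarchy concrete, here is a minimal Python sketch (purely illustrative -- these class and field names are mine, not ONAP's actual schema): a product composes network services, which compose VNFs, which compose modules of VFCs.

    # Illustrative sketch of ONAP-style structural models (hypothetical
    # names, not the real ONAP data model).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VFC:                       # virtual function component
        name: str

    @dataclass
    class Module:                    # module within a VNF
        name: str
        vfcs: List[VFC] = field(default_factory=list)

    @dataclass
    class VNF:                       # virtual network function
        name: str
        modules: List[Module] = field(default_factory=list)

    @dataclass
    class NetworkService:            # e.g. VoLTE, vCPE
        name: str
        vnfs: List[VNF] = field(default_factory=list)

    @dataclass
    class Product:                   # network service(s) plus business data
        name: str
        monthly_price: float
        services: List[NetworkService] = field(default_factory=list)

    # Assemble a toy vCPE-like product from reusable, pre-defined pieces.
    firewall = VNF("vFirewall", [Module("fw-base", [VFC("packet-filter")])])
    vcpe = NetworkService("vCPE", vnfs=[firewall])
    product = Product("Business Internet", monthly_price=49.0, services=[vcpe])
    print(product.name, "->", [s.name for s in product.services])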

These standard models can be used to create building blocks such as populated product definitions, network services, policies, closed-loop automation recipes, APIs, analytic apps and so on. And finally the guided design tool allows a designer to combine different models to create network services, closed-loop automation and other higher level design artifacts.

In summary, the architectural principles of ONAP are indeed elegant. Depending on whom you talk to, they might highlight different principles; my top-three list is Model Driven, Cloud Native and DevOps. In this blog we saw how a model driven approach promotes extreme automation without having to write a single line of code. We will look at Cloud Native and DevOps in subsequent blogs.

Until then:

Sign up for our half-day ONAP100 in-person training course (offered jointly with Cloudify) in Santa Clara on November 10, 2017. This course will give you a complete picture of what ONAP is, its architecture and the 29 constituent projects in the upcoming Amsterdam release. There are early-bird discounts, as well as special discounts for CORD/TIP conference attendees.

If you are unable to make the class, you can also attend the "ONAP Overview: Navigating its Many Projects" webinar (jointly with Mirantis) scheduled for November 15, 2017.

Amar Kapadia

ONAP is a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions. 100% open source, part of Linux Foundation.

Network Functions Virtualization or NFV is a major transformation of telecom and cable operator networks where network services traditionally constructed with physical boxes and manual configuration/monitoring/management are changing to virtualized network functions (VNFs) with automated configuration/monitoring/management. Sounds like the holy grail, right? So why hasn't this happened in a broad sense?

I think there have been three major barriers:

1. Dataplane performance

2. Lack of cloud-optimized/native VNFs

3. Lack of network automation

Points 1 & 2 are outside the scope of this blog (maybe we can revisit some other day). Instead we will cover point 3.

While there have been many proprietary management and orchestration (MANO) solutions, they have been incomplete and lock customers into their software. Also, given their proprietary nature, these solutions have not had the force of standardization (for VNFs, analytic applications etc.) that comes with a broad open source solution.

Given these deficiencies, AT&T developed an in-house network automation solution called ECOMP. In parallel, China Mobile was working with vendors such as Huawei and others on an open source MANO project called OPEN-O. Fast forward: AT&T open sourced ECOMP and merged it with OPEN-O to form a new Linux Foundation project called the Open Network Automation Platform, or ONAP.

According to the ONAP website:

ONAP is a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions. 100% open source, part of Linux Foundation.

ONAP has strong momentum with 18 platinum members, out of which 7 are operators.

We still haven't answered what ONAP is: ONAP is a superset of ETSI MANO. At a high level, ETSI MANO's scope includes:

- NFV orchestration: Network service (NS) onboarding, lifecycle management, performance management, fault management and VNF forwarding graph management

- VNF management: VNF onboarding, lifecycle management, performance management, fault management and software image management

- Other: Security, VIM (virtualized infrastructure manager aka cloud software)/SDN controller

ONAP does everything required by the above list, but goes beyond by including:

- Unified design framework: The framework is used to onboard VNFs, design network services and recipes. More on recipes later.

- FCAPS: Fault, configuration, accounting, performance, security functionality.

The figure below shows the scope of ONAP and the other systems it interacts with, i.e. OSS/BSS/big data applications, NFVI/VIM and vendor-provided VNFs.

Providing the design framework and FCAPS goes a long way towards closing the gaps present in current MANO systems.

Switching gears, ONAP uses three relatively modern architectural principles (there are actually more; this is my view of the top three):

- Model driven

- Cloud native

- DevOps

In the next blog, we will look at each of these architectural principles.

Until then:

Sign up for our half-day ONAP100 in-person training course (offered jointly with Cloudify) in Santa Clara on November 10, 2017. This course will give you a complete picture of what ONAP is, its architecture and the 29 constituent projects in the upcoming Amsterdam release. There are early-bird discounts, as well as special discounts for CORD/TIP conference attendees.

If you are unable to make the class, you can also attend the "ONAP Overview: Navigating its Many Projects" webinar (jointly with Mirantis) scheduled for November 15, 2017.

Amar Kapadia

Amdahl’s Law and the Revenge of Private Clouds

Let’s face it -- those of us on the private cloud side of the house have not had a great few years. While public clouds have grown exponentially, private clouds have fallen significantly short of expectations. Vendors such as HPE, Cisco and Rackspace have all exited or massively scaled back their OpenStack efforts. And on the customer side, only telcos (where adoption is broad) and a handful of Fortune 500 types have adopted private clouds built on OpenStack.

Why is this? The answer lies in Amdahl’s Law:

S_latency(s) = 1/((1 - p) + p/s)

where

  • S_latency is the theoretical speedup of the execution of the whole task;
  • s is the speedup of the part of the task that benefits from improved system resources;
  • p is the proportion of execution time that the part benefiting from improved resources originally occupied.
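
To make the law concrete, here is a tiny Python sketch (the function name is mine, purely illustrative): speeding up 20% of a task by 1.8x yields only about a 1.10x overall speedup.

    # Classic Amdahl's Law: overall speedup when a fraction p of the task
    # is sped up by a factor s and the rest is unchanged.
    def amdahl_speedup(p: float, s: float) -> float:
        return 1.0 / ((1.0 - p) + p / s)

    print(round(amdahl_speedup(p=0.2, s=1.8), 2))  # -> 1.1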

I’m going to modify the law to bring it into a business context. We will look at the improvement to a business in response to a technology investment:

S_improvement(s_p) = 1/((1 - p)/s_l + p/s_p)

where

  • S_improvement is the theoretical business improvement;
  • s_p is the improvement to the part of the business that benefits from the investment;
  • s_l is the relative productivity of the part of the business not benefiting from the investment (s_l < 1 indicates degradation);
  • p is the proportion of the business benefiting from the investment.

Having developed an elaborate total-cost-of-ownership (TCO) model that compares public and private clouds, I can easily show a 30-60% cost reduction for private clouds. Let’s take the average: 45%. So if you can get 1 VM for $1 on a public cloud, you can get the same VM for $0.55 on a private cloud (or about 1.8 VMs per $1). So s_p = 1.8. That sounds pretty good, right? Yes, until you apply the modified Amdahl’s Law.

Let’s assume that IT/OPS costs are 20% while app development costs are 80% of the total IT budget. That means, p = 0.2.

Next comes the non-intuitive part: private clouds actually degrade app development productivity compared to public clouds, since they lack the gazillion services available on a public cloud, ranging from the mundane database to the fancy AI engine. Let’s assume the degradation is a modest 15%. So if it takes 1 developer per app on a public cloud, it takes 1 developer per 0.85 app on a private cloud, i.e. s_l = 0.85.

Crunching the numbers:

S_improvement = 1/(0.8/0.85 + 0.2/1.8) ≈ 0.95
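
As a sanity check, here is the modified law in Python with the numbers above (a minimal sketch; the function and parameter names are mine):

    # Modified Amdahl's Law for business improvement: p is the fraction of
    # the budget benefiting from the investment (speedup s_p); the rest
    # runs at productivity factor s_l (s_l < 1 means degradation).
    def business_improvement(p: float, s_p: float, s_l: float) -> float:
        return 1.0 / ((1.0 - p) / s_l + p / s_p)

    print(round(business_improvement(p=0.2, s_p=1.8, s_l=0.85), 2))  # -> 0.95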

So! Instead of improving the business, a private cloud investment is actually detrimental, looking at just the economics. This explains the current private cloud malaise. Net-net, people investing in a private cloud are doing it for reasons other than app developer productivity.

But that’s all about to change. Multi-access edge computing (MEC) and edge computing in general require private cloud technologies.

Will public cloud vendors win at the edge? I don’t think so. I equate public clouds to fancy restaurants: there’s complete chaos in the kitchen, but a soothing calm prevails in the customer-facing part of the restaurant. Imagine if the restaurant exposed the kitchen to customers -- that's tantamount to public clouds exposing their technology to customers. Net-net, despite Greengrass from AWS and IoT Edge from Microsoft, I don’t expect public clouds to win at the edge. And the early stalwarts of private clouds, aka big companies, are bruised and fatigued. Therefore it will be new open source projects, and the startups around those projects, that will win; plus, these new projects are unlikely to be OpenStack based.

Case in point: check out the Container4NFV project (formerly known as OpenRetriever, a project where I’m a committer) and specifically the Next Gen VIM scheduler requirements document. The document lays out requirements for a next-gen edge computing infrastructure platform. On behalf of all of us contributors, we’d welcome feedback!

In summary, while not terribly successful in the datacenter, I think private cloud technologies are going to be back with a vengeance in the edge computing use case.

Amar Kapadia

Free "Understanding OPNFV" Book

Have you been staying up at night wondering what the Open Platform for NFV (OPNFV) is? Or why you should care about it and what's in it for you? This short book (144 pages) explains OPNFV and its benefits in easy-to-understand language. You will also get a sense of the broader NFV transformation and why it's about more than just technology. You will learn about the various upstream projects integrated in OPNFV and how OPNFV contributes back to these projects. Next, you will be exposed to the sophisticated CI pipeline built by the OPNFV community and get an in-depth view of OPNFV's deployment and testing projects. Finally, you will get a brief overview of how to write and onboard VNFs for OPNFV.

If you are a technical or business leader at a telecom operator or enterprise looking to accelerate your NFV journey, this book is perfect for you. Technology providers, aka vendors, will gain the information they need to position their products in the OPNFV ecosystem. Finally, individuals looking to make a career change into the rapidly growing NFV space will get a great understanding of OPNFV.

Sign up for your free eBook copy here. The book is written by my good friend Nick Chase and me. Once you read it, I'd love to hear your feedback!

If you'd like a physical book instead, stop by the Mirantis booth at the OpenStack Boston summit or get one at the OPNFV Beijing summit. Or of course you can get a copy on Amazon in a month or two as well (but this will not be free).

Amar Kapadia

The OPNFV community's 4th release, Danube, came out roughly 2 weeks ago.

The OPNFV community's 4th release, Danube, came out roughly two weeks ago. This release integrates the MANO layer (OPEN-O, Open Baton, OpenStack Tacker), so now there is an open source reference architecture for the entire NFV stack, minus VNFs. In other words, OPNFV integrates MANO + VIM + SDN controller + NFVI software with continuous testing. Pretty cool, huh? Additionally, there are numerous enhancements around data plane acceleration, architecture improvements, feature enhancements and hardening. See the official OPNFV blog.

Want to learn more? Join Serg Melikyan from Mirantis and me for a deep-dive webinar where we will discuss "What's New in OPNFV Danube".

Amar Kapadia

AT&T proposed two new OPNFV projects yesterday: Armada and CORD.

AT&T proposed two new OPNFV projects yesterday: Armada and CORD. Here's my assessment of why they did so.

Armada is a new OPNFV installer. There are four installers already: Fuel (Mirantis), Apex (Red Hat), JOID (Canonical) and Compass (Huawei). Plus there is another one in incubation -- Daisy (ZTE). So why would anybody want to propose yet another installer? Actually, there is a really good reason. The future of OpenStack lifecycle management seems to be headed toward containerizing all OpenStack services and then orchestrating them through a COE (container orchestration engine) such as Kubernetes. Day 1 management, i.e. the initial install, gets dramatically simpler and more flexible, and Day 2 management, i.e. post-deployment changes such as configuration changes, functionality additions, capacity expansion, architecture changes, updates, upgrades and rollbacks, becomes possible without huge amounts of manual effort. Moreover, over time, updates and upgrades can be eliminated entirely in favor of a CI/CD pipeline. As an added bonus, one also gets access to modern monitoring tools such as Prometheus and Fluentd. Armada uses Kubernetes, containerized OpenStack services and Helm (a package manager for Kubernetes). Armada is also independent of any particular vendor, whereas the other installers discussed above all have vendor affiliations. Net-net, Armada is the future, and none of the existing projects offer what it's shooting for. Daisy goes part of the way by using OpenStack Kolla, but falls short.

OpenCORD is another open source NFV project (other than OPNFV). At its core, OpenCORD uses OpenStack as the VIM, ONOS as the SDN controller, XOS as the VNFM, and OCP servers and bare-metal switches. Since OpenCORD is prescriptive, it could be considered an OPNFV scenario and tested as such. OpenCORD focuses on development, and again there's good synergy with OPNFV, where a lot of effort is spent on integration and testing. However, to date OpenCORD has operated separately and independently from OPNFV. That's the gap the OPNFV CORD project fills -- it introduces OpenCORD as an OPNFV scenario. OpenCORD actually has three profiles (Enterprise, Residential and Mobile), all of which use the same core technology (called a POD, not to be confused with a Pharos POD) but add different VNFs and access connectivity. Over time, these different flavors can be tested as part of the OPNFV CORD project as well. In summary, this project makes great sense, since it increases collaboration between open source projects and reduces duplication of effort.

The timeline for both projects is the Euphrates release, i.e. this fall. Exciting times!