Aarna.ml

Blog

Amar Kapadia

The Telecom Industry Has Moved to Open Source. But What is Open Source? Why Open Source? What is a Healthy Open Source Project? Understand the Open Source Consumption Models.

In a blog titled "The Telecom Industry Has Moved to Open Source", Arpit Joshipura, GM of Networking & Orchestration + Edge/IOT at the Linux Foundation, is quoted as saying, "We have seen a 140-year-old telecom industry move from proprietary and legacy technologies to open source technologies with LF Networking." I couldn't agree more. As with enterprises, open source adoption will increase within Communication Service Providers (CSPs). And it will increase significantly.

At the same time, there is massive confusion amongst CSPs in terms of what open source is, and how they should consume it. I've heard statements such as, "I'm going to wait until project XYZ is a stable product" or "why should I pay for open source software support, the community will provide support." The implication here is that the community is working on full productization and/or providing L1-3 support. Here are some of my thoughts on open source:

What is Open Source?

Isn't open source simply code developed in an open repo under an open source license? Yes, but that is a very limited understanding. Merely having open access to the code does not make it usable. True open source means that all aspects of code development need to be open:

  • Code development  
  • Test suites  
  • Continuous Integration (CI) pipeline  
  • Code review process/disposition of reviews  
  • Integration labs  
  • Backlog list (roadmap features/bugs) and process of deciding which items make it in a particular release  
  • Meetings  
  • Wiki pages/documentation  
  • Interop testing  
  • Use case discussions/demos

Linux Foundation projects have governance in place to make all these aspects open.

What Open Source Is Not

There are two major misconceptions: that community open source (as opposed to a commercial distro) is a product, and that open source is free. These two points are obviously interrelated. An open source community's charter is to develop code, not to develop and support a fully finished product. For this reason, the open source community typically works on interesting technical problems; it doesn't do the laborious work of building a product. Unfortunately, a lot of productization in terms of stability, scalability, security, performance, documentation, manageability, installation, lifecycle management, and monitoring is not the most exciting work. This same point also explains why open source is not free: either the end user or a commercial distribution vendor has to do the work of creating a finished product from the open source code.


Why Open Source?

If open source is not free and it is not a finished product, then why bother using it? Shouldn't one just keep using proprietary products? Not really. There are numerous very good reasons to embrace open source. Open source:

  • Supplements/replaces standards: Standards often move very slowly. Or they are not grounded in reality. Or they leave things open to interpretation. Open source projects can supplement standards with a reference implementation and in some cases replace them entirely.  
  • Results in faster innovation: Since programmers from diverse backgrounds work on open source projects, they can typically innovate faster. In a project such as the Linux Foundation ONAP project, you have a diversity of programmers from classic telco to cloud technologies to hardware to security to OSS/BSS backgrounds. This means ONAP can innovate faster than any proprietary product.  
  • Roadmap influence: Traditionally an end user has to talk to a sales person who talks to a product manager who talks to an architect who talks to the engineers. Anybody remember the game of telephone? By the time the engineer gets the requirement, it's hardly recognizable. In open source, end users can influence the project directly.  
  • Reduced lock-in: A healthy open source project (see next section) typically has multiple commercial vendors. This reduces, if not eliminates, vendor lock-in. So even though open source is not free, this particular point might result in open source being less expensive than proprietary.  
  • Transparency: If an open source project is truly open as per the section above, there is full transparency in terms of what the community is doing, and more importantly how decisions are being made.

What is a Healthy Open Source Project?

A healthy open source project, in my view, has five attributes:

  1. Truly open: A healthy open source project, see above, is truly open in all aspects.  
  2. Multiple contributors: An indication of a healthy project is that no one company dominates contributions. The more uniform the distribution of contributions, the better off the community is.  
  3. Active end-user participation: This is a critical point. Ideally end-users should be contributing code. But this need not always be the case; end-users can contribute in other ways, e.g. by defining requirements, participating in use cases, providing customer case studies etc.  
  4. Commercial activity: A project may do well for a while without commercial activity (commercial activity may be defined as both vendors generating revenue and end-users putting the project in production to generate value), especially if it is early in the project's life. However, commercial activity is the oxygen that sustains a project. Without commercial activity, it is very difficult to see how a project can survive long term.  
  5. Ecosystem: Related to the previous point, a healthy open source project has an ecosystem of interoperating products (proprietary or open source). Again, early in a project's life, the ecosystem may be small.
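Attribute 2 (multiple contributors) can be eyeballed directly from a project's commit history. The sketch below computes the top contributor's share from `git shortlog -sn`-style output; the company names and commit counts are invented for illustration:

```shell
# Rough uniformity check: what share of commits comes from the top
# contributor? "shortlog" mimics `git shortlog -sn` output; the names
# and numbers below are made up.
shortlog='120 Company A
90 Company B
60 Company C
30 Company D'
top=$(printf '%s\n' "$shortlog" | awk 'NR==1 {print $1}')
total=$(printf '%s\n' "$shortlog" | awk '{sum += $1} END {print sum}')
echo "top contributor share: $((100 * top / total))%"
```

The lower this share (and the flatter the tail behind it), the healthier the community by this metric.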

Let's take ONAP as a case study. By being a part of the Linux Foundation, ONAP is automatically truly open.

Second, the distribution of contributions is quite uniform (contributions from Dec 26, 2018 to Mar 26, 2019).  

Third, ONAP boasts end user participation from AT&T, China Mobile, Orange, Bell Canada, Vodafone, Verizon, Turk Telecom, Reliance, China Telecom and others. Some contribute code. Others participate in use case/blueprint development. And yet others contribute by providing requirements. Finally, end user customer success stories are just starting to emerge.

Fourth, the ONAP commercial activity is early. We (Aarna) provide a development distribution (as of March 2019; if you are reading this in the future, we should have a production distribution), training, and some targeted professional services. SIs such as Tech Mahindra, Accenture and others provide a much broader set of ONAP professional services. Orange has put parts of ONAP in production and is mandating ONAP compliance in their RFPs.

For the last point, ONAP is still early in terms of an ecosystem. The upcoming OVP VNF Validation program should expand the ecosystem significantly. The ecosystem of interoperating cloud, SDN controller and OSS/BSS systems is also early and growing.

While the last 2 points are a bit weak, it is still early in the life of ONAP, so we can hazard a guess that ONAP is going to be a healthy project. You can use the same evaluation technique on other projects too.  

Open Source Consumption Models

Given the above background, there are three ways to consume open source projects. Two lead to success and one leads to failure.

Successful Model #1 (Do-it-yourself, DIY): The DIY model requires that end users support themselves with the open source technology. This requires architecture, engineering, testing, and support teams. A meaningful open source project can easily require 20-30 full-time people, which can be very expensive. Also, the end user has to be able to retain these individuals. There are many stories where end users spun up an internal OpenStack team, only to lose the staff once the team members had completed their actual goal — which was learning OpenStack and improving their resume appeal. Given the hassle and cost, an end user needs to determine whether the DIY model is a core differentiator for them. Examples of end users successful with the DIY model are Google with Kubernetes and Amazon with Hadoop.

Successful Model #2 (Commercial distribution): This model requires that an end user work with a commercial distribution vendor for an open source project. The commercial vendor provides a finished product and L1-3 support. This is normally a safe default path for most end users.

Failed Model #1 (DIY without a long-term internal team): I can't tell you how many end users mistakenly believe that an open source project is a free finished product (reminds me of the saying: if something is too good to be true, it probably is). Even if a customer can run a successful PoC using this technique, the model is doomed once it goes into production. There are numerous anecdotes of how a bunch of smart architects and engineers got OpenStack to run in a lab and then put it into production. Six months later, either the engineers were gone, or they were around but didn't know how to upgrade, or the configuration changes got so complex that nobody could support the deployment after a year. This is a horrible lose-lose model. The end user directly experiences a negative business impact with very unhappy customers. And the open source project unfairly gets a bad name.

In short, open source is on a meteoric rise amongst CSPs. And I hope CSPs can embrace open source with their eyes wide open.

Needless to say, if you are looking for a commercial ONAP vendor, please contact us.

Rajendra Mishra

This blog explains the steps needed to use Robot scripts for onboarding and instantiation of VNFs in an OOM-based ONAP deployment, such as the one used by the Aarna.ml ONAP Distribution (ANOD).

In this blog, I will explain the steps needed to use Robot scripts for onboarding and instantiation of VNFs in an OOM-based ONAP deployment, such as the one used by the Aarna.ml ONAP Distribution (ANOD). The Robot scripts come with the default deployment, and you can learn more about the Robot Framework here.

Setup Requirements

Before we go deeper into the various Robot scripts, we need to make sure that the following setup is available and working correctly.

  1. ONAP should be set up with the correct OpenStack configurations. You can refer to Aarna's ANOD for deploying ONAP.  
  2. All Kubernetes containers in ONAP need to be up and running.  
  3. ONAP should be accessible over the GUI.
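Requirement 2 can be sanity-checked by counting pods that are not yet Running or Completed. To keep the sketch self-contained, it runs on captured sample output (the second pod name is invented); on a live cluster, pipe `kubectl get pods -n onap --no-headers` into the same awk filter:

```shell
# Count pods that are not in Running/Completed state. "pods" holds sample
# `kubectl get pods -n onap --no-headers` output; the failing pod is invented.
pods='dev-robot-54bbc764fb-2k2mw   1/1   Running            1   49d
dev-so-7d9bf5c874-x1y2z      0/1   CrashLoopBackOff   5   49d'
not_ready=$(printf '%s\n' "$pods" | awk '$3 != "Running" && $3 != "Completed"' | wc -l)
echo "pods not ready: $not_ready"  # 0 means requirement 2 is satisfied
```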

Robot Script Commands

The Robot framework in ONAP provides us various options to onboard/distribute/instantiate VNF packages.

First, you need to download the latest ONAP source files from the git repo.

The base Robot script is located at:

oom/kubernetes/robot/

The first step is to initialize the Robot setup with ONAP. The command is as follows:

cd oom/kubernetes/robot/

./demo-k8s.sh onap init

This will create test customer models and distribute them in ONAP. The script takes several minutes to complete. After it completes, one can log into the ONAP GUI and see the various test customers and test models.

The Robot framework provides the following additional commands:

1. Create demo customer (Demonstration) and services, etc.

$> ./demo-k8s.sh onap init_customer

2. Distribute demo models (demoVFW and demoVLB)

$> ./demo-k8s.sh onap distribute  []

3. Preload data for a VNF for a given module

$> ./demo-k8s.sh onap preload

4. Instantiate the vFW module for the demo customer. This goes all the way to creating a stack in OpenStack.

$> ./demo-k8s.sh onap instantiateVFW

5. Delete the module created by instantiateVFW

$> ./demo-k8s.sh onap deleteVNF
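The commands above can be chained into a small end-to-end script. The sketch below is a dry run that only prints each step; drop the `would run:` echo to execute them for real from `oom/kubernetes/robot/` (this wrapper is our own convenience, not part of the ONAP tooling):

```shell
# Dry run of a typical vFW demo sequence; each step is printed, not run.
steps='./demo-k8s.sh onap init
./demo-k8s.sh onap instantiateVFW
./demo-k8s.sh onap deleteVNF'
printf '%s\n' "$steps" | while read -r step; do
  echo "would run: $step"
done
```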

The Robot framework runs as a Kubernetes container on one of the ONAP server nodes. The Robot container has template files for the following VNFs:

  • vCPE  
  • vFW  
  • vFWCL  
  • vLB  
  • vLBMS  
  • vVG  

Using Robot scripts, one can onboard the above VNFs in ONAP. Some base functionality is provided as part of the ONAP package and the rest can be built by modifying the scripts. The scripts can also be further tweaked to create custom 'CUSTOMERS' and distribute them.

One can log in to the Robot container using the following commands.

# Get the Robot POD name

$> kubectl get pod --all-namespaces -o=wide | grep robot

onap dev-robot-54bbc764fb-2k2mw 1/1 Running 1 49d 10.42.108.186 beijing01

# Connect to Robot POD

$> kubectl exec -it -n onap dev-robot-54bbc764fb-2k2mw -- /bin/bash

The above kubectl command opens a bash shell within the Robot container. In the shell, we can view the various Robot scripts that use Python commands to operate on ONAP.

The above-mentioned VNF packages and corresponding OpenStack Heat templates are located in the following directories:

root@dev-robot-54bbc764fb-2k2mw:/# cd /share/heat/

root@dev-robot-54bbc764fb-2k2mw:/share/heat# ls

OAM-Network  ONAP  temp  vCPE  vFW  vFWCL  vLB  vLBMS  vVG

root@dev-robot-54bbc764fb-2k2mw:/share/heat# cd vFW

root@dev-robot-54bbc764fb-2k2mw:/share/heat/vFW# ls

MANIFEST.json  base_vfw.env  base_vfw.yaml

Above we can see the .yaml and .env files for the virtual firewall (vFW).

When the Robot scripts are invoked, they pick up the above files, create a zip, and onboard it to ONAP.
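For reference, the MANIFEST.json that ties the template and env files into a package looks roughly like the sketch below. The structure shown is an approximation of the onboarding package format, with illustrative values; check the actual /share/heat/vFW/MANIFEST.json in your container for the exact fields:

```json
{
  "name": "vFW",
  "description": "Heat package for the virtual firewall demo",
  "data": [
    {
      "file": "base_vfw.yaml",
      "type": "HEAT",
      "isBase": "true",
      "data": [
        { "file": "base_vfw.env", "type": "HEAT_ENV" }
      ]
    }
  ]
}
```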

More details on the Robot scripts can be found under:

/var/opt/OpenECOMP_ETE/robot/testsuites

After the completion of a Robot command, one can look at the logs located in the following directory.

root@dev-robot-54bbc764fb-2k2mw:/share/logs/demo/InitDemo# pwd

/share/logs/demo/InitDemo

root@dev-robot-54bbc764fb-2k2mw:/share/logs/demo/InitDemo# ls

log.html  output.xml  report.html

# We can access the Robot logs from the web UI

Logs URL = http://<PORTAL LOAD BALANCER  IP>:30209/logs/demo/

To copy files from the container to the host server, use the following command.

kubectl cp onap/<robot container id>:<path to file>  <local name>

e.g.

kubectl cp onap/dev-robot-54bbc764fb-2k2mw:/var/opt/test  /tmp/test

To customize Robot commands or add functionality, one can look into the following script.

root@dev-robot-54bbc764fb-2k2mw:/var/opt/OpenECOMP_ETE/robot/testsuites/demo.robot

The above file contains Robot tags for the above-mentioned commands.

In the next blog, I will explain how some of these Robot scripts can be customized for onboarding VNFs and network services that you created, so that the process of onboarding and testing them can be automated.

Curious to learn more? Take our 2-day training in San Jose on April 1-2, just before ONS. Or give ANOD a spin.

Amar Kapadia

5G and multi-access edge computing (MEC) will require a number of brand-new skills.

5G and multi-access edge computing (MEC) will require a number of brand-new skills. There simply aren't enough people you can hire or outsource to, so CSPs will have to cultivate some of these new skills in-house. Stack disaggregation will put a further burden on this critical need. 2019 is the right time to start gaining these skills. If you wait too long, you run the risk of not having enough in-house expertise to implement 5G and MEC!

Examples of new skills needed are as follows.

Non-functional skills

  • Agile programming  
  • DevOps and CI/CD  
  • API/CLI instead of GUI  
  • Modeling languages — TOSCA, Heat, YAML etc.  
  • Plan/build ops. mentality instead of break/fix  
  • Functional testing of NFV/SDN stack  
  • Performance testing NFV/SDN stack

General functional skills

  • SDN and NFV basics  
  • MEC basics  
  • Using OpenStack, Kubernetes (k8s), SDN controllers e.g. ODL or Tungsten Fabric  
  • Operating OpenStack, k8s, and SDN controllers  
  • Using big data stacks  
  • Modern monitoring techniques  
  • AI/ML  
  • Hypervisors (vSphere or KVM) and  containers  
  • Hardware acceleration/Enhanced Platform Awareness (EPA)

Specialized functional skills

  • Using ONAP/OSM  
  • Operating ONAP/OSM  
  • New VNFs such as vRAN, SD-WAN, NGFW etc.

There is no one-stop shop to acquire these skills. Nor does any one individual need to know all aspects.

Our courses fill a small aspect of the above list (ONAP, OPNFV that covers testing, and NFV101). If these are gaps you are looking to fill, consider taking one of our courses. We have public classes coming up in Berlin in late Feb (ONAP Bootcamp II) and then in San Jose in early April just before ONS (ONAP Bootcamp, ONAP+AI/ML intro). Or you can request an onsite private training just for your company employees.

Sriram Rupanagunta

This technical blog explains how to run NSB (Network Services Benchmarking) using Yardstick, in a virtual environment, using the Aarna.ml ONAP Distribution (ANOD) on GCP.

This technical blog explains how to run NSB (Network Services Benchmarking) using Yardstick, in a virtual environment, using the Aarna.ml ONAP Distribution (ANOD) on GCP. You can get free access to ANOD for GCP here.

We believe that Yardstick NSB will complement ONAP very well by helping validate and certify the performance of VNFs before they are onboarded. See my August blog that talks about this more. See also my YouTube video titled "OPNFV Yardstick NSB Demo".

  1. Deploy OPNFV on a CentOS 7.x server or a GCE instance, with 16 vCPUs and 32GB of memory.

sudo -i

cd /opnfv_deploy

nohup opnfv-deploy -v --debug --virtual-cpus 16 --virtual-default-ram 32 -n network_settings.yaml -d os-nosdn-nofeature-noha.yaml

# This takes about 90 minutes to complete!

  2. The IP address of the OpenStack Horizon interface can be found in the OpenStack credentials file on the undercloud instance (log in using the command "opnfv-util undercloud" and refer to the file overcloudrc.v3).

sudo -i

opnfv-util undercloud

cat overcloudrc.v3

  3. Log into the Horizon dashboard to examine OpenStack parameters such as the number of resources for hypervisors and so on (after setting up a SOCKS proxy tunnel).  
  4. Create a KVM instance of Ubuntu 16.04 with the following resources. You can refer to Aarna's Lab ONAP300 (which sets up an Ubuntu 16.04 VM on a GCE instance). If you are running this on a local server, you need to create this VM using a standard Ubuntu 16.04 cloud image. Instead of using this VM for deploying ONAP, you can use it to run NSB/Yardstick. This VM requires the following resources:

8 VCPUs

100GB RAM

(Note: the NSB scripts do not work on CentOS distributions, so they cannot be run from the base jump host)

  5. Log into the Ubuntu instance as user "aarna" and copy the OpenStack credentials file to it (in the directory /opnfv-yardstick). Edit this file to remove comments and shell commands, retaining only the environment variables (openstack.creds.sh).  
  6. Run the following as the sudo user on the Ubuntu VM:

sudo -i

cd /home/aarna

git clone https://gerrit.opnfv.org/gerrit/yardstick

cd yardstick

# Switch to the latest stable branch

git checkout stable/euphrates


# Run as sudo user...

nohup ./nsb_setup.sh /opnfv-yardstick/openstack.creds.sh &

# This command takes about 1 hour to complete

  7. Once this command completes successfully, you can see the yardstick container created on this VM. Run a bash shell on this container; there is no need to explicitly start the container (nsb_setup does it).

docker ps -a # You should see yardstick container

docker exec -it yardstick /bin/bash # execute shell in the container

  8. The nsb_setup script also prepares the cloud image (yardstick-samplevnfs) for OpenStack and adds it to Glance. This image contains all the utilities needed for running the NSB sample VNFs.  
  9. Create a config file for Yardstick:

cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf

# Edit this file, and add file as the destination (in addition to http). It can also

# be set to influxdb for viewing the results in Grafana
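The edit described in the comments above might look like the fragment below. The section and key names follow the yardstick.conf.sample shipped with Yardstick, but verify them against your branch before relying on this sketch:

```ini
# /etc/yardstick/yardstick.conf -- illustrative fragment
[DEFAULT]
debug = False
# write results to a local file in addition to the default http dispatcher;
# use influxdb instead to view the results in Grafana
dispatcher = http,file

[dispatcher_file]
file_path = /tmp/yardstick.out
```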

  10. For the purpose of this blog, we will use the sample application l2fwd, which performs L2 forwarding. Edit the l2fwd test case from the prox VNF (yardstick/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd-2.yaml) and reduce the resource requirements in the context section of the file:  
      • vcpus to 8  
      • memory to 10G  
  11. Edit the following file and make a few changes. These changes are not needed in a hardware-based deployment, but since a VM deployment takes more time to boot the VMs on OpenStack, they are required here:  
      • File: yardstick/ssh.py  
      • Change the timeout from 120 to 480  
      • Change the SSH retry interval from 1 sec to 2 sec  
  12. Set up the environment to run the test:

source /etc/yardstick/openstack.creds

export EXTERNAL_NETWORK="external"

  13. Run the sample NSB test from Yardstick:

yardstick --debug task start  samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd-2.yaml

  14. This takes a few minutes to complete. While it is running, you can examine the Horizon dashboard and look at the stack created (which includes all the resources needed to run the tests) and the two VM instances that are created. At the end of the test, the stack will be undeployed.  
  15. The results of the test will be in /tmp/yardstick.out, which you can examine.  
  16. Influxdb/Grafana can be configured to view the results graphically (NOTE: this is not included in the recorded session).

The L2 forwarding service consists of two VNFs: TG (Traffic Generator) and L2FWD (L2 forwarding function). Both functions are implemented using the open source tool prox (Packet PROcessing eXecution engine). Prox can be configured to run in various modes, and for this network service we will run prox on both VNFs, in different modes. On the TG VNF, prox runs as a traffic generator, using the configuration file gen-l2fwd-2.cfg. On the L2FWD VNF, prox runs as an L2 forwarding function, using the configuration file handle-l2fwd-2.cfg.

The prox application is built using the open source library DPDK.

The network configuration of the service is as shown below:

Curious to learn more about ONAP? Consider signing up for one of our upcoming trainings.

Amar Kapadia

To Onboard or Not to Onboard: ONAP Presents a Dilemma for VNF Vendors.

When it comes to NFV/SDN management, orchestration, and automation, ONAP is clearly a leading open source project. The operators behind ONAP represent more than 60% of worldwide mobile subscribers. Needless to say, ONAP has a high likelihood of success, and that should serve as a motivator for VNF (and PNF) vendors to onboard their products onto ONAP. On the other hand, ONAP is in the very early stages of production deployment, and that might cause business decision makers to question the urgency.

Additionally, VNF vendors are hearing about ONAP nuances such as Heat vs. TOSCA VNF descriptors, sVNFM vs. gVNFM, and ETSI compliance vs. non-ETSI compliance, further creating confusion in their minds.

To aid in cutting through this confusion and to help VNF vendors craft an ONAP strategy, we have a new "VNF onboarding for ONAP" white paper. The document is meant mostly for business decision makers. Check it out and let us know what you think.

Amar Kapadia

The Linux Foundation ONAP project promises to automate not just the orchestration and lifecycle management (LCM) of network services, but also service assurance.

The Linux Foundation ONAP project promises to automate not just the orchestration and lifecycle management (LCM) of network services, but also service assurance through something called closed loop automation. Closed loop automation works as follows:

  • All monitoring data — events, alarms, logs, metrics, files — goes to an analytics engine. A closed loop recipe, i.e. a sequence of big data analytics microservices, processes that data. For example, a sustained increase in packet loss may trigger a packet loss event.  
  • The event from the analytics engine goes to a policy engine. The policy engine decides what action to take. For example, the policy engine may decide to do nothing if the packet loss is below a threshold. On the other hand, it may publish an action for the orchestration/LCM side of the house. In the above case, it could trigger a scale-out or configure throttling settings.

However, life is not usually this straightforward where every closed loop can be clearly defined ahead of time. Wouldn't it be nice if an AI/ML microservice was part of the closed loop recipe? This way, we wouldn't have to figure out every possible closed loop recipe permutation and the AI/ML microservice could assist.
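Stripped of the machinery, the analytics-to-policy handoff above boils down to a threshold decision. A toy sketch, where the metric value, threshold, and action name are all invented for illustration:

```shell
# Toy closed-loop policy: the "analytics" stage has emitted a packet-loss
# metric; the "policy" stage decides which action to publish.
packet_loss_pct=7   # invented value from the analytics engine
threshold=5         # invented policy threshold
if [ "$packet_loss_pct" -gt "$threshold" ]; then
  action="scale-out"   # publish an action to the orchestration/LCM side
else
  action="none"        # below threshold: do nothing
fi
echo "policy action: $action"
```

An AI/ML microservice in the recipe would replace the fixed threshold with a learned decision, which is exactly the gap described above.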

Now we can assist you with the above needs. We have partnered with another SF Bay Area startup called Davinci Networks. Davinci Networks' entire focus is on enabling intelligent networks through AI/ML microservices built using specialized deep neural networks. These deep neural networks use the network's internal monitoring data and combine it with external data to improve the quality of intelligence. Through our partnership, we will provide professional services, training, and over time products that span ONAP and AI/ML microservices.

Curious to learn more? Sign up for one of our joint 1.5-day courses. Get a boost in your career by learning about two hot technologies: ONAP + AI/ML.

Register for the Berlin Feb 26-27 ONAP+AI/ML Course

Register for the San Jose Apr 1-2 ONAP+AI/ML Course

Of course, you can always try out our ONAP distribution "ANOD" on GCP, sign up for our regular ONAP courses, or request professional services around ONAP.