Aarna.ml


Sriram Rupanagunta

Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP (Part 2/2)

In the previous installment of this two-part blog series, we looked at why NFV clouds are likely to be highly distributed and why the management and orchestration software stack needs to support these numerous clouds. ONAP is one such network automation software stack. We saw the first three steps of what it takes to register multiple OpenStack cloud regions in ONAP for the vFW use-case (other use cases might need slight tweaking).

Let’s pick up where we left off and look at the remaining steps 4-7:


Step 4: Associate Cloud Region object(s) with a subscriber's service subscription

With this association, this cloud region will be populated into the dropdown list of available regions for deploying VNF/VF-Modules from VID.

Example script to associate the cloud region "CloudOwner/Region1x" with the subscriber "Demonstration2", which subscribes to the service "vFWCL":

curl -X PUT \
  https://:8443/aai/v11/business/customers/customer/Demonstration2/service-subscriptions/service-subscription/vFWCL/relationship-list/relationship \
-H 'accept: application/json' \
-H 'authorization: Basic QUFJOkFBSQ==' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-H 'postman-token: 4e6e55b9-4d84-6429-67c4-8c96144d1c52' \
-H 'real-time: true' \
-H 'x-fromappid: jimmy-postman' \
-H 'x-transactionid: 9999' \
-d '{
  "related-to": "tenant",
  "related-link": "/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x/tenants/tenant/",
  "relationship-data": [
      {
          "relationship-key": "cloud-region.cloud-owner",
          "relationship-value": "CloudOwner"
      },
      {
          "relationship-key": "cloud-region.cloud-region-id",
          "relationship-value": ""
      },
      {
          "relationship-key": "tenant.tenant-id",
          "relationship-value": ""
      }
  ],
  "related-to-property": [
      {
          "property-key": "tenant.tenant-name",
          "property-value": ""
      }
  ]
}'
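The `authorization: Basic QUFJOkFBSQ==` header used throughout these calls is simply the Base64 encoding of A&AI's default `AAI:AAI` username:password pair. If your deployment uses different credentials, you can derive the header value yourself; a quick sketch (assuming the Amsterdam defaults):

```shell
# Build the Basic auth header value from the A&AI username:password pair.
# "AAI:AAI" is the default in Amsterdam; substitute your own credentials.
AAI_CREDS="AAI:AAI"
AUTH_TOKEN=$(printf '%s' "$AAI_CREDS" | base64)
echo "authorization: Basic $AUTH_TOKEN"   # prints: authorization: Basic QUFJOkFBSQ==
```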


Step 5: Add Availability Zones to AAI



Now we need to add an availability zone to the region we created in step 3.

Example script to add an OpenStack availability zone name, e.g. 'nova', to Region1x:

curl -X PUT \
  https://:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x/availability-zones/availability-zone/ \
-H 'accept: application/json' \
-H 'authorization: Basic QUFJOkFBSQ==' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-H 'postman-token: 4e6e55b9-4d84-6429-67c4-8c96144d1c52' \
-H 'real-time: true' \
-H 'x-fromappid: AAI' \
-H 'x-transactionid: 9999' \
-d '{
  "availability-zone-name": "",
  "hypervisor-type": "",
  "operational-status": "Active"
}'
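A malformed request body tends to produce unhelpful A&AI errors, so it can be worth syntax-checking the JSON payload locally before issuing the PUT. A minimal sketch; the "nova" and "QEMU" values are illustrative only, and `python3` is assumed to be available:

```shell
# Write the availability-zone payload to a file, then validate it with
# Python's built-in JSON parser before sending it to A&AI.
cat > az-payload.json <<'EOF'
{
  "availability-zone-name": "nova",
  "hypervisor-type": "QEMU",
  "operational-status": "Active"
}
EOF

python3 -m json.tool az-payload.json > /dev/null && echo "payload OK"
```

The same file can then be passed to curl with `-d @az-payload.json` instead of an inline body.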


Step 6: Register VIM/Cloud instance with SO



SO does not use the cloud region representation from A&AI. Instead, it stores VIM/Cloud instance information in a configuration file inside the container (in the case of OOM). To add a VIM/Cloud instance to SO, log into the SO service container and update the configuration file "/etc/mso/config.d/cloud_config.json" as needed.

Example steps to add VIM/cloud instance info to SO:

# Procedure for mso_pass (encrypted)
# Go to the below directory on the Kubernetes server
//onap/mso/mso
# Then run:
$ MSO_ENCRYPTION_KEY=$(cat encryption.key)
$ echo -n "your password in cleartext" | openssl aes-128-ecb -e -K "$MSO_ENCRYPTION_KEY" -nosalt | xxd -c 256 -p
# Take the output and use it as the mso_pass value in the JSON
# file below. Template for adding a new cloud site and the
# associated identity service:
$ sudo docker exec -it  bash
root@mso:/# vi /etc/mso/config.d/mso_config.json
"mso-po-adapter-config":
  {
    "identity_services":
    [
      {
        "dcp_clli1x": "DEFAULT_KEYSTONE_Region1x",
        "identity_url": ">/v2.0",
        "mso_id": "",
        "mso_pass": "",
        "admin_tenant": "service",
        "member_role": "admin",
        "tenant_metadata": "true",
        "identity_server_type": "KEYSTONE",
        "identity_authentication_type": "USERNAME_PASSWORD"
      },
    "cloud_sites":
    [
      {
        "id": "Region1x",
        "aic_version": "2.5",
        "lcp_clli": "Region1x",
        "region_id": "",
        "identity_service_id": "DEFAULT_KEYSTONE_Region1x"
      },
# Save the changes and restart the MSO container
# Check the new config:
http://:8080/networks/rest/cloud/showConfig
# The output below should match the parameters used in the curl commands
# Sample output:
Cloud Sites:
CloudSite: id=Region11, regionId=RegionOne, identityServiceId=DEFAULT_KEYSTONE_Region11, aic_version=2.5, clli=Region11
CloudSite: id=Region12, regionId=RegionOne, identityServiceId=DEFAULT_KEYSTONE_Region12, aic_version=2.5, clli=Region12
Cloud Identity Services:
Cloud Identity Service: id=DEFAULT_KEYSTONE_Region11, identityUrl=, adminTenant=service, memberRole=admin, tenantMetadata=true, identityServerType=KEYSTONE, identityAuthenticationType=USERNAME_PASSWORD
Cloud Identity Service: id=DEFAULT_KEYSTONE_Region12, identityUrl=https://auth.vexxhost.net/v2.0, msoId=, adminTenant=service, memberRole=admin, tenantMetadata=true, identityServerType=KEYSTONE, identityAuthenticationType=USERNAME_PASSWORD
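To sanity-check the `mso_pass` value produced by the openssl step above, you can decrypt the hex string with the same key and confirm the round trip. A sketch with a made-up key (in practice, use the actual contents of `encryption.key`):

```shell
# Round-trip the SO password encryption: AES-128-ECB encrypt, hex-encode,
# then decrypt and confirm the original password comes back.
MSO_ENCRYPTION_KEY=aa3871669d893c7fb8abbcda31b88b4f   # illustrative key only
PLAINTEXT="your password in cleartext"

ENCRYPTED=$(printf '%s' "$PLAINTEXT" \
  | openssl aes-128-ecb -e -K "$MSO_ENCRYPTION_KEY" -nosalt \
  | xxd -c 256 -p)

DECRYPTED=$(printf '%s' "$ENCRYPTED" | xxd -r -p \
  | openssl aes-128-ecb -d -K "$MSO_ENCRYPTION_KEY" -nosalt)

[ "$DECRYPTED" = "$PLAINTEXT" ] && echo "round trip OK"
```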


Step 7: Change Robot service to operate with the VIM/Cloud instance



When using OOM, the Robot service by default supports the pre-populated cloud region whose cloud-owner is "CloudOwner" and whose cloud-region-id is specified via the "openstack_region" parameter during deployment of the ONAP instance through the OOM configuration files. This cloud region information can be updated in the file "/share/config/vm_properties.py" inside the Robot container. The appropriate relationships between cloud regions and services need to be set up in the same file for the Robot service tests to pass.


Note: The Robot framework does not rely on Multi-VIM/ESR.
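The vm_properties.py change itself is usually a one-line substitution. Below is a sketch against a local stand-in file; the property name `GLOBAL_OPENSTACK_REGION` is hypothetical, so check the actual variable names in your `/share/config/vm_properties.py` before repeating the edit inside the container:

```shell
# Rehearse the edit on a local copy: swap the region value with sed and
# verify the substitution took before doing the same in the Robot container.
cat > vm_properties.py <<'EOF'
GLOBAL_OPENSTACK_REGION = "RegionOne"
EOF

sed -i 's/GLOBAL_OPENSTACK_REGION = "RegionOne"/GLOBAL_OPENSTACK_REGION = "Region1x"/' vm_properties.py

grep 'Region1x' vm_properties.py && echo "substitution OK"
```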

If you have done all 7 steps correctly, Robot tests should pass and both regions should appear in the VID GUI.

If you liked (or disliked) this blog, we’d love to hear from you. Please let us know. Also if you are looking for ONAP training, professional services or development distros (basically an easy way to try out ONAP on Google Cloud in <1 hour), please contact us. Professional services include ONAP deployment, network service design/deployment, VNF onboarding, custom training etc.

References:

ONAP Wiki

vFWCL Wiki

Sriram Rupanagunta

Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP (Part 1/2)

NFV clouds are going to be distributed by their very nature. VNFs and applications will be distributed as shown in the figure below: horizontally across edge (access), regional datacenter (core), and hyperscale datacenters (which could be public clouds), or vertically across multiple regional or hyperscale datacenters.

The Linux Foundation Open Network Automation Platform (ONAP) project is a management and orchestration software stack that automates network/SDN service deployment, lifecycle management and service assurance. For the above-mentioned reasons, ONAP is designed to support multiple cloud regions from the ground up.

In this two-part blog, we will walk you through the exact steps to register multiple cloud regions with ONAP for the virtual firewall (vFW) use-case that primarily utilizes SDC, SO, A&AI, VID and APP-C projects (other use cases will be similar but might require slightly different instructions). Try it out and let us know how it goes.

Prerequisites

  1. ONAP installation (Amsterdam release)  
  2. OpenStack regions spread across different physical locations  
  3. A valid subscriber already created under ONAP (e.g., Demonstration2)

If you do not have the above, and still want to try this out, here are some alternatives:

ONAP Region Registration Steps

VIM/cloud instance information is stored in three locations: A&AI, SO, and Robot. The following seven steps outline the process and provide sample API calls.


Step 1: Create Complex object(s) in AAI


A complex object in A&AI represents the physical location of a VIM/Cloud instance. Create a complex object for each OpenStack region that needs to be configured under ONAP.


Example script to create a complex object named clli1x:

# Main items to be changed are highlighted, but most of the below
# information should be customized for your region
curl -X PUT \
  https://:8443/aai/v11/cloud-infrastructure/complexes/complex/clli1x \
-H 'X-TransactionId: 9999' \
-H 'X-FromAppId: jimmy-postman' \
-H 'Real-Time: true' \
-H 'Authorization: Basic QUFJOkFBSQ==' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Cache-Control: no-cache' \
-H 'Postman-Token: 734b5a2e-2a89-1cd3-596d-d69904bcda0a' \
-d '{
          "physical-location-id": "clli1x",
          "data-center-code": "example-data-center-code-val-6667",
          "complex-name": "clli1x",
          "identity-url": "example-identity-url-val-28399",
          "physical-location-type": "example-physical-location-type-val-28399",
          "street1": "example-street1-val-28399",
          "street2": "example-street2-val-28399",
          "city": "example-city-val-28399",
          "state": "example-state-val-28399",
          "postal-code": "example-postal-code-val-28399",
          "country": "example-country-val-28399",
          "region": "example-region-val-28399",
          "latitude": "example-latitude-val-28399",
          "longitude": "example-longitude-val-28399",
          "elevation": "example-elevation-val-28399",
          "lata": "example-lata-val-28399"
      }'

Step 2: Create Cloud Region object(s) in A&AI

The VIM/Cloud instance is represented as a cloud region object in A&AI and ESR. This representation will be used by VID, APP-C, VFC, DCAE, MultiVIM, etc. Create a cloud region object for each OpenStack Region.

Example script to create a cloud region object for the same cloud region:

curl -X PUT \
  https://:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x \
-H 'accept: application/json' \
-H 'authorization: Basic QUFJOkFBSQ==' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-H 'postman-token: f7c57ec5-ac01-7672-2014-d8dfad883cea' \
-H 'real-time: true' \
-H 'x-fromappid: jimmy-postman' \
-H 'x-transactionid: 9999' \
-d '{
  "cloud-owner": "CloudOwner",
  "cloud-region-id": "Region1x",
  "cloud-type": "openstack",
  "owner-defined-type": "t1",
  "cloud-region-version": "",
  "cloud-zone": "",
  "complex-name": "clli1x",
  "identity-url": "/v3>",
  "sriov-automation": false,
  "cloud-extra-info": "",
  "tenants": {
      "tenant": [
          {
              "tenant-id": "",
              "tenant-name": ""
          }
      ]
  },
  "esr-system-info-list":
  {
      "esr-system-info":
      [
          {
              "esr-system-info-id": "",
              "service-url": "/v3>",
              "user-name": "",
              "password": "",
              "system-type": "VIM",
              "ssl-cacert": "",
              "ssl-insecure": true,
              "cloud-domain": "Default",
              "default-tenant": ""
          }
      ]
  }
}'
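If you are registering several regions (e.g. Region11 and Region12, as in the Step 6 sample output), templating the payload in a small loop beats hand-editing each call. A rough sketch; the region IDs are examples, and the curl step is left as a commented placeholder for your own A&AI endpoint:

```shell
# Generate one cloud-region payload file per OpenStack region.
for REGION in Region11 Region12; do
  cat > "cloud-region-${REGION}.json" <<EOF
{
  "cloud-owner": "CloudOwner",
  "cloud-region-id": "${REGION}",
  "cloud-type": "openstack",
  "complex-name": "clli1x",
  "sriov-automation": false
}
EOF
  # curl -X PUT ".../cloud-regions/cloud-region/CloudOwner/${REGION}" -d @"cloud-region-${REGION}.json" ...
done

ls cloud-region-*.json
```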

Step 3: Associate each Cloud Region object with corresponding Complex Object



This association needs to be set up for each cloud region with its corresponding complex object.

Example script to create the association:

curl -X PUT \
  https://:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x/relationship-list/relationship \
-H 'accept: application/json' \
-H 'authorization: Basic QUFJOkFBSQ==' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-H 'postman-token: e68fd260-5cac-0570-9b48-c69c512b034f' \
-H 'real-time: true' \
-H 'x-fromappid: jimmy-postman' \
-H 'x-transactionid: 9999' \
-d '{
  "related-to": "complex",
  "related-link": "/aai/v11/cloud-infrastructure/complexes/complex/clli1x",
  "relationship-data": [{
          "relationship-key": "complex.physical-location-id",
          "relationship-value": "clli1x"
  }]
}'

We will cover the remaining 4 steps in the next and final installment of this blog series.

In the meantime, if you are looking for ONAP training, professional services or development distros (basically an easy way to try out ONAP in under an hour), please contact us.

Amar Kapadia

Lessons Learned From Being the First ONAP Training Developer.

As the first ONAP training provider, having trained around 60 participants on ONAP (and OPNFV) and created the LF online ONAP/OPNFV courses, we have definitely learned a few lessons!

Here are some observations:

  • Covering the entire ONAP project in a half day of lectures definitely packs in a lot. There is a lot to digest; however, participants seem to be able to keep up. Areas such as VNF onboarding, OSS/BSS interfaces, and closed-loop automation seem to generate the most questions.  
  • Creating labs was challenging for us. ONAP + OPNFV require more resources than a laptop can provide. To make self-service labs possible for the online LF courses, we had to install both software stacks on one VM in a public cloud. This took multiple man-months of effort. Also, we had to create numerous workarounds for issues with the ONAP Amsterdam OOM release. I'm confident all of this will be a lot simpler with the Beijing release.  
  • We initially thought that the vFW demo, from ONAP+OPNFV deployment to vFW network service deployment, could be completed in half a day. We were so wrong! The entire lab end-to-end is more like 1.5 days, which would make the overall course two days instead of one (given the half day of lectures). We had to cut the lab down twice to make everything fit in one day. And based on the ONS training experience, we will need to trim the lab a little more.

Being a pioneer around ONAP has been both challenging and fun! If you are interested in ONAP/OPNFV training or professional services, do not hesitate to contact us.

Amar Kapadia

Akraino is a brand new Linux Foundation Edge Stack project with the seed code coming from AT&T.

It's Friday, I couldn't resist the urge to try out some alliteration!

Akraino is a brand new Linux Foundation Edge Stack project with the seed code coming from AT&T. The move is very similar to what happened almost exactly a year ago with OpenECOMP (now ONAP), and ONAP has had a very successful first year.

There aren't many details available on Akraino yet; my guess is more will be made available at the Open Networking Summit later in March '18.

I have no inside knowledge of what Akraino is, but I was fortunate enough to attend a presentation by Kandan & Rodolfo from AT&T at OpenDev 2017 where they presented the various AT&T open source efforts around edge computing. Let me describe what I heard then, since this might be pertinent to Akraino.

While OpenStack and Kubernetes (NFVI/VIM) are available as underlying edge stack technologies, there's nothing available to deploy and manage the lifecycle of these stacks at scale. With tens of thousands of central offices (i.e., edge locations), and even more edge sites possible through radio tower colocation, roadside deployments, or even customer premises, manual orchestration of the edge stack is simply not an option.

The challenges laid out by AT&T during OpenDev for an edge stack are:

1. The deployment and lifecycle management needs to be fully automated

2. The automated process needs to support a very large scale: tens or hundreds of thousands of edge stacks, if not millions

3. The stack needs to be modular to allow for different sizes/capabilities depending on how constrained the edge environment is

4. The stack needs to support new hardware acceleration technologies ranging from GPU, TPU, NPU, FPGA etc.

5. Finally, the edge stack needs to support integration with ONAP and other related software stacks

Given these requirements, AT&T talked about six open source software projects, all edge related, that it has started:

The git repositories are at: OpenStack Helm, Promenade, Shipyard, Drydock, Armada, Deckhand.

If Akraino has anything to do with these initiatives, I think it is a huge move toward enabling edge computing and making the edge stack open source (it's hard to imagine a proprietary edge stack with something like this on the horizon). Like you, I'm looking forward to finding out more.

And oh by the way, if you are attending ONS, do sign up for ONAP or OPNFV training courses conducted by my colleagues and me.

Amar Kapadia

Verizon joined ONAP as a platinum member today.

Verizon joined ONAP as a platinum member today. This is actually a very big deal. It means that two of the largest US and two of the largest Chinese communications service providers (CSPs) back ONAP (along with leading operators from France, the UK, India, Turkey and Canada). In the case of AT&T and Verizon, these are two traditional competitors putting their collective wood behind the ONAP arrow.

The diversity, vibrancy and will of a community generally ensures the success of an open source project. Seems like we have all of those ingredients with ONAP.

I have no doubt ONAP will be the default SDN/NFV automation engine. It will also be the default MEC management and orchestration engine. If you are a CSP or a telecommunications vendor, you can't hold off looking at ONAP any more.

Want to learn more? Join the Aarna-Cloudify "An Introduction to ONAP" webinar or request our ONAP onsite training.

Amar Kapadia

Let's discuss the latest Amsterdam release of ONAP.

I have been talking about ONAP via a blog series (last blog here). It's perhaps useful to take a break and discuss the latest Amsterdam release. I think this release is important for three reasons:

1. It shows that the mega-merger of ECOMP and Open-O actually works. Perhaps more important than the codebase merger is the collaboration between people, and the fact that they were able to get a release out when they said they would. This is a strong confidence-building indicator.

2. It proves that the key architectural principles of ONAP, such as model-driven design, cloud native, DevOps, real-time operation and closed-loop automation, actually work in real life. These are new and rather futuristic concepts that help move operators from a break-fix mindset to a plan-build one. And with Amsterdam, the future is here.

3. It shows that the release is useful through two concrete use case blueprints: VoLTE and residential vCPE. Of course, you can create any service using ONAP; it's a general purpose platform not restricted to these narrow services, but having these two use cases shows that ONAP can do something useful. It also provides potential users with a concrete step-by-step example of how to extract value from ONAP.

So now we get to the key question: what should you do next?

For operators:

Telecom operators should be thinking about an ONAP POC with the residential vCPE use case. The residential vCPE use case is 100% open source, so there are no vendor dependencies. Doing a POC will expose you to all the architectural concepts of ONAP.

For VNF vendors:

Maybe you don't have a strong enough business case to package your VNF for ONAP yet, but we all know that sooner or later you will have to bite the bullet on this. It's a matter of time. An ONAP POC with the residential vCPE use case can help you understand how ONAP works and how you could package up your VNF for ONAP in the future. You can even start thinking about how you might differentiate your VNF for ONAP.

Looking for help? Feel free to contact us for training, deployment services and VNF packaging around ONAP.