This post will walk you through the integration between Juniper Contrail networking (as the SDN solution) and Cisco NSO (as the service orchestrator). The final goal is a seamless integration between the different service components inside a Telco network, and an orchestration workflow that can build, deploy, and manage network services, especially in large environments.
So, without further ado, let’s start
Where exactly does each solution fit?
Juniper Contrail extends the networking services provided to virtualized workloads out to the physical devices in the same cloud environment. It uses a set of protocols such as BGP, MPLSoUDP, MPLSoGRE, and XMPP to communicate between its components and to ensure that service routes are installed in the appropriate locations to build both the control and forwarding planes.
The controller component inside the Contrail ecosystem also works as a route reflector: whenever a route needs to be injected to the outside world or into the underlay fabric, the controller advertises it on behalf of the vRouter (installed in each compute node) and adds the correct reachability information. The same idea applies in the other direction: when the controller receives a prefix matching the route target of a virtual network, it makes sure the route is installed in the vRouter so it becomes available to the instances.
The forwarding plane is built by leveraging overlay tunnels. Contrail can use MPLS over GRE/UDP and VXLAN as overlay technologies to forward packets to and from network services. The tunnels are built automatically whenever a packet needs to move back and forth.
On the other hand, Cisco NSO is a service orchestrator that leverages YANG modeling to describe the service. You develop a service using YANG, NSO translates the service components into device-model configuration (also based on YANG), and the configuration finally gets pushed to the devices through a NED (Network Element Driver).
Needless to say, both solutions provide robust northbound API access to manage their components.
The idea behind both solutions is to provide as much abstraction as possible, either by creating a service model using Cisco NSO or by managing the overlay/underlay communication details using Juniper Contrail.
So why not get the best of both worlds and integrate them together?
Physical Lab topology
The lab topology consists of a standard CLOS architecture connecting Red Hat OpenStack nodes (RHOSP10, deployed with one controller and two compute nodes) and Juniper Contrail networking (Controller, Analytics, and Analytics DB, each on a standalone server). Contrail is already integrated with OpenStack, and a vRouter is installed on each compute node and responsible for all forwarding, while the Cisco orchestrator sits on another leaf. Finally, the POD gateways are a Juniper MX (not used in this lab) and a Cisco CSR1000v, both connected to the spines. All nodes can communicate with each other through the fabric and with the Internal API subnet (10.99.100.0/24). Additionally, the controller establishes a BGP session with both gateways. Again, think of the Contrail controller as a route reflector that reflects routes between the vRouters and the POD gateways.
Normally, Contrail waits for an external trigger to create a network, add a route target to it, and then advertise it to the outside world. We will trigger this call from Cisco NSO. Not only that, but we will also use NSO to create the additional configuration on the Cisco gateway required to complete the service orchestration: the VRF configuration, adding this VRF under the BGP VPNv4 address family, and finally establishing a GRE tunnel back to the vRouter in the compute node that hosts the service VM. Recall that the service prefix itself is not present in the routing tables of any intermediate node between the gateway and the vRouter, so those nodes would drop the packet; GRE headers have to be added to the packet between the gateway and the vRouter in order to reach the service (VRF-Lite with GRE).
This setup is suitable for datacenter environments that usually don’t run MPLS in the core network.
Cisco Gateway base configuration
In the Cisco base configuration, I’m peering with the controller on AS 64512 through MP-BGP. I then apply a route map to each prefix received from that peer, setting the next-hop reachability to the MGRE profile. This profile is configured to establish a dynamic tunnel: the tunnel source will be GigabitEthernet1 and the destination will be the prefix’s next hop.
interface Loopback0
 ip address 192.0.2.1 255.255.255.255
!
interface Loopback1
 ip address 192.0.2.2 255.255.255.255
!
interface Tunnel102
 ip address 192.168.0.129 255.255.255.0
 tunnel source Loopback0
 tunnel destination 192.0.2.2
!
interface GigabitEthernet1
 ip address 10.99.100.254 255.255.255.0
 negotiation auto
!
interface GigabitEthernet2
 no ip address
 shutdown
 negotiation auto
!
l3vpn encapsulation ip MGRE
 transport ipv4 source GigabitEthernet1
!
router bgp 64512
 bgp router-id 10.99.100.254
 bgp log-neighbor-changes
 neighbor 10.99.100.40 remote-as 64512
 neighbor 10.99.100.40 update-source GigabitEthernet1
 !
 address-family ipv4
  no neighbor 10.99.100.40 activate
  default-information originate
 exit-address-family
 !
 address-family vpnv4
  neighbor 10.99.100.40 activate
  neighbor 10.99.100.40 send-community extended
  neighbor 10.99.100.40 route-map SELECT_UPDATE_FOR_L3VPN in
 exit-address-family
!
route-map SELECT_UPDATE_FOR_L3VPN permit 10
 set ip next-hop encapsulate l3vpn MGRE
!
end
Notice also that I have a couple of loopbacks used as the source and destination for leaking routes between the VRF and the GRT (Global Routing Table) over the tunnels. Tunnel102 will always be in the GRT, while the tunnel created through NSO will be in the VRF.
Most of the configuration will be in NSO, since Juniper Contrail is already integrated and configured with Red Hat OpenStack. For the configuration, we will create a service model that contains YANG and Python. YANG is responsible for modeling the service, while Python sends API requests to Contrail to create the desired network with the help of the Contrail vnc_api package. The format for sending the request is as follows:
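A minimal sketch of that call using the vnc_api Python bindings (the credentials, API server address, project name, and network values below are placeholders, not the lab's actual ones):

```python
from vnc_api import vnc_api

# Connect to the Contrail API server (placeholder endpoint and credentials)
vnc = vnc_api.VncApi(username='admin', password='secret',
                     tenant_name='admin',
                     api_server_host='10.99.100.40', api_server_port='8082')

# Create the virtual network under the target project
project = vnc.project_read(fq_name=['default-domain', 'admin'])
vn = vnc_api.VirtualNetwork(name='demo-vn', parent_obj=project)

# Attach the subnet (vn_subnet/vn_ipaddr come from the NSO service input)
subnet = vnc_api.IpamSubnetType(subnet=vnc_api.SubnetType('192.168.10.0', 24))
vn.add_network_ipam(vnc_api.NetworkIpam(), vnc_api.VnSubnetsType([subnet]))

vnc.virtual_network_create(vn)
```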
The vn_subnet and vn_ipaddr will be passed from the user to NSO. Similarly, if you want to attach a route target to the created network, you have to read the network back and update it with the new property (the route target in this case).
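Roughly like this (again a sketch; the route-target value here stands in for whatever the user passed to NSO):

```python
# Read the network back, attach the route target, and update the object
vn = vnc.virtual_network_read(fq_name=['default-domain', 'admin', 'demo-vn'])
vn.set_route_target_list(vnc_api.RouteTargetList(['target:64512:100']))
vnc.virtual_network_update(vn)
```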
For more details about the Contrail API, please check this link.
Moving forward, we will design the YANG part by modeling the information required to create the VN (Virtual Network). For our service, we will need the network name (modeled as vn_name), the IP address (modeled as vn_ipaddr with type inet:ipv4-address), and the network subnet (vn_subnet). We will also need the route target, the GRE IP address, the tunnel number, and finally which device will receive the configuration (Juniper MX or Cisco CSR). The final YANG model is available separately (I chose not to post it in the blog body, as it would not be formatted correctly).
Notice we didn’t define any device-specific configuration; this is just modeling the service. We can also set conditions on the values: for example, we’re restricting the GRE tunnel number to specific ranges (1 to 199 or 1300 to max only).
Next, we will define a template for the Cisco configuration in XML format. NSO will populate this template with the values obtained through the YANG model and push the configuration to the router.
You can check the XML code for this service here.
The final piece of the NSO configuration is the Python code that glues everything together. It is responsible for calling the Contrail API to create the network and attach the route target, populating the XML template with the YANG values, and finally communicating with the CSR device over SSH to apply the configuration.
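The skeleton of that code looks roughly like this (a sketch of a standard NSO Python-and-template service; the servicepoint and template names are made up for illustration, and the Contrail part is the vnc_api code shown earlier):

```python
import ncs


class ServiceCallbacks(ncs.application.Service):

    @ncs.application.Service.create
    def cb_create(self, tctx, root, service, proplist):
        # 1. Call the Contrail API to create the VN and attach the
        #    route target (see the vnc_api snippets above).

        # 2. Fill the XML template with values from the YANG model and
        #    apply it; NSO pushes the result to the CSR through its NED.
        tvars = ncs.template.Variables()
        tvars.add('VN_NAME', service.vn_name)              # leaf names follow
        tvars.add('TUNNEL_NUMBER', service.tunnel_number)  # the YANG model
        ncs.template.Template(service).apply('vn-service-template', tvars)


class Main(ncs.application.Application):
    def setup(self):
        self.register_service('vn-servicepoint', ServiceCallbacks)
```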
I just use a small hack (with the help of the netaddr module) to convert the subnet mask from dotted notation (x.x.x.x) to CIDR (/y). Contrail accepts the network subnet in CIDR form, while the user might enter it in dotted format, so the Python script takes care of this.
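Something along these lines:

```python
from netaddr import IPNetwork

# Convert a dotted netmask (e.g. 255.255.255.0) to a CIDR prefix length (24)
prefix_len = IPNetwork('192.168.10.0/255.255.255.0').prefixlen
```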
Now, the following video demos the solution in detail (mind the notes written in the Cisco terminal and the OpenStack shell session; also, it’s better to watch in HD so you don’t miss the details).
Some Packet Captures
Let’s look at packet captures from both the contrail-controller and the compute node. First, we notice the BGP OPEN messages exchanged between the controller and the gateway to agree on capabilities.
Next, the controller advertises the reachability information for the created subnet, such as the next hop (so the router knows where the dynamic GRE tunnel should be terminated), along with the subnet’s route target and route distinguisher.
The control plane is now built and populated with all the information needed to start forwarding.
Let’s see the forwarding plane on the compute node.
Yup, as expected: the outer header carries the tunnel endpoints (the gateway and Compute-1 IP addresses), then comes the MPLSoGRE header, and finally the inner IP addresses (the service addresses).
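To make the layering concrete, here is a rough reconstruction of such a packet with Scapy (the addresses and MPLS label are made up; this only illustrates the encapsulation order, not the actual capture):

```python
from scapy.all import IP, GRE
from scapy.contrib.mpls import MPLS

# Outer IP: the tunnel endpoints (compute node -> gateway), then GRE
# carrying MPLS unicast (0x8847) with the service label, then the inner
# service packet itself.
pkt = (IP(src='10.99.100.11', dst='10.99.100.254') /
       GRE(proto=0x8847) /
       MPLS(label=25, s=1) /
       IP(src='192.168.10.3', dst='8.8.8.8'))
pkt.show()
```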
Running your services in a virtual environment doesn’t mean you have to sacrifice your ability to control the traffic, nor does it mean you have to design a complex solution for internetworking between nodes. The key is to understand where each component stands and how to integrate them together.
I hope this has been informative for you, and I’d like to thank you for reading.