Integrating Juniper SDN Contrail with Cisco Orchestrator (NSO)

This post walks you through the integration between Juniper Contrail networking (as the SDN solution) and Cisco NSO (as the service orchestrator). The end goal is a seamless integration between the different service components inside a telco network, and an orchestration workflow that can build, deploy, and manage network services, especially in large environments.

So, without further ado, let’s start

Where exactly does each solution fit?

Juniper Contrail extends the networking services provided to virtualized workloads out to the physical devices in the same cloud environment. It uses a set of protocols such as BGP, MPLSoUDP, MPLSoGRE, and XMPP to communicate between its components and to ensure service routes are installed in the appropriate locations, building both the control and forwarding planes.


The controller component inside the Contrail ecosystem also works as a route reflector. Whenever a route needs to be injected into the outside world or into the underlay fabric, the controller advertises it on behalf of the vRouter (installed in each compute node) and adds the correct reachability information. The same idea applies in the other direction: when the controller receives a prefix matching the route target of a virtual service, it makes sure the route is installed inside the vRouter so it becomes available to the instances.

The forwarding plane is built by leveraging overlay tunnels. Contrail can use MPLS over GRE/UDP and VXLAN as overlay technologies to forward packets to and from network services. Tunnels are built automatically whenever a packet needs to move back and forth.

On the other hand, Cisco NSO is a service orchestrator that leverages YANG modeling to describe services. You develop a service in YANG, NSO translates the service components into device-model configuration (also based on YANG), and the result is pushed to the devices through a NED (Network Element Driver).


Needless to say, both solutions provide robust northbound API access for managing their components.

The idea behind both solutions is to provide as much abstraction as possible: creating a service model with Cisco NSO, or hiding the overlay/underlay communication details with Juniper Contrail.

So why not get the best of both worlds and integrate them together?

Physical Lab topology


The lab topology consists of a standard CLOS architecture connecting Red Hat OpenStack nodes (RHOSP10, shipped with one controller and two compute nodes) and Juniper Contrail networking (Controller, Analytics, and Analytics DB, each on a standalone server). Contrail is already integrated with OpenStack, and a vRouter installed on each compute node is responsible for all forwarding, while the Cisco orchestrator sits on another leaf. Finally, the POD gateways are a Juniper MX (not used in this lab) and a Cisco CSR1000v, both connected to the spines. All nodes can communicate with each other through the fabric and over the Internal API subnet. Additionally, the controller establishes a BGP session with both gateways. Again, think of the Contrail controller as a route reflector that reflects routes between the vRouters and the POD gateways.



Normally, Contrail waits for an external trigger to create a network, add a route target to it, and then advertise it to the outside world. We will fire this call from Cisco NSO. Not only that, we will also use NSO to create the additional configuration required on the Cisco gateway to complete the service orchestration: the VRF configuration, adding this VRF under the BGP VPNv4 address family, and finally establishing a GRE tunnel back to the vRouter on the compute node that hosts the service VM. Recall that the service prefix itself is not present in the routing tables of any intermediate node between the gateway and the vRouter, so those nodes would drop the packet. We have to add GRE headers to the packets between the gateway and the vRouter in order to reach the service (VRF-Lite with GRE).


This setup is suitable for datacenter environments that usually don't run MPLS in the core network.

Cisco Gateway base configuration

In the Cisco base configuration, I'm peering with the controller in AS 64512 through MP-BGP. Then I apply a route map to each prefix received from that peer, setting the next-hop reachability to the MGRE profile. This profile is configured to establish a tunnel whose source is GigabitEthernet1 and whose destination is the prefix's next hop.

interface Loopback0
 ip address
interface Loopback1
 ip address

interface Tunnel102
 ip address
 tunnel source Loopback0
 tunnel destination

interface GigabitEthernet1
 ip address
 negotiation auto
interface GigabitEthernet2
 no ip address
 negotiation auto

l3vpn encapsulation ip MGRE
 transport ipv4 source GigabitEthernet1

router bgp 64512
 bgp router-id
 bgp log-neighbor-changes
 neighbor remote-as 64512
 neighbor update-source GigabitEthernet1
 address-family ipv4
 no neighbor activate
 default-information originate
 address-family vpnv4
 neighbor activate
 neighbor send-community extended
 neighbor route-map SELECT_UPDATE_FOR_L3VPN in
route-map SELECT_UPDATE_FOR_L3VPN permit 10
 set ip next-hop encapsulate l3vpn MGRE

Notice also that I have a couple of loopbacks used as source and destination for route leaking over the tunnels created between the VRF and the GRT (Global Routing Table). Tunnel102 will always live in the GRT, while the tunnel created through NSO will live in the VRF.


NSO Configuration

Most of the configuration will be on the NSO side, since Juniper Contrail is already integrated and configured with Red Hat OpenStack. We will create a service model that contains YANG and Python: YANG is responsible for modeling the service, while Python sends API requests to Contrail to create the desired network, with the help of the Contrail vnc_api package. The format for sending the request is as follows:


The vn_subnet and vn_ipaddr values will be passed from the user to NSO. Similarly, if you want to attach a route target to the created network, you have to update the network and add a new property (the route target, in this case).
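Since the snippet in the original post is a screenshot, here is a minimal sketch of the request body such a call carries. It only builds the JSON for Contrail's REST config API; the helper name build_vn_payload and the default-domain paths are illustrative assumptions, not the exact code used in this service:

```python
import json

def build_vn_payload(project, vn_name, vn_subnet, prefix_len):
    """Build the JSON body for creating a virtual network through the
    Contrail config API (illustrative field names, not verified code)."""
    return {
        "virtual-network": {
            "parent_type": "project",
            "fq_name": ["default-domain", project, vn_name],
            # Reference the default IPAM and attach the requested subnet to it
            "network_ipam_refs": [{
                "to": ["default-domain", "default-project", "default-network-ipam"],
                "attr": {"ipam_subnets": [
                    {"subnet": {"ip_prefix": vn_subnet, "ip_prefix_len": prefix_len}}
                ]},
            }],
        }
    }

# Serialized body that would be POSTed to the Contrail API server
body = json.dumps(build_vn_payload("admin", "nso-demo-vn", "10.10.10.0", 24))
```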


For more details about the Contrail API, please check this link.

Moving forward, we will design the YANG part by modeling the information required to create the VN (Virtual Network). For our service we need the network name (modeled as vn_name), the IP address (vn_ipaddr, of type inet:ipv4-address), and the network subnet (vn_subnet). We also need the route target, the GRE IP address, the tunnel number, and finally which device will receive the configuration (Juniper MX or Cisco CSR). The final YANG model is as follows (I chose not to post it in the blog body, as it would not be formatted correctly).


Notice that we didn't define any device-specific configuration; this is just modeling the service. We can also set conditions on the values. For example, we're restricting the GRE tunnel number to specific ranges (1 to 199 or 1300 to max only).
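As an illustration, that range condition can be expressed with a YANG leaf along these lines (the leaf name follows the model described above; the rest is a sketch, not the exact model):

```yang
leaf gre_tunnel_number {
  description "Tunnel interface number to create on the gateway";
  type uint16 {
    range "1..199 | 1300..max";
  }
}
```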

Next we will define a template for the Cisco configuration in XML format. NSO will populate this template with the values obtained through the YANG model and push the configuration to the router.

You can check the XML code for this service here.


The final piece of the NSO configuration is the Python code that glues everything together. It is responsible for calling the Contrail API to create a network and attach a route target to it, then populating the XML template with the YANG values, and finally communicating with the CSR device over SSH to apply the configuration.


I use a small hack (with the help of the netaddr module) to convert the subnet mask from dotted notation (x.x.x.x) to CIDR (/y). Contrail accepts the network subnet in CIDR form, while the user might enter it in dotted format, so the Python script takes care of the conversion.
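A dependency-free sketch of that conversion simply counts the set bits in the mask (mask_to_prefixlen is a hypothetical helper name; netaddr does the same job for the script):

```python
def mask_to_prefixlen(mask):
    """Convert a dotted-decimal mask, e.g. '255.255.255.0', to a CIDR prefix length (24)."""
    return sum(bin(int(octet)).count("1") for octet in mask.split("."))

# Contrail wants "10.10.10.0/24"; the user may have typed "255.255.255.0"
cidr = "10.10.10.0/{}".format(mask_to_prefixlen("255.255.255.0"))
```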



Now, the following video demos the solution in detail (mind the notes written in the Cisco terminal and the OpenStack shell session, and watch in HD so you don't miss the details).

Some Packet Captures

Let's look at packet captures from both the Contrail controller and a compute node. First, notice the BGP OPEN messages exchanged between the controller and the gateway to agree on capabilities.


Next, the controller advertises the reachability information for the created subnet, such as the next hop (so the router knows where the dynamic GRE tunnel should terminate), the route target, and the route distinguisher.


The control plane is now built and populated with all the information needed to start forwarding.

Let's see the forwarding plane on the compute node.


Yup, as expected: the outer header carries the tunnel endpoints (the gateway and Compute-1 IP addresses), then comes the MPLSoGRE header, and finally the inner IP addresses (the service addresses).


Wrap up

Running your services in a virtual environment doesn't mean you have to sacrifice your ability to control the traffic, nor that you have to design a complex internetworking solution between nodes. The key is to understand where each component stands and how to integrate them together.

I hope this has been informative for you, and I'd like to thank you for reading.




Exploding Juniper Devices with NAPALM

In this multi-post series, we will deep dive into Juniper network automation: how to automate both configuration and operations for Juniper devices using the different tools available, such as PyEZ, NAPALM, and Ansible.

If you missed the first part, Building Basic Configuration using Jinja2 Template, please read it first to understand the network topology we will work on and to generate the initial configuration.

Intro to NAPALM

In this part, we will explore a Python library called NAPALM, short for Network Automation and Programmability Abstraction Layer with Multi-vendor support. It's a vendor-neutral, cross-platform open source project that provides a unified API to network devices.

Multi-Vendor?! How?

NAPALM connects to network devices over SSH or through a vendor API (an API is of course much easier to drive and parse), executes commands directly on the device CLI, and parses the output using TextFSM templates and regular expressions, returning the result in a structured format (lists and dictionaries). You can therefore use the same NAPALM getter methods (get_bgp_neighbors, get_interfaces, get_mac_address_table, etc.) to connect to devices from different vendors and get the output without even knowing the vendor's commands.

Moreover, since the returned output is structured, you can easily parse it and extract the data you need. This gives you unified access to data from all supported vendors with a few lines of code.

Our Use Cases

I'll implement three use cases with NAPALM. The first is to get the interfaces' TX/RX errors and discards and print them in tabular format.


Then I want to get summarized information for a specific route, like the next hop, outgoing interface, and the protocol that advertised it.


Finally, I want to generate a compliance report for the running configuration.

Note: I’ll upload all scripts to my GitHub repo

"Talk is cheap. Show me the code." (Linus Torvalds)

NAPALM Installation

Step 1

We will start by installing the napalm module with pip, just like we did before with Netmiko:

pip install napalm


Step 2

Then we will import get_network_driver from the installed napalm module. This function is initialized with the vendor name as input, so NAPALM can prepare the proper configuration for each vendor and use the correct API.

from napalm import get_network_driver
junos_driver = get_network_driver("junos")

Step 3

Finally, we provide the username and password for the device we want to connect to:

mx_router = junos_driver(hostname=ip, username="root", password="…")
mx_router.open()

At this moment, NAPALM opens a session to the router and we can run multiple getters to return data. Let's try something simple like get_facts(). This method provides general information about the device, like the hostname, number of interfaces, running version, and so on. I'll create a for loop to iterate over all devices in the topology and get the same information.


Note that I'm checking that the device is reachable and alive before getting data from it. The is_alive() function returns True or False, so I can use it inside an if statement.


Note that the output is structured and is actually the result of many commands and API calls executed on the Juniper device (like "show interfaces terse", "show system uptime", "show chassis hardware detail"). This lets you focus on development, not on finding and parsing the data returned from the device.

Use Case 1: Getting interfaces with errors

OK, now let's implement our first use case. As mentioned, I want to get the Rx/Tx errors per interface in tabular format. This could be combined with scheduled jobs to generate nice-looking reports, or even to trigger an event like sending an email or orchestrating additional configuration.

I'll use an additional module called "prettytable" that takes care of the table formatting: headers, rows, spacing.

First, I will ask the user to enter the MX IP address as an argument to my script.


Then I will create two tables with headers using the "prettytable" module: one to hold the Rx/Tx errors and a second for the Rx/Tx discards (you can also create tables for unicast, multicast, etc.).


Finally, I connect to the device, query the information, select just the data I want from the returned output, and populate the tables with it. I'm using get_interfaces_counters() to get the needed data, plus get_interfaces() to get the MAC address of each interface.
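The selection step can be sketched against canned data. The dictionary below mimics the shape of NAPALM's get_interfaces_counters() return value (a dict keyed by interface name), and the row building is what happens before the rows are handed to prettytable; the device call itself is omitted:

```python
# Sample data in the shape returned by get_interfaces_counters()
counters = {
    "ge-0/0/0": {"rx_errors": 0, "tx_errors": 2, "rx_discards": 1, "tx_discards": 0},
    "ge-0/0/1": {"rx_errors": 5, "tx_errors": 0, "rx_discards": 0, "tx_discards": 3},
}

# One row per interface for the errors table, another for the discards table
error_rows = [(name, c["rx_errors"], c["tx_errors"]) for name, c in sorted(counters.items())]
discard_rows = [(name, c["rx_discards"], c["tx_discards"]) for name, c in sorted(counters.items())]

for name, rx, tx in error_rows:
    print("{:<10} {:>9} {:>9}".format(name, rx, tx))
```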




Use Case 2: Searching the inet.0 Routing Table

The second use case is to parse the device routing table and search for specific route attributes. This is useful when you're troubleshooting a large network and want a summarized view of a route. You can also use it to visualize the control plane of the routing table (though that requires additional modules like matplotlib and networkx).

Anyway, we just need to print the next hop, protocol, and outgoing interface for the provided route. I'll use the same script as above, but this time with another method called get_route_to().

First, I will add another argument for the route.


Then I will parse the returned output and search only for the needed attributes.


Two things are worth mentioning. First, I wrapped the code above in a try..except block to catch the case where the route does not exist on the target device; instead of printing an ugly exception, I print a custom message. Second, I chose the first learned path ([0]). You could enhance the script by getting the number of route entries and iterating over them, but for the sake of brevity I just chose the first occurrence.
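Against sample data shaped like get_route_to()'s return value (a dict mapping the prefix to a list of route entries), the parsing and both safeguards look roughly like this; the prefix and attribute values are placeholders:

```python
# Sample data in the shape returned by get_route_to("10.0.0.0/24")
routes = {
    "10.0.0.0/24": [
        {"protocol": "OSPF", "next_hop": "192.168.1.1",
         "outgoing_interface": "ge-0/0/1.0", "current_active": True},
    ]
}

try:
    # Take the first learned path only ([0]); a fuller script would loop over all entries
    first = routes["10.0.0.0/24"][0]
    summary = (first["protocol"], first["next_hop"], first["outgoing_interface"])
except (KeyError, IndexError):
    # Route does not exist on the target device: print a custom message instead
    summary = None
```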


Use Case 3: Compliance Reporting

Our final use case is quite interesting, and it's related to auditing and compliance. Assume I want to make sure specific configuration exists on all my network devices, such as TACACS servers, default routes, and SNMP communities, or that all my devices are running a specific vendor OS version, and so on.

NAPALM has a cool feature that lets you compare the returned information (operational or configuration) against a YAML template; any deviation from the template is flagged in the final report. You can check and compare ANY value returned from the device, whether it's IP addresses, interface counters, BGP peer status, OS version, etc.

Moreover, you can check whether a specific item simply exists, regardless of its value. For example, you can check that a router ID is configured, a local ASN is set, or IPv4 is enabled on a specific interface. This is exactly what we will do next.

First, I will define my YAML template with all the values that need to be verified against the values on the device.
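Since the template is shown as a screenshot, here is what such a NAPALM validation file can look like. This is a sketch following NAPALM's validation format; the IP addresses, router ID, and peer address are placeholders, not values from the lab:

```yaml
---
- get_facts:
    os_version: 14.1R1.10
- get_interfaces_ip:
    ge-0/0/0.0:
      ipv4:
        10.10.10.1:
          prefix_length: 24
- get_bgp_neighbors:
    global:
      router_id: 10.10.10.1
      peers:
        10.10.10.2:
          is_up: true
```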


In the above template, I'm verifying four things:

1- The JunOS version is "14.1R1.10"
2- There's an IP address configured under ge-0/0/0.0
3- The device has an active BGP connection and peering
4- A router ID is configured under bgp_neighbors

So I’m basically checking both operational and configuration data.

Then I will use the compliance_report() function, providing the YAML file created in the previous step as the thing to check against.


The result will also be structured and will indicate which parts of the template are compliant and which are not.




Network automation is evolving rapidly! Previously we just had the Pexpect and Paramiko modules, which simply establish a connection and then expect and read_until specific output. Now we have solid modules that can handle different vendors and return structured output, letting the network engineer/developer focus on building robust automation solutions and opening the door for other modules that automate the daily operational tasks of a network engineer.

I hope this has been informative for you, and I'd like to thank you for reading.


Juniper Network Automation using Python–Part 2

In this multi-post series, we will deep dive into Juniper network automation and how to automate both configuration and operations for Juniper nodes.

If you missed the first part, Building Basic Configuration using Jinja2 Template, please read it first to understand the network topology we will work on and to generate the initial configuration.

In this part we will cover an excellent and powerful Python library from Juniper called PyEZ (some folks at Juniper say it's an abbreviation for "Python Easy"!). PyEZ provides an abstraction layer built on top of the NETCONF protocol, so you need to enable NETCONF over SSH on the Juniper device before you can pull information from it.

So why is it easy?

Assume you need to get a "show version" from a specific Juniper device. If you used raw Python (i.e., Paramiko for SSH, then opening a socket for receiving data, then parsing the returned data, etc.), you would need around 48 lines to get only the device version. Using PyEZ shrinks the code to only 7 lines, and it gathers more data!


Under the hood, PyEZ uses raw Python and some additional libraries like lxml for sending and receiving messages to the Juniper device, again over the NETCONF transport protocol.




You just need to open the command line (Windows cmd or a Linux shell) and write:

pip install junos-eznc


In the background, pip will install the additional packages required by Juniper PyEZ.

Operation using PyEZ

The sample script below iterates over all devices and gathers specific information from them.


First, we import the installed Python library.

Second, we define the management IP addresses of all the Juniper devices in our network and provide the credentials.

Third, we iterate over the devices and get the facts for each device. By default, the PyEZ library gathers basic information about the device and stores it in a Python dictionary, which can be easily accessed through the facts attribute.

The result of executing the above script is as follows:


What facts are available to be collected?

All of the data below can be retrieved from the device: for example, the hostname, model, version, current routing engine, whether it's virtual or physical, and so on.

{'2RE': None,
  'HOME': '/root',
  'RE0': None,
  'RE1': None,
  'RE_hw_mi': None,
  'current_re': ['re0'],
  'domain': None,
  'fqdn': 'PE1-Region1',
  'hostname': 'PE1-Region1',
  'hostname_info': {'re0': 'PE1-Region1'},
  'ifd_style': 'CLASSIC',
  'junos_info': {'re0': {'object': junos.version_info(major=(14, 1), type=R, minor=1, build=10),
                         'text': '14.1R1.10'}},
  'master': None,
  'model': 'VMX',
  'model_info': {'re0': 'VMX'},
  'personality': 'MX',
  're_info': None,
  're_master': None,
  'serialnumber': None,
  'srx_cluster': None,
  'srx_cluster_id': None,
  'srx_cluster_redundancy_group': None,
  'switch_style': 'BRIDGE_DOMAIN',
  'vc_capable': False,
  'vc_fabric': None,
  'vc_master': None,
  'vc_mode': None,
  'version': '14.1R1.10',
  'version_RE0': '14.1R1.10',
  'version_RE1': None,
  'version_info': junos.version_info(major=(14, 1), type=R, minor=1, build=10),
  'virtual': True}
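Since facts is a plain dictionary, pulling fields out of it is ordinary Python. For example, using a subset of the output above:

```python
# A subset of the facts gathered above; with a live session this
# would be dev.facts["hostname"], dev.facts["version"], etc.
facts = {
    "hostname": "PE1-Region1",
    "model": "VMX",
    "version": "14.1R1.10",
    "virtual": True,
}

summary = "{hostname} ({model}, JunOS {version})".format(**facts)
print(summary)
```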

In the background, PyEZ sends a lot of RPCs (Remote Procedure Calls) to the device, each requesting a specific piece of information, and finally collects all of it into the nice dictionary above. So cool.

If you need to see the exact RPC call sent on the wire to the Juniper device, you can get it from the JunOS CLI directly by running the command and displaying its XML RPC request.
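For example, piping the command through `| display xml rpc` prints the RPC that corresponds to it (a trimmed sketch; the exact xmlns attributes vary by release):

```
root@PE1-Region1> show route summary | display xml rpc
<rpc-reply>
    <rpc>
        <get-route-summary-information>
        </get-route-summary-information>
    </rpc>
</rpc-reply>
```

PyEZ maps the RPC name to a method by replacing the dashes with underscores, so the same call from Python is dev.rpc.get_route_summary_information().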


You can instruct PyEZ to send the XML request and return the reply in XML as well (in this case you will need to parse the returned data yourself). This is useful if you're designing some sort of management system and need to monitor the output of a specific command. The RPC provides structured data that can be easily parsed by any XML parser available in Python.

For example, I need to get the route summary from the devices, and I know the exact RPC call, so I wrote the script below to get the data and convert it to a string for pretty-printing. Again, you can write any parser to extract just the specific information you need.





PyEZ uses Python metaprogramming to generate any kind of RPC (it would be really hard to hard-code every RPC, so each one is generated only when requested). Additionally, you can use XPath expressions to match specific data in the returned response.

Configuration using PyEZ

PyEZ provides a Config class which simplifies loading and committing configuration to the device, and it integrates with the Jinja2 templating engine we used before to generate configuration from a template.

PyEZ also offers utilities for comparing configurations, rolling back configuration changes, and locking or unlocking the configuration database.

The load() method can be used to load a configuration snippet, or a full configuration, into the device's candidate configuration.

The configuration can be specified in text (aka “curly brace”), set, or XML syntax. Alternatively, the configuration may be specified as an lxml.etree.Element object

I will define a simple configuration file in "set" style to change the hostname of a specific host, and use the Config class to push the configuration.
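The "set"-style file itself can be as small as one line (the hostname value here is just an example):

```
set system host-name PE1-Region1-new
```

Loading this with the Config class's load(path=..., format="set") places it in the candidate configuration only.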


Please note this only changes the candidate configuration; nothing is committed until you request it.


There are also commit parameters available in PyEZ, exactly like the CLI. For example, you can use comment, commit_check, rollback, commit_sync, etc.

Other Utilities inside PyEZ

There are a lot of classes inside PyEZ; I'll mention some of them briefly in case you need to deep dive into them.

The FS class in the jnpr.junos.utils.fs module provides common commands for accessing the filesystem on the Junos device.

The jnpr.junos.utils.start_shell module provides a StartShell class that allows an SSH connection to be initiated to a Junos device

The SW class in the jnpr.junos.utils.sw module provides a set of methods for upgrading the Junos software on a device as well as rebooting or powering off the device.

You can check and visualize the full architecture of PyEZ in this post, where I generate a full visualization of every package and utility inside PyEZ.

Wrapping Up

In this post we walked through an excellent Python library used to communicate with Juniper devices. The real power of PyEZ (besides its usability) is that it's provided by the vendor itself, so you can be confident the code is compatible with the device and won't break anything on it.

In the next post of this series, we will walk through another Python library called NAPALM.

I hope this has been informative for you, and I'd like to thank you for reading.

Juniper Network Automation using Python–Part 1

In this multi-post series, we will deep dive into Juniper network automation and how to automate both configuration and operations for Juniper nodes. I'll divide the series into four main parts and store the configuration for each part in my GitHub account under the JuniperAutomation repo.


1- Building the lab, provisioning basic interface, OSPF, BGP, and MPLS configuration using Jinja2 templating, and verification using Netmiko

2- Using the Juniper PyEZ and NAPALM modules to get useful data from the nodes

3- Using NETCONF and Ansible with Juniper

4- TBD (your suggestions are highly welcome, though)

Using Python Multi-Processing for Networking–Part 1

Python has become the de facto standard for network automation nowadays. Many network engineers already use it on a daily basis to automate networking tasks, from configuration to operations to troubleshooting network problems. In this post, we will visit one of the more advanced topics in Python, scratch the surface of its multi-processing nature, and learn how to use it to accelerate script execution time.

First, we need to understand: how does a computer execute a Python script?


1- When you type #python <your_awesome_automation_script>.py in the shell, Python (which runs as a process) instructs your computer's processor to schedule a thread (the smallest unit of processing).


2- The allocated thread starts to execute your script, line by line. A thread can do anything: interacting with I/O devices, connecting to a router, printing output, doing math, anything.

3- Once the script hits EOF (End of File), the thread is terminated and returned to the free pool to be used by other processes.

In Linux, you can use #strace -p <pid> to trace a specific thread's execution.

The more threads you assign to a script (as permitted by your processor and OS), the faster it runs. Threads are sometimes called "workers" or "slaves".

I have a feeling you already have that little idea in your head: why wouldn't we assign a LOT of threads from all cores to the Python script in order to get the job done quickly?

The problem with assigning a lot of threads to one process without special handling is what's called a "race condition". The operating system allocates memory to your process (in this case the Python process) to be used at runtime and accessed by all threads, ALL OF THEM AT THE SAME TIME. Now imagine one of those threads reading a piece of data before it's actually written by another thread. You don't know the order in which the threads will attempt to access the shared data, and that is a race condition.


You can read about race conditions and how to avoid them in this link.

One of the available solutions is to make each thread acquire a lock. In fact, Python is optimized by default to run as a single-threaded process and has something called the GIL (Global Interpreter Lock). The GIL does not allow multiple threads to execute Python code at the same time.

But rather than multiple threads, why don't we use multiple processes?!

The beauty of multi-processing over multi-threading is that you don't need to worry about data corruption from sharing data among threads. Each spawned process gets its own allocated memory that won't be accessed by other Python processes. This allows us to execute parallel tasks at the same time!


Also, from Python's point of view, each process has its own GIL, so there's no resource conflict or race condition here.

Enough Talk, Let’s jump to the code.

First, you need to import the module into your Python script:

import multiprocessing as mp

Second, you need to wrap your code in a Python function; this allows a process to "target" this function and run it as a parallel execution.

Let's say we have code that connects to routers and executes a command on them using the Netmiko library, and we want to connect to all devices in parallel. This is a sample "serial" version:


We need to spawn a number of processes equal to the number of devices (one process connects to one device and executes the command) and set the target of each process to the function we wrapped around our previous code. Here is an example:


Finally, we need to launch the processes.


Behind the scenes, the main thread executing the main script "forks" a number of processes equal to the number of devices, each targeting the wrapped function, so "show arp" executes on all devices at the same time and each output is stored in a variable without the processes affecting each other. Brilliant!

This is a sample view of the processes inside Python when you execute the full code.


One final thing needs to be done: "join" the "forked" processes back to the main thread/trunk so the program finishes execution smoothly.
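Putting the steps above together, the parallel skeleton looks like the sketch below. The Netmiko connection is replaced by a stub worker (get_arp is a hypothetical name and the IPs are placeholders), and multiprocessing.Pool is used, which performs the same fork, start, and join dance in one call:

```python
import multiprocessing as mp

def get_arp(ip):
    # In the real script this is where Netmiko would connect to `ip`
    # and run "show arp"; here we just fabricate the output.
    return (ip, "show arp output from {}".format(ip))

if __name__ == "__main__":
    devices = ["10.1.1.1", "10.1.1.2", "10.1.1.3"]
    # One worker process per device; map() starts the processes and
    # joins them back to the main process for us.
    with mp.Pool(processes=len(devices)) as pool:
        outputs = dict(pool.map(get_arp, devices))
    for ip, out in sorted(outputs.items()):
        print(ip, "->", out)
```

The post drives mp.Process(), start(), and join() by hand, which gives the same result; Pool is just the higher-level wrapper around that flow.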



I have a script used to push initial configuration to my lab devices (9 routers). I re-wrote it using the procedure above and measured the time. Here are the findings (the value we're looking for is the "real" row).

With Multi-Processing


Without Multi-Processing


You can spot the difference: it's 6 times faster than the serial execution, and the gap will grow as you add more devices.

Wrapping Up

In one simple phrase:

Don’t just Automate the task,  make it fast!

In the next post, we will explore some additional flags of the multiprocessing library.

Finally, I hope this has been informative for you, and I'd like to thank you for reading.

BGP Visualization Using Python


During my network studies, I always admired the way BGP works and operates: the black magic that handles how packets exit one country (Autonomous System, ASN for short!) and enter another without any "boarding pass" or "visa". Not just that, but BGP strives to make the travel time and path the shortest among the available routes to the destination, using the path attributes exchanged between those ASNs. Very clever, robust, and old!

But as the internet routing table grows and the number of assigned ASNs increases each day, it becomes harder to visualize the interconnections between ASNs. Every day a new ASN connects to a bunch of other ASNs, and it's really hard to trace those connections with just the raw data provided by looking-glass servers or RIRs.

So I thought it was time to involve Python to solve this problem. Basically, I tried to build a Python module that answers the questions below:

1- How are ASNs connected to each other, given a list of ASNs?

2- How are ASNs connected within one country?

3- Which ASNs are considered service providers or IXPs (i.e., have more than 15 BGP peerings with other ASNs)?

4- Which ASNs are considered upstreams of a specific ASN? That helps in identifying the ASN gateways of a country, for example.

5- Which ASNs are considered downstreams (customers) of a specific service provider (operator)? And how are they distributed compared to other service providers?

6- Finally, I need all of that in one picture, visually! You know the old saying,

A picture is worth a thousand words

and those thousand words are stored in RIRs like RIPE and AFRINIC, which provide useful BGP data publicly, but in raw format.

I started developing a Python module to address the above questions, and in two days I had a promising prototype able to visualize the first country; later I added the capability to visualize an arbitrary set of ASNs. The final stage was adding some console logs for troubleshooting and publishing the package to PyPI. I called it bgp_visualize. Yeah, couldn't find a better name.

I tried to design bgp_visualize to work with the minimum possible set of data. For example, to visualize BGP in a specific country, you just need to provide the country code. However, you can customize the look of the generated graph by providing a few parameters like node_color, node_size, desired_background, and so on.

Working on Module

First you need to install it (and install Python 2.7, of course, if you don't already have it on your machine).

Using CLI

pip install bgp_visualize

Using GUI (Pycharm IDE)

Open Settings | Project Interpreter | Add New

Then search for the bgp_visualize Python module and install it.

Then run the below code to visualize a set of ASNs (you can also run it from the native Python IDLE if you’re using Windows):

from bgp_visualize import bgp_visualize_asn

ASNs = bgp_visualize_asn.bgp_visualize(asns=[8452, 24835], dark=True)

The resulting graph will look something like the image below. To visualize all autonomous systems in a specific country, you provide the country code to the object instead:

from bgp_visualize import bgp_visualize_asn

country = bgp_visualize_asn.bgp_visualize(country='sa')

There are a lot of screenshots of different BGP graphs available on my GitHub page, so please check them out! You can also send me your generated graph and I’ll add it to the GitHub page.

Color Map

The bgp_visualize module uses different colors to represent each autonomous system’s role in the graph. Below is the list of roles and what each color means in the generated graph.

- Service Provider or IXP: colored with one of the dedicated highlight colors.

- Upstream of a specific ASN: colored blue.

- Downstream (customer): its own dedicated color.

- Transit or not defined: its own dedicated color.

(The exact color swatches are shown in the generated graphs themselves.)
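The role rules above can be sketched in plain Python, using the more-than-15-peerings threshold from earlier. The function names and role strings are my own illustration, not bgp_visualize internals:

```python
# Simplified sketch of the role classification described above; names are
# illustrative, not bgp_visualize's actual internals.
PEER_THRESHOLD = 15  # >15 BGP peerings => Service Provider or IXP

def classify(asn, target, peering):
    """Classify `asn` relative to `target`.

    `peering` maps each ASN to the set of ASNs it peers with.
    """
    peers = peering.get(asn, set())
    if len(peers) > PEER_THRESHOLD:
        # A provider directly peering with the target is its upstream (blue).
        return "upstream" if target in peers else "service_provider_or_ixp"
    if target in peers:
        return "downstream"  # a customer of the target provider
    return "transit_or_undefined"
```

With a classification like this, picking a node color is just a lookup from role to color.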

Wrapping Up

I really enjoyed working on this package! You can use it to troubleshoot and visualize any ASN in your network or in your country, understand the upstreams and downstreams of each one, and easily identify the service providers. All in one graph!

For me, I had the idea a long time ago to visualize every ASN, every connection, and every prefix on the planet and draw them in a nice, presentable way. I think this package is a good start. That’s my dream!

Finally, I hope this has been informative for you, and I’d like to thank you for reading.

Visualizing Python Module for Network Libraries (Netmiko and PyEZ)

Ever wondered how a custom Python module or class is manufactured? How does the developer write the Python code and glue it together to create this nice and amazing “x” module? What’s going on under the hood?

Documentation is a good start of course, but we all know that it’s not usually updated with every new step or detail that the developer adds.

For Example,

We all know the powerful Netmiko library created and maintained by Kirk Byers, which utilizes another popular SSH library called Paramiko. But we don’t usually dig into the details of how the classes are connected together; we just write the below code to execute a specific command on a Cisco IOS platform:

from netmiko import ConnectHandler

# Example connection details (replace with your device's values)
device = {
    "device_type": "cisco_ios",
    "host": "192.168.1.1",
    "username": "admin",
    "password": "password",
}

net_connect = ConnectHandler(**device)
output = net_connect.send_command("show arp")

And boom, it works like a charm.

If you need to understand the magic behind the “self.charm” that Netmiko uses to return the result, please follow the below steps (requires the PyCharm IDE).

Netmiko Module

Step 1:

Open the Netmiko folder inside the Python library location (usually C:\Python27\Lib\site-packages) in the PyCharm IDE.


Step 2:

Right-click on the Python module and choose Diagrams | Show Diagram.


It will take some time to generate the diagram depending on your Java heap (-Xmx) settings (I usually assign 1024 MB for it to work properly).

Step 3:

Now save the resulting image to your desktop.



And here it is (click on it to enlarge and zoom as much as you can).


Understanding the resulting UML graph

Based on the resulting graph, you can see that Netmiko supports a lot of vendors like HP Comware, Enterasys, Cisco ASA, Force10, Arista, Avaya, etc., and all of these classes inherit from the parent netmiko.cisco_base_connection.CiscoSSHConnection class (I think because they use the same SSH style as Cisco), which in turn inherits from another big parent class called BaseConnection.

You can also see that Juniper has its own class that connects directly to the big parent.

And finally we reach the parent of all parents in Python: the object class (remember, everything in Python is an object in the end).
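You can see that chain programmatically too. Here is a toy hierarchy mirroring the class names above (a simplified stand-in, not Netmiko’s real code), inspected via Python’s method resolution order:

```python
# Toy stand-ins mirroring the inheritance described above (not Netmiko code).
class BaseConnection:
    """The big parent: shared SSH channel handling."""

class CiscoSSHConnection(BaseConnection):
    """Cisco-style SSH behaviour that many vendor classes reuse."""

class CiscoIosSSH(CiscoSSHConnection):
    """Cisco IOS specifics."""

# The method resolution order ends at object, the parent of all parents.
mro = [cls.__name__ for cls in CiscoIosSSH.__mro__]
print(mro)  # ['CiscoIosSSH', 'CiscoSSHConnection', 'BaseConnection', 'object']
```

The list printed by `__mro__` is exactly the inheritance path the PyCharm diagram draws.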

You can also find a lot of other interesting things, like the SCP transfer class and the SNMP class, and for each one you will find the methods and parameters used to initialize that class.

So the ConnectHandler function is primarily used to check whether the device_type exists in the vendor classes above and, based on that, use the corresponding SSH class. A well-designed module, really. Thanks, Kirk Byers!
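That dispatch idea can be sketched like this. It is a minimal illustration of the pattern with bare stub classes, not Netmiko’s actual implementation:

```python
# Minimal sketch of the device_type dispatch pattern (stub classes,
# not Netmiko's actual implementation).
class CiscoIosSSH:
    def __init__(self, **kwargs):
        self.kwargs = kwargs

class JuniperSSH:
    def __init__(self, **kwargs):
        self.kwargs = kwargs

# Map each supported device_type string to the class implementing it.
CLASS_MAPPER = {
    "cisco_ios": CiscoIosSSH,
    "juniper": JuniperSSH,
}

def connect_handler(**device):
    """Look up the class for device_type and instantiate it."""
    device_type = device.pop("device_type")
    try:
        connection_class = CLASS_MAPPER[device_type]
    except KeyError:
        raise ValueError("Unsupported device_type: %s" % device_type)
    return connection_class(**device)
```

An unsupported device_type fails fast with an error, which is essentially the behavior you see when you pass a bad device_type to the real ConnectHandler.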



Juniper PyEZ Module

Applying the same procedure to the Juniper Python wrapper (PyEZ), which is used to connect to Junos-based platforms, we get another useful diagram as below (again, click to enlarge).


You can also see the support for NETCONF, serial, and Telnet connections, along with the Views and Tables classes that Juniper uses to define the facts about the device. Very clever.

That’s all. Please share any other interesting findings in Python modules!

I hope this has been informative for you, and I’d like to thank you for reading.