Exploding Juniper Devices with NAPALM

In this multi-part series, we will take a deep dive into Juniper network automation and how to automate both configuration and operations for Juniper devices using the different tools available, such as PyEZ, NAPALM, and Ansible.

If you missed the first part, Building Basic Configuration using Jinja2 Template, please read it first to understand the network topology that we will work on and how the initial configuration is generated.

Intro to NAPALM

In this part, we will explore a Python library called NAPALM, short for Network Automation and Programmability Abstraction Layer with Multi-vendor support. It's a vendor-neutral, cross-platform, open-source project that provides a unified API to network devices.

Multi-Vendor?! How?

NAPALM connects to network devices over SSH or through the vendor API (an API is, of course, much easier to call and parse), executes commands directly on the device CLI, and then parses the output using TextFSM templates and regular expressions. The final output is returned in a structured format (lists and dictionaries), so you can use the same NAPALM getter methods (get_bgp_neighbors, get_interfaces, get_mac_address_table, etc.) to connect to devices from different vendors and retrieve the data without even knowing the vendor-specific commands.

Moreover, the returned output is structured, so you can easily parse it and extract the data you need. This gives you unified access to data from all supported vendors with just a few lines of code.

Our Use Cases

I'll implement three use cases using NAPALM. The first is to get the interfaces' TX/RX errors and discards and print them in tabular format.


Then I want to get summarized information for a specific route, such as the next hop, outgoing interface, and the protocol that advertised this route.


Finally, I want to generate a compliance report for the running configuration.

Note: I’ll upload all scripts to my GitHub repo

"Talk is cheap. Show me the code." (Linus Torvalds)

NAPALM Installation

Step 1

We will start by installing the napalm module using pip, just like we did before with netmiko:

pip install napalm


Step 2

Then we will import get_network_driver from the installed napalm module. This function should be called with the vendor name as input so that it can prepare the proper settings for each vendor and use the correct API.

from napalm import get_network_driver
junos_driver = get_network_driver("junos")

Step 3

Finally, we should provide the username and password for the device that we want to connect to (here I assume the IP is in a variable ip and the password in password):

mx_router = junos_driver(hostname=ip, username="root", password=password)
mx_router.open()

At this point, NAPALM will open a session to the router and we can run multiple getters to retrieve data. Let's try something simple like get_facts(). This method provides general information about the device, like the hostname, number of interfaces, running version, and so on. I'll create a for loop to iterate over all devices in the topology and get the same information from each.


Note that I'm checking that the device is reachable and alive before getting any data from it. The result of is_alive() tells you whether the session is up (True or False), so I can use it inside an if statement.
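Since the full loop depends on live routers, here is a minimal sketch of pulling the headline fields out of a get_facts() result; the sample dictionary mirrors the shape NAPALM returns, and the hostname and values are hypothetical:

```python
# Sample shaped like NAPALM's get_facts() return value
# (hostname and values here are made up for illustration).
sample_facts = {
    "hostname": "PE1-Region1",
    "model": "VMX",
    "os_version": "14.1R1.10",
    "interface_list": ["ge-0/0/0", "ge-0/0/1", "lo0"],
}

def summarize(facts):
    """Build a one-line summary from a get_facts() dictionary."""
    return "{hostname} ({model}) - JunOS {os_version}, {n} interfaces".format(
        n=len(facts["interface_list"]), **facts)

print(summarize(sample_facts))
```

In the real loop, the dictionary would come from mx_router.get_facts() after the is_alive() check passes.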


Note that the output is structured and is actually the result of many executed commands and API calls to the Juniper device (like "show interfaces terse", "show system uptime", "show chassis hardware detail"). This allows you to focus on development rather than on finding and parsing the data returned from the device.

Use Case 1: Getting interfaces with errors

OK, now let's implement our first use case. As mentioned, I want to get the RX/TX errors per interface in tabular format. This could be combined with scheduled jobs to generate nice-looking reports, or even to trigger an event like sending an email or orchestrating additional configuration.

I'll use an additional module called "prettytable" that takes care of the table formatting: headers, rows, spacing, and so on.

First, I will ask the user to enter the MX IP address as an argument to my script.


Then I will create two tables with headers using the "prettytable" module: one to hold the RX/TX errors and a second for the TX/RX discards (you could also create tables for unicast, multicast, etc.).


Finally, I connect to the device, query the information, select just the data I want from the returned output, and populate the tables with it. I'm using get_interfaces_counters() to get the needed counters and get_interfaces() to get the MAC address of each interface.
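The row-building part can be sketched against a sample shaped like get_interfaces_counters() output (interface names and numbers below are hypothetical); each resulting row is then handed to prettytable:

```python
# Sample shaped like NAPALM's get_interfaces_counters() output.
counters = {
    "ge-0/0/0": {"tx_errors": 0, "rx_errors": 4,
                 "tx_discards": 0, "rx_discards": 1},
    "ge-0/0/1": {"tx_errors": 2, "rx_errors": 0,
                 "tx_discards": 3, "rx_discards": 0},
}

def error_rows(counters):
    """One row per interface: [name, tx_errors, rx_errors].
    Discard rows are built the same way from the *_discards keys."""
    return [[name, c["tx_errors"], c["rx_errors"]]
            for name, c in sorted(counters.items())]

for row in error_rows(counters):
    print(row)   # in the real script: errors_table.add_row(row)
```

In the full script, counters would come from mx_router.get_interfaces_counters() instead of the sample dictionary.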




Use Case 2: Searching the inet.0 Routing Table

The second use case is to parse the device routing table and search for specific route attributes. This is useful when you're troubleshooting a large network and want a summarized view of a route. You can also use it to visualize the control plane of the routing table (but that requires some additional modules like matplotlib and networkx).

Anyway, we just need to print the next hop, protocol, and outgoing interface for the provided route. I'll use the same script as above, but this time with another method called get_route_to().

First, I will add another argument for the route.


Then I will parse the returned output and search for only the needed attributes.


Two things are worth mentioning. First, I wrapped the above code in a "try..except" block to catch the case where the route does not exist on the target device; instead of printing an ugly exception, I print a custom message. Second, I chose the first learned path ("[0]"). You could enhance the script by getting the number of route entries and iterating over them, but for the sake of brevity, I just chose the first occurrence.
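A sketch of that selection logic, run against a sample shaped like get_route_to() output (the prefix, next hop, and interface are hypothetical):

```python
# Sample shaped like NAPALM's get_route_to() output.
routes = {
    "172.20.0.0/24": [
        {"protocol": "OSPF", "next_hop": "10.10.10.2",
         "outgoing_interface": "ge-0/0/1.0", "current_active": True},
    ]
}

def route_summary(routes, prefix):
    """Return (protocol, next_hop, interface) for the first learned path."""
    try:
        first = routes[prefix][0]   # first path only, as in the post
        return (first["protocol"], first["next_hop"],
                first["outgoing_interface"])
    except KeyError:
        return None                  # route does not exist on the device

print(route_summary(routes, "172.20.0.0/24"))
```

In the real script, routes would be the return value of mx_router.get_route_to(destination=route).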


Use Case 3: Compliance Reporting

Our final use case is quite interesting, and it's related to auditing and compliance. Assume I want to make sure a specific configuration exists on all my network devices, such as TACACS servers, default routes, and SNMP communities, or that all my devices are running a specific vendor OS version, and so on.

NAPALM has a cool feature that allows you to compare the returned information (either operational or configuration) against a YAML template; if there's a deviation from the template, it is flagged in the final report. You can check and compare ANY value returned from the device, whether it's IP addresses, interface counters, BGP peer status, OS version, etc.

Moreover, you can check whether a specific item simply exists, regardless of its value. For example, you can check that a router-id is configured, that there is a local ASN, or that IPv4 is enabled on a specific interface. This is exactly what we will do next.

First, I will define my YAML template with all the values that need to be verified and checked against the values on the device.
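As a rough sketch, a validation file covering the version and BGP checks might look like the snippet below; the keys follow NAPALM's validate file format, while the router-id and peer address are hypothetical values for this lab:

```yaml
---
- get_facts:
    os_version: 14.1R1.10

- get_bgp_neighbors:
    global:
      router_id: 10.10.10.1
      peers:
        10.10.10.2:
          is_up: true
```

The file is then fed to the device's compliance_report() call, as shown later in this section.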


In the above template, I'm verifying four things:

1- The JunOS version is "14.1R1.10"

2- There's an IP address configured under ge-0/0/0.0

3- The device has an active BGP connection and peering

4- A router-id is configured under bgp_neighbors

So I’m basically checking both operational and configuration data.

Then I will use the compliance_report() function and provide the YAML file created in the previous step.


The result is also structured and indicates which parts of the template are compliant and which are not.




Network automation is evolving rapidly! Previously we had just the Pexpect and Paramiko modules, which simply establish a connection and then expect and read_until specific output. Now we have solid modules that can handle different vendors and return structured output, letting the network engineer/developer focus on building robust network automation solutions, and opening the door to other modules that automate the network engineer's daily operational tasks.

I hope this has been informative for you, and I'd like to thank you for reading.



Juniper Network Automation using Python–Part 2

In this multi-part series, we will take a deep dive into Juniper network automation and how to automate both configuration and operations for Juniper nodes.

If you missed the first part, Building Basic Configuration using Jinja2 Template, please read it first to understand the network topology that we will work on and how the initial configuration is generated.

In this part we will cover an excellent and powerful Python library from Juniper called PyEZ (some folks at Juniper say it's an abbreviation for Python Easy!). PyEZ provides an abstraction layer built on top of the NETCONF protocol, so you need to enable NETCONF over SSH on the Juniper device before you can retrieve any information.
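Enabling it is a one-liner from Junos configuration mode:

```
set system services netconf ssh
commit
```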

So Why is it Easy?

Assume you need to get "show version" output from a specific Juniper device. If you used raw Python (i.e., paramiko for SSH, then opening a socket to receive data, then parsing the returned data, etc.), you would need around 48 lines just to get the device version. Using PyEZ shrinks the code to only 7 lines, and it gathers more data!


Under the hood, PyEZ uses raw Python plus some additional libraries like lxml for sending and receiving messages to the Juniper device, again over the NETCONF transport protocol.




You just need to open the command line (Windows cmd or a Linux shell) and write:

pip install junos-eznc


In the background, pip will install the additional packages required by Juniper PyEZ.

Operation using PyEZ

The below sample script iterates over all devices and gathers specific information from each of them.


First, we import the installed Python library.

Second, we define the management IP addresses of all the Juniper devices in our network and provide the credentials.

Third, we iterate over the devices and get the facts for each one. By default, the PyEZ library gathers basic information about the device and stores it in a Python dictionary, which can be easily accessed through the facts attribute.

The result of executing the above script will be as follows.


What facts are available to be collected?

All of the below data can be retrieved from the device: for example the hostname, model, version, current routing engine, whether the device is virtual or physical, and so on.

{'2RE': None,
  'HOME': '/root',
  'RE0': None,
  'RE1': None,
  'RE_hw_mi': None,
  'current_re': ['re0'],
  'domain': None,
  'fqdn': 'PE1-Region1',
  'hostname': 'PE1-Region1',
  'hostname_info': {'re0': 'PE1-Region1'},
  'ifd_style': 'CLASSIC',
  'junos_info': {'re0': {'object': junos.version_info(major=(14, 1), type=R, minor=1, build=10),
                         'text': '14.1R1.10'}},
  'master': None,
  'model': 'VMX',
  'model_info': {'re0': 'VMX'},
  'personality': 'MX',
  're_info': None,
  're_master': None,
  'serialnumber': None,
  'srx_cluster': None,
  'srx_cluster_id': None,
  'srx_cluster_redundancy_group': None,
  'switch_style': 'BRIDGE_DOMAIN',
  'vc_capable': False,
  'vc_fabric': None,
  'vc_master': None,
  'vc_mode': None,
  'version': '14.1R1.10',
  'version_RE0': '14.1R1.10',
  'version_RE1': None,
  'version_info': junos.version_info(major=(14, 1), type=R, minor=1, build=10),
  'virtual': True}
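Once dev.facts is in hand, picking values out of it is plain dictionary access; using a subset of the dictionary above:

```python
# Subset of the facts dictionary shown above.
facts = {
    "hostname": "PE1-Region1",
    "model": "VMX",
    "version": "14.1R1.10",
    "virtual": True,
}

# Assemble a one-line device summary from the selected keys.
line = "{hostname} | {model} | {version} | virtual={virtual}".format(**facts)
print(line)
```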

In the background, PyEZ sends a lot of RPCs (Remote Procedure Calls) to the device, each requesting specific information, and finally collects all of the replies into the nice dictionary above. So cool.

If you need to see the exact RPC call sent on the wire to the Juniper device, you can get it from the JunOS CLI directly by writing the command and displaying the XML RPC request.
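For example, piping "show version" through "| display xml rpc" prints the RPC that would be sent (the prompt is hypothetical and the output is trimmed):

```
root@PE1-Region1> show version | display xml rpc
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/14.1R1/junos">
    <rpc>
        <get-software-information>
        </get-software-information>
    </rpc>
</rpc-reply>
```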


You can instruct PyEZ to send the XML request and return the reply in XML as well (but in this case you will need to parse the returned data yourself). This is useful if you're designing some sort of management system and need to monitor the output of a specific command. The RPC provides structured data that can be easily parsed by any XML parser available in Python.

For example, I needed to get the route summary from the devices, and I knew the exact RPC call, so I wrote the below script to get the data and convert it to a string for pretty-printing. But again, you can write any parser to extract just the specific information you need.





PyEZ uses the concept of Python metaprogramming to generate any kind of RPC (it would be really hard to hard-code every RPC, so each one is generated only when requested). Additionally, you can use XPath expressions to match specific data in the returned response.

Configuration using PyEZ

PyEZ provides a "Config" class which simplifies the process of loading and committing configuration to the device, and it integrates with the Jinja2 templating engine that we used before to generate configuration from a template.

PyEZ also offers utilities for comparing configurations, rolling back configuration changes, and locking or unlocking the configuration database.

The load() method can be used to load a configuration snippet, or a full configuration, into the device's candidate configuration.

The configuration can be specified in text (aka "curly brace"), set, or XML syntax. Alternatively, the configuration may be specified as an lxml.etree.Element object.

I will define a simple configuration file in "set" style to change the hostname of a specific host, and use the "Config" class to push the configuration.
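The set-style file can be as small as one line (the new hostname here is hypothetical):

```
set system host-name PE1-Region1-lab
```

Loading it then looks roughly like `cu = Config(dev)` followed by `cu.load(path="hostname.set", format="set")`, with `cu.pdiff()` available to preview the pending change.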


Please note this only changes the candidate configuration; nothing is committed unless you request it.


There are also commit parameters available in PyEZ, exactly like the CLI. For example, you can use comment, commit_check, rollback, commit_sync, etc.

Other Utilities inside PyEZ

There are a lot of classes inside PyEZ still to be covered. I'll mention some of them briefly in case you need to dive deeper into them.

The FS class of the jnpr.junos.utils.fs module provides common commands that access the filesystem on the Junos device.

The jnpr.junos.utils.start_shell module provides a StartShell class that allows an SSH connection to be initiated to a Junos device

The SW class in the jnpr.junos.utils.sw module provides a set of methods for upgrading the Junos software on a device as well as rebooting or powering off the device.

You can check and visualize the full architecture of PyEZ in this post where I generate a full visualization for every package and utility inside the PyEZ

Wrapping Up

In this post we walked through an excellent Python library used to communicate with Juniper devices. The real power of PyEZ (besides its usability) is that it's provided by the vendor itself, so you can be confident the library is compatible with the device and won't break anything on it.

In the next post of this series, we will walk through another Python library called "NAPALM".

I hope this has been informative for you, and I'd like to thank you for reading.

Juniper Network Automation using Python–Part 1

In this multi-part series, we will take a deep dive into Juniper network automation and how to automate both configuration and operations for Juniper nodes. I'll divide the series into four main parts, and I will store the configuration for each part in my GitHub account under the JuniperAutomation repo.


1- Building the lab, provisioning basic interface, OSPF, BGP, and MPLS configuration using Jinja2 templating, and verification using Netmiko

2- Using the Juniper PyEZ and NAPALM modules to get useful data from the nodes

3- Using NETCONF and Ansible with Juniper

4- TBD (your suggestions are highly welcome, though)

Using Python Multi-Processing for Networking–Part 1

Python has become the de facto standard for network automation nowadays. Many network engineers already use it on a daily basis to automate networking tasks, from configuration to operations to troubleshooting network problems. In this post, we will visit one of the more advanced topics in Python, scratch the surface of its multi-processing capabilities, and learn how to use them to accelerate script execution time.

First, we need to understand: how does a computer execute a Python script?


1- When you type #python <your_awesome_automation_script>.py in the shell, Python (which runs as a process) instructs your computer's processor to schedule a thread (the smallest unit of processing).

2- The allocated thread starts to execute your script, line by line. A thread can do anything: interacting with I/O devices, connecting to routers, printing output, doing mathematical work, anything.

3- Once the script hits EOF (End of File), the thread is terminated and returned to the free pool to be used by other processes.

In Linux, you can use #strace -p <pid> to trace the execution of a specific thread.

The more threads assigned to a script (as permitted by your processor and OS), the faster it can run. Threads are sometimes called "workers" or "slaves".

I have a feeling you already have that little idea in your head: why wouldn't we assign a LOT of threads from all cores to the Python script in order to get the job done, quickly?

The problem with assigning many threads to one process without special handling is what's called a "race condition". The operating system allocates memory to your process (in this case the Python process) to be used at runtime and accessed by all threads, ALL OF THEM AT THE SAME TIME. Now imagine one of those threads reading a piece of data before it's actually written by another thread! You don't know the order in which the threads will attempt to access the shared data, and this is called a race condition.


You can read about race conditions and how to avoid them at this link.

One of the available solutions is to make each thread acquire a lock. In fact, Python by default is optimized to run as a single-threaded process and has something called the GIL (Global Interpreter Lock). The GIL does not allow multiple threads to execute Python code at the same time.
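The acquire-a-lock fix can be sketched with Python threads: with the lock in place, the four workers cannot interleave their updates to the shared counter, so no increments are lost.

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    """Increment the shared counter n times, one locked update at a time."""
    global counter
    for _ in range(n):
        with lock:        # acquire the lock, update, release
            counter += 1

threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 400000: no updates were lost
```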

But rather than multiple threads, why don't we use multiple processes?!

The beauty of multi-processing over multi-threading is that you don't need to worry about data corruption from data shared among threads. Each spawned process gets its own allocated memory that won't be accessed by other Python processes. This allows us to execute parallel tasks at the same time!


Also, from Python's point of view, each process has its own GIL, so there's no resource conflict or race condition here.

Enough Talk, Let’s jump to the code.

First, you need to import the module into your Python script:

import multiprocessing as mp

Second, you need to wrap your code in a Python function; this allows a process to "target" this function and run it in parallel.

Let's say we have code that connects to routers and executes a command on them using the netmiko library, and we want to connect to all devices in parallel. This is a sample "serial" version of the code.


We need to spawn a number of processes equal to the number of devices (one process connects to one device and executes the command) and set the target of each process to the function we wrapped around our previous code. This is an example.


Finally, we need to launch the processes.


Behind the scenes, the main thread executing the main script "forks" a number of processes equal to the number of devices, each of them targeting the function, so "show arp" is executed on all devices at the same time and the output is stored in a variable, without the processes affecting each other. Brilliant!

This is a sample view of the processes inside Python when you execute the full code.


One final thing needs to be done: "join" the "forked" processes back to the main thread in order to finish the program execution smoothly.



I have a script used to push some initial configuration to lab devices (9 routers). I decided to re-write it using the same procedure above and measure the time, and here are the findings (the value we're looking for is the "real" row).

With Multi-Processing


Without Multi-Processing


You can spot the difference: it's 6 times faster than the serial execution! And the gap will grow as you add more devices.

Wrapping Up

In a few simple words,

Don’t just Automate the task,  make it fast!

In the next post, we will explore some additional features of the multiprocessing library in Python.

Finally, I hope this has been informative for you, and I'd like to thank you for reading.

BGP Visualization Using Python


During my network studies, I have always admired the way BGP works and operates: the black magic that handles how packets exit one country (Autonomous System, or ASN for short!) and enter another without any "boarding pass" or "visa". Not just that, but BGP strives to make the travel time and path the shortest among the available routes to the destination, using the path attributes exchanged between those ASNs. Very clever, robust, and old.

But as the internet routing table grows and the number of assigned ASNs increases each day, it becomes harder to visualize the interconnections between ASNs. Every day a new ASN connects to a bunch of other ASNs, and it's really hard to trace those connections with just the raw data provided by looking-glass servers or RIRs.

So I think it's time to bring in Python to solve this problem. Basically, I tried to build a Python module that answers the questions below:

1- How are ASNs connected to each other, given a list of ASNs?

2- How are ASNs connected within one country?

3- Which ASNs are considered service providers or IXPs? (i.e., have more than 15 BGP peerings with other ASNs)

4- Which ASN is considered an upstream of a specific ASN? That will help in identifying the ASN gateway of a country, for example.

5- Which ASNs are considered downstreams (customers) of a specific service provider (operator)? And how are they distributed compared to other service providers?

6- Finally, I need all of that in one picture, visually! You know the old saying,

A picture is worth a thousand words

and those thousand words are stored in RIRs like RIPE and AFRINIC, which provide useful BGP data publicly, but in raw format.

I started developing a Python module to address the above questions, and within two days I had a promising prototype able to visualize the first country; later I added the capability to visualize an arbitrary set of ASNs. The final stage was adding some console logs for troubleshooting and publishing the package to PyPI. I called it bgp_visualize. Yeah, couldn't find a better name.

I tried to design bgp_visualize to work with the minimum possible set of data. For example, to visualize BGP in a specific country, you only need to provide the country code. However, you can customize the look of the generated graph by providing a few parameters like node_color, node_size, desired_background, and so on.

Working with the Module

First you need to install it (and install Python 2.7, of course, if you don't already have it on your machine).

Using CLI

pip install bgp_visualize

Using GUI (Pycharm IDE)

Open Settings | Project Interpreter | Add New

Then search for the bgp_visualize Python module and install it.

Then run the below code to visualize a set of ASNs (you can also run it from the native Python IDLE if you're using Windows).

from bgp_visualize import bgp_visualize_asn
ASNs = bgp_visualize_asn.bgp_visualize(asns=[8452, 24835], dark=True)

The resulting graph will look something like the image below (click on the image for better resolution). To visualize all autonomous systems in a specific country, you provide the country code to the object instead:

from bgp_visualize import bgp_visualize_asn
country = bgp_visualize_asn.bgp_visualize(country='sa')

There are a lot of screenshots of different BGP graphs available on my GitHub page, so please check them out! You can also send me your generated graph and I'll add it to the GitHub page.

Color Map

The bgp_visualize module uses different colors to represent each autonomous system's role in the graph. Below is the list of colors and the meaning of each in the generated graph.

First, if an AS is considered to be a service provider or IXP, it will be colored with one of the below colors.

If an AS is an upstream of a specific ASN, it will be colored blue.

If an ASN is a downstream:

Transit or not defined:

Wrapping Up

I really enjoyed working on this package! You can use it to troubleshoot and visualize any ASN in your network or your country, understand the upstreams and downstreams of each one, and easily identify the service providers, all in one graph!

For me, I've had the idea for a long time to visualize every ASN, every connection, every prefix on the planet and draw them in a nice, presentable way, and I think this package is a good start. That's my dream!

Finally, I hope this has been informative for you, and I'd like to thank you for reading.

Introduction to SDN and NFV

If you're confused about the differences between SDN, NFV, overlays, and automation, or about the role of each technology and how they're connected, these slides are for you.

These are introductory slides explaining the SDN and NFV technologies: what the difference between them is and when each one is used. They also cover some of Cisco's products in each area (SDN, NFV, and automation) with some real use cases deployed in today's service provider networks.

Hope it’s useful and you like it.


NFV ETSI Lab in Egypt

In the last few weeks, I've been involved in building and designing an NFV lab according to the ETSI standard at my company. The ETSI architecture is shown in the snapshot below.

The concept of NFV is simple: it aims to convert the functions that exist in your physical network into virtual ones. Functions like DPI, BNG, and route reflectors are converted to virtual machines.

The left-hand side of the picture is called MANO (Management and Orchestration), while the right-hand side of the standard is the real hardware and a bunch of hypervisors (KVM, ESXi).


This allows many use cases, such as service chaining, in which subscriber traffic can easily be passed through any type of VNF (Virtual Network Function) regardless of its physical location. For example, the subscriber traffic below passes through a virtual firewall, DDoS protection, and virtual DPI before being sent out to the internet. Other subscriber traffic could be passed through different "chains".


Cisco has a wide portfolio that covers most of the ETSI components. Let's explore them briefly.


NFV-O: Orchestrator

Cisco has the NSO product (Network Services Orchestrator, previously called Tail-f NCS). Tail-f made a huge contribution to defining the YANG language used for service modeling in NFV. Cisco acquired Tail-f two years ago, and it has been one of the most successful acquisitions at Cisco; you can read more about Tail-f at this link. The orchestrator's job is to orchestrate service creation across the VNFs and push the correct configuration to them based on various triggers.

NSO uses the concept of NEDs (Network Element Drivers), which are capable of communicating with many, many vendors like Sandvine, Palo Alto, Juniper, and of course Cisco. It is also capable of communicating using the NETCONF protocol, which allows it to orchestrate not only VNFs but also PNFs (Physical Network Functions: the real hardware and ASICs).

VNF-M: VNF Manager

Because your network functions will be a bunch of VMs (firewall, DPI, etc.), you need a "manager" that handles the CRUD (Create, Redeploy, Update, Delete) operations for those VMs. Cisco has a product called ESC (Elastic Services Controller) that integrates very well with NSO, or with any type of orchestrator, on its northbound side. On the southbound side, it's capable of communicating with OpenStack and VMware through standard RESTful services.


The virtual infrastructure manager (OpenStack or vCenter) is responsible for allocating the actual disk, RAM, and CPU for the VNF. Cisco recommends integrating with RedHat OpenStack (RDO).

EMS (or VNF)

This is the network function that has become virtual! In this lab I used the Cisco Cloud Services Router (CSRv), which is capable of running most of the ASR functions without a problem (side note: I used it to build a complete SP Wi-Fi lab for one of the operators here in Egypt, and it worked very well in EAP-SIM and portal-based scenarios). Below are the VNFs available from Cisco.


My NFV Lab


First, I tried to integrate the Cisco Elastic Services Controller with VMware vCenter, but didn't have much luck with this integration. I got stuck starting the orchestrator service process in vCenter, which I believe is one of the components used by Cisco ESC to communicate with the VMware infrastructure. vCenter also seemed like a complex solution to me, which limited my options.


Although the ESC connected to vCenter and was able to read all the "tenants" and VMs from it, it was unable to administer them.



Hmmm, OK. Let's sail to the other destination: OpenStack.

Second, I imported the ESC into OpenStack and installed it using the bootvm.py Python script.


Great. The next step was to integrate the ESC with OpenStack, which took only two minutes! (Thank you, VMware, for wasting my time!!)

Third, once the integration was done, you can see that ESC successfully retrieves all the tenants from OpenStack.


It's also capable of communicating with OpenStack services like nova and cinder.


And finally, it's capable of reading all the compute hosts and hosted instances.


Fourth, push the configuration from the orchestrator to the ESC and watch OpenStack create the images and flavors and attach all the networks to the CSRv (through the ESC).



Instances page


If you check the ESC portal, you will immediately see the CSR VNF active, up and running in the ESC.


You can also access the CSRv console directly from the ESC through the built-in VNC utility.


But really, what is the job of the ESC?

ESC plays a vital role as a VNF manager in monitoring VNF operations. For example, if one of the VNFs created through the ESC is deleted by mistake, the ESC will detect this event and immediately re-deploy the impacted machine without any intervention on your side. You can program it with many events to be monitored, like VM overload, underload, license expiry, etc.


The ESC communicates with OpenStack through REST messages over HTTP and orders it to create a VM (VNF) with a specific flavor, image ID, and attached networks, as shown in the snoop below between ESC and OpenStack.



NFV is one of the hottest topics in the service provider arena, and soon providers will convert to this model to save considerable power and space in the datacenter and, more importantly, to introduce agility and harmony into today's complex networks. I really recommend you choose open-standard solutions and not limit your options to proprietary software. Learn OpenStack and YANG modeling, and be open-minded toward automation methodology.

Finally, I have the complete set of NFV lab components (orchestrator, VNF manager, OpenStack, and VNF) integrated, up and running in my company's lab. I think it's the first lab of its type here in Egypt, to the best of my knowledge.

For any questions, please post a comment and we can discuss them together.

Thank You