Juniper Network Automation using Python–Part 2

In this multi-post series, we will take a deep dive into Juniper network automation and how to automate both configuration and operations for Juniper nodes.

If you missed the first part, Building Basic Configuration using Jinja2 Template, please read it first to understand the network topology we will work on and how the initial configuration was generated.

In this part we will cover an excellent and powerful Python library from Juniper called PyEZ (some folks at Juniper say it's an abbreviation for "Python Easy"!). PyEZ provides an abstraction layer built on top of the NETCONF protocol, so you first need to enable NETCONF over SSH on the Juniper device (set system services netconf ssh) before retrieving any information.

So why is it easy?

Assume you need to get the output of "show version" from a specific Juniper device. If you used raw Python (i.e., Paramiko for SSH, then a channel to receive the data, then parsing the returned output, etc.), you would need around 48 lines of code just to get the device version. Using PyEZ minimizes the code to only 7 lines, and it gathers even more data!


Under the hood, PyEZ uses that same raw Python plus some additional libraries such as lxml to send and receive messages to the Juniper device, again over the NETCONF transport protocol.




To install it, you just need to open a command line (Windows cmd or a Linux shell) and type:

pip install junos-eznc


In the background, pip will also install the additional packages required by Juniper's PyEZ.

Operation using PyEZ

The sample script below iterates over all devices and gathers specific information from each of them.


First, we import the installed Python library.

Second, we define the management IP addresses of the Juniper devices in our network and provide the credentials.

Third, we iterate over the devices and get the facts for each one. By default, the PyEZ library gathers basic information about the device and stores it in a Python dictionary, which can be easily accessed through the facts attribute.

The result of executing the above script will be as follows:


What facts are available to be collected?

All of the data below can be retrieved from the device: for example the hostname, model, version, current routing engine, whether it is virtual or physical, and so on.

{'2RE': None,
  'HOME': '/root',
  'RE0': None,
  'RE1': None,
  'RE_hw_mi': None,
  'current_re': ['re0'],
  'domain': None,
  'fqdn': 'PE1-Region1',
  'hostname': 'PE1-Region1',
  'hostname_info': {'re0': 'PE1-Region1'},
  'ifd_style': 'CLASSIC',
  'junos_info': {'re0': {'object': junos.version_info(major=(14, 1), type=R, minor=1, build=10),
                         'text': '14.1R1.10'}},
  'master': None,
  'model': 'VMX',
  'model_info': {'re0': 'VMX'},
  'personality': 'MX',
  're_info': None,
  're_master': None,
  'serialnumber': None,
  'srx_cluster': None,
  'srx_cluster_id': None,
  'srx_cluster_redundancy_group': None,
  'switch_style': 'BRIDGE_DOMAIN',
  'vc_capable': False,
  'vc_fabric': None,
  'vc_master': None,
  'vc_mode': None,
  'version': '14.1R1.10',
  'version_RE0': '14.1R1.10',
  'version_RE1': None,
  'version_info': junos.version_info(major=(14, 1), type=R, minor=1, build=10),
  'virtual': True}

In the background, PyEZ sends a series of RPCs (Remote Procedure Calls) to the device, each requesting a specific piece of information, then collects all of the replies into the nice dictionary above. So cool.

If you need to see the exact RPC call sent over the wire to the Juniper device, you can get it from the Junos CLI directly by piping the command through | display xml rpc (for example, show version | display xml rpc).


You can instruct PyEZ to send the XML request and return the reply in XML as well (in which case you will need to parse the returned data yourself). This is useful if you're designing some sort of management system and need to monitor the output of a specific command: the RPC reply is structured data that can be easily parsed by any XML parser available in Python.

For example, say I need to get the route summary from the devices and I know the exact RPC call. I wrote the script below to get the data and convert it to a string for pretty-printing, but again, you can write any parser to extract just the information you need.





PyEZ uses Python metaprogramming to generate any kind of RPC on demand (it would be really hard to hard-code every RPC, so each one is generated only when requested). Additionally, you can use XPath expressions to match specific data in the returned response.
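As a small illustration of the XPath idea, here is a sketch that matches data in a trimmed, hypothetical fragment of a route-summary reply (element names follow the usual Junos schema):

```python
from lxml import etree

# A trimmed, hypothetical reply fragment -- real replies contain more fields.
reply = etree.fromstring(
    "<route-summary-information>"
    "<route-table><table-name>inet.0</table-name>"
    "<total-route-count>12</total-route-count></route-table>"
    "<route-table><table-name>inet6.0</table-name>"
    "<total-route-count>5</total-route-count></route-table>"
    "</route-summary-information>")

# XPath pulls just the fields we care about from each routing table
for table in reply.xpath("//route-table"):
    print(table.findtext("table-name"), table.findtext("total-route-count"))
# prints: inet.0 12
#         inet6.0 5
```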

Configuration using PyEZ

PyEZ provides a Config class that simplifies loading and committing configuration to the device, and it integrates with the Jinja2 templating engine we used before to generate configuration from a template.

Also PyEZ offers utilities for comparing configurations, rolling back configuration changes, and locking or unlocking the configuration database

The load() method can be used to load a configuration snippet, or full configuration, into the device’s candidate configuration

The configuration can be specified in text (aka “curly brace”), set, or XML syntax. Alternatively, the configuration may be specified as an lxml.etree.Element object

I will define a simple configuration snippet in "set" style to change the hostname of a specific host, then use the Config class to push the configuration.


Please note that this only changes the candidate configuration; nothing is committed unless you explicitly request it.


There are also commit parameters available in PyEZ, exactly like in the CLI. For example: comment, commit_check, rollback, commit_sync, etc.

Other Utilities inside PyEZ

There are a lot of other classes inside PyEZ. I'll mention some of them briefly in case you want to dive deeper into them.

The FS class in the jnpr.junos.utils.fs module provides common commands for accessing the filesystem on the Junos device.

The jnpr.junos.utils.start_shell module provides a StartShell class that allows an SSH connection to be initiated to a Junos device

The SW class in the jnpr.junos.utils.sw module provides a set of methods for upgrading the Junos software on a device as well as rebooting or powering off the device.

You can see the full architecture of PyEZ in this post, where I generate a visualization of every package and utility inside it.

Wrapping Up

In this post we walked through an excellent Python library used to communicate with Juniper devices. The real power of PyEZ (besides its usability) is that it's provided by the vendor itself, so you can be confident the code is compatible with the device and won't break anything on it.

In the next post of this series, we will walk through another Python library called "NAPALM".

I hope this has been informative for you, and I'd like to thank you for reading.


Juniper Network Automation using Python–Part 1

In this multi-post series, we will take a deep dive into Juniper network automation and how to automate both configuration and operations for Juniper nodes. I'll divide the series into four main parts and store the configuration for each part in my GitHub account under the JuniperAutomation repo:


1- Building the lab; provisioning basic interface, OSPF, BGP, and MPLS configuration using Jinja2 templating; verification using Netmiko

2- Using the Juniper PyEZ and NAPALM modules to get useful data from the nodes

3- Using NETCONF and Ansible with Juniper

4- TBD (your suggestions are highly welcomed, though)

Using Python Multi-Processing for Networking–Part 1

Python has become the de facto standard for network automation. Many network engineers already use it on a daily basis to automate networking tasks, from configuration to operations to troubleshooting network problems. In this post, we will visit one of the more advanced topics in Python: we'll scratch the surface of its multi-processing nature and learn how to use it to accelerate script execution time.

First, we need to understand: how does a computer execute a Python script?


1- When you type #python <your_awesome_automation_script>.py in the shell, Python (which runs as a process) instructs your computer's processor to schedule a thread (the smallest unit of processing).


2- The allocated thread starts to execute your script, line by line. A thread can do anything: interact with I/O devices, connect to a router, print output, do math, anything.

3- Once the script hits EOF (end of file), the thread is terminated and returned to the free pool to be used by other processes.

On Linux, you can use #strace -p <pid> to trace the execution of a specific thread.

The more threads you assign to a script (as permitted by your processor and OS), the faster it can run. Threads are sometimes called "workers" or "slaves".

I have a feeling you already have that little idea in your head: why wouldn't we assign a LOT of threads from all cores to the Python script in order to get the job done, quickly?

The problem with assigning a lot of threads to one process without special handling is what's called a "race condition". The operating system allocates memory to your process (in this case, the Python process) to be used at runtime, and that memory is accessible to all threads, all of them at the same time. Now imagine one of those threads reading a piece of data before it has actually been written by another thread! You don't know the order in which the threads will attempt to access the shared data, and this is called a race condition.


You can read about race conditions and how to avoid them in this link.

One of the available solutions is to make each thread acquire a lock. In fact, Python by default is designed to run as a single-threaded process and has something called the GIL (Global Interpreter Lock), which does not allow multiple threads to execute Python code at the same time.

But rather than using multiple threads, why don't we use multiple processes?!

The beauty of multi-processing over multi-threading is that you don't need to worry about data corruption from data shared among threads. Each spawned process gets its own allocated memory that is not accessed by the other Python processes, which allows us to execute parallel tasks at the same time!


Also, from Python's point of view, each process has its own GIL, so there's no resource conflict or race condition here.

Enough Talk, Let’s jump to the code.

First, you need to import the module into your Python script:

import multiprocessing as mp

Second, you need to wrap your code in a Python function; this allows each process to "target" that function and execute it in parallel.

Let's say we have code that connects to a router and executes a command on it using the netmiko library, and we want to connect to all devices in parallel. This is a sample "serial" version:


We need to spawn a number of processes equal to the number of devices (one process connects to one device and executes the command) and set the target of each process to the function we wrapped around our previous code. This is an example:


Finally, we need to launch the processes:


Behind the scenes, the main thread executing the script "forks" a number of processes equal to the number of devices, each targeting the function that executes "show arp" on its device. All devices are queried at the same time, and each process stores its output without affecting the others. Brilliant!

This is a sample view of the Python processes when you execute the full code:


One final thing needs to be done: "join" each "forked" process back to the main thread/trunk so the program finishes execution smoothly.
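The whole fork/target/start/join sequence can be sketched as follows. The connect-and-run step is stubbed with a placeholder function so the sketch runs anywhere; in the real script it would hold the netmiko ConnectHandler logic:

```python
import multiprocessing as mp

DEVICES = ["10.10.10.{}".format(i) for i in range(1, 10)]  # hypothetical IPs

def show_arp(ip, results):
    # Placeholder for: ConnectHandler(...).send_command("show arp")
    results[ip] = "arp output from {}".format(ip)

if __name__ == "__main__":
    manager = mp.Manager()
    results = manager.dict()              # process-safe shared store
    processes = []
    for ip in DEVICES:                    # one process per device
        p = mp.Process(target=show_arp, args=(ip, results))
        processes.append(p)
        p.start()                         # fork and launch
    for p in processes:
        p.join()                          # rejoin the main thread
    print(len(results), "devices collected")
```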



I have a script that pushes some initial configuration to my lab devices (9 routers). I decided to rewrite it using the procedure above and measure the execution time. Here are the findings (the value we're looking for is in the "real" row).

With Multi-Processing


Without Multi-Processing


You can spot the difference: it's 6 times faster than the serial execution! And the gap widens as you add more devices.

Wrapping Up

In short:

Don’t just Automate the task,  make it fast!

In the next post, we will explore some additional options of the multiprocessing library in Python.

Finally, I hope this has been informative for you, and I'd like to thank you for reading.

BGP Visualization Using Python


During my network studies, I have always admired the way BGP works and operates: the black magic that handles how packets exit one country (autonomous system, ASN for short!) and enter another without any "boarding pass" or "visa". Not just that, but BGP strives to make the travel path the shortest among the available routes to the destination, using the path attributes exchanged between those ASNs. Very clever, robust, and old.

But as the internet routing table grows and the number of assigned ASNs increases every day, it becomes harder to visualize the interconnections between ASNs. Every day a new ASN connects to a bunch of other ASNs, and it's really hard to trace those connections from just the raw data provided by Looking Glass servers or RIRs.

So I thought it was time to bring in Python to solve this problem. Basically, I tried to build a Python module that answers the questions below:

1- How are ASNs connected to each other, given a list of ASNs?

2- How are ASNs connected within one country?

3- Which ASNs are considered service providers or IXPs (i.e., have more than 15 BGP peerings with other ASNs)?

4- Which ASNs are considered upstreams of a specific ASN? That helps in defining the ASN gateway for a country, for example.

5- Which ASNs are considered downstreams (customers) of a specific service provider (operator)? And how are they distributed compared to other service providers?

6- Finally, I need all of that in one picture, visually! You know the old saying,

A picture is worth a thousand words

and those thousand words are stored at RIRs like RIPE and AFRINIC, which provide useful BGP data publicly, but in raw format.

I started developing a Python module to address the above questions, and within two days I had a promising prototype able to visualize the first country; later on I added the capability to visualize an arbitrary set of ASNs. The final stage was adding some console logs for troubleshooting and publishing the package to PyPI. I called it bgp_visualize. Yeah, I couldn't find a better name.

I tried to design bgp_visualize to work with the minimum possible set of data. For example, to visualize BGP in a specific country, you just need to provide the country code. You can still customize the look of the generated graph by providing a few parameters such as node_color, node_size, desired_background, and so on.

Working with the Module

First you need to install it (along with Python 2.7, of course, if you don't already have it on your machine).

Using CLI

pip install bgp_visualize

Using GUI (Pycharm IDE)

Open Settings | Project Interpreter | Add New

Then search for the bgp_visualize Python module and install it.

Then run the code below to visualize a set of ASNs (you can also run it from the native Python IDLE if you're on Windows):

from bgp_visualize import bgp_visualize_asn
ASNs = bgp_visualize_asn.bgp_visualize(asns=[8452, 24835], dark=True)

The resulting graph will look something like the image below (click on it for better resolution). To visualize all autonomous systems in a specific country, provide the country code to the object:

from bgp_visualize import bgp_visualize_asn
country = bgp_visualize_asn.bgp_visualize(country='sa')

A lot of screenshots of different BGP graphs are available on my GitHub page, so please check them out! You can also send me your generated graph and I'll add it there.

Color Map

The bgp_visualize module uses different colors to represent each autonomous system's role in the graph. Below is the list of colors and the meaning of each in the generated graph.

First, if an AS is considered a service provider or IXP, it is colored with one of the colors below.

If an AS is an upstream of a specific ASN, it is colored blue.

ASN is Downstream:

Transit or not defined

Wrapping Up

I really enjoyed working on this package! You can use it to troubleshoot and visualize any ASN in your network or country, understand each one's upstreams and downstreams, and easily identify the service providers, all in one graph!

I've had the idea for a long time to visualize every ASN, every connection, every prefix on the planet and draw them in a nice, presentable way, and I think this package is a good start. That's my dream!

Finally, I hope this has been informative for you, and I'd like to thank you for reading.

Visualizing Python Module for Network Libraries (Netmiko and PyEZ)

Ever wondered how a custom Python module or class is put together? How did the developer write the Python code and glue it together to create this nice and amazing "x" module? What's going on under the hood?

Documentation is a good start, of course, but we all know it isn't usually updated with every new step or detail the developer adds.

For Example,

We all know the powerful netmiko library, created and maintained by Kirk Byers, which utilizes another popular SSH library called Paramiko. But we may not understand the details of how its classes are connected together; we just write the code below to execute a specific command on a Cisco IOS platform:

from netmiko import ConnectHandler

# the ip/username/password values below are placeholders
device = {"device_type": "cisco_ios",
          "ip": "10.10.10.1",
          "username": "admin",
          "password": "access123"}

net_connect = ConnectHandler(**device)
output = net_connect.send_command("show arp")

And boom, it works like a charm.

If you want to understand the magic netmiko uses behind the scenes to return the result, please follow the steps below (they require the PyCharm IDE).

Netmiko Module


Step 1:

Open the netmiko folder inside the Python library location (usually C:\Python27\Lib\site-packages) in the PyCharm IDE.


Step 2:

Right-click on the netmiko module and choose Diagrams, then Show Diagram.


It will take some time to generate the diagram, depending on your Java memory settings (I usually assign 1024 MB for it to work properly).

Step 3:

Now save the resulting image to your desktop.



And here it is (click to enlarge and zoom as much as you like).


Understanding the resulting UML graph

From the resulting graph you can see that netmiko supports a lot of vendors, such as HP Comware, Enterasys, Cisco ASA, Force10, Arista, Avaya, etc., and that all of these classes inherit from the parent netmiko.cisco_base_connection.CiscoSSHConnection class (I think because they use the same SSH style as Cisco), which in turn inherits from another big parent class, BaseConnection.

You can also see that Juniper has its own class that connects directly to the big parent.

And finally we reach the parent of all parents in Python: the object class (remember, everything in Python is ultimately an object).

You can also find a lot of other interesting things, like the SCP transfer class and the SNMP class, and for each one you will find the methods and parameters used to initialize it.

So ConnectHandler is primarily used to look up the device_type among the vendor classes above and, based on it, dispatch to the corresponding SSH class. A well-designed module, really. Thanks, Kirk Byers!
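The dispatch idea can be illustrated with a simplified, hypothetical sketch; the class names here are illustrative, not netmiko's actual ones:

```python
# A toy factory mapping a device_type string to a connection class,
# in the spirit of netmiko's ConnectHandler. All names are made up.
class BaseConnection:
    def __init__(self, **kwargs):
        self.params = kwargs      # store connection parameters

class CiscoIosSSH(BaseConnection):
    pass

class JuniperSSH(BaseConnection):
    pass

PLATFORM_MAP = {"cisco_ios": CiscoIosSSH, "juniper": JuniperSSH}

def connect_handler(**device):
    """Pick and instantiate the class matching device_type."""
    device_type = device.pop("device_type")
    try:
        cls = PLATFORM_MAP[device_type]
    except KeyError:
        raise ValueError("unsupported device_type: {}".format(device_type))
    return cls(**device)

conn = connect_handler(device_type="juniper", ip="10.10.10.1")
print(type(conn).__name__)  # prints: JuniperSSH
```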



Juniper PyEZ Module

Applying the same procedure to Juniper's Python wrapper (PyEZ), used to connect to Junos-based platforms, we get another useful diagram, shown below (again, click to enlarge).


You can also see the support for NETCONF, serial, and Telnet connections, as well as the Views and Tables classes that Juniper uses to define the facts about the device. Very clever.

That's all. Please share any other interesting findings in Python modules.

I hope this has been informative for you, and I'd like to thank you for reading.

Install Cacti on CentOS 7– The Definitive Guide in 2017


Cacti is one of the most robust monitoring tools on the market. It has a lot of features and options able to give you complete visibility into your infrastructure. In this guide I will walk through installing Cacti on CentOS 7, configuring MariaDB (the new database in CentOS 7), adding a few devices, and finally plotting the results. I will also cover some troubleshooting points along the way. So let's start.



Troubleshoot Openstack Networking with Python

Having been an OpenStack administrator for a while, I find the most complicated topic to understand in the OpenStack project is networking: how instances (formerly, virtual machines) communicate with each other and with the external world.

Unlike VMware ESXi, where you can just create a vSwitch and attach a VM to it, OpenStack networking is much more complex than that. You first need to define the network type itself (flat, VLAN, VXLAN, GRE), attach it to a subnet with an IPv4 or IPv6 block, create a floating IP address if the network will be connected externally to a provider network, and optionally create an internal router to route between the different networks and subnets. Lots of steps!


Floating IP and Neutron router in a nutshell

Also, Neutron itself doesn't provide the actual networking to the instances; it is just a wrapper around drivers called "mechanism drivers", which provide the actual networking (switching, routing, and so on). The most famous one is Open vSwitch (OVS), which provides basic and advanced switching between the instances and the external world.


But Open vSwitch lacks the capability of enforcing security policies on incoming and outgoing packets. That's why the OpenStack community chose to connect a Linux bridge to Open vSwitch to solve this problem, which introduces another layer of complexity!

Imagine that you need to attach one Ethernet interface to an instance: OpenStack creates four additional interfaces to satisfy the needs of Open vSwitch and the Linux bridge. A very complex approach, really! Below is an example of these interfaces; you can find more by clicking on the image itself.

So Where’s the problem?

When you face a problem in OpenStack networking, such as an instance not being pingable from the outside world, not being able to reach the instance's gateway, not getting an IP address from the DHCP pool, or not seeing any incoming or outgoing traffic at all, chances are you've missed something in the networking configuration and need to fix it.

Let’s start by answering the following questions:

1- How many interfaces are assigned to an instance?

2- What's the MAC address of each interface?

3- What's the IP address of each interface?

4- What's the internal VLAN assigned by OVS to each interface?

5- How will the external network (provider network) treat the traffic from each interface (strip a VLAN, add a VLAN, modify a VLAN, etc.)?

6- Which ports on the integration bridge (br-int) and the external bridge (br-ex) connect our instance, and which flow-table rules are applied to them?

Answering the above questions will help us, a lot, in troubleshooting any networking problem in OpenStack.

You can use some useful commands (ip a, ovs-vsctl show, neutron port-list, etc.) in your troubleshooting. However, you will spend a lot of time trying to connect everything together, especially in an environment with hundreds of instances and hundreds of networks.

So let’s Automate this job by using Python!

I wrote a Python script that does this job easily. It utilizes two famous Python libraries, requests and netmiko, to connect to the OpenStack Keystone API service, pull the required information from it, parse the returned data, and finally connect the dots. It prints a nice report with all the details.

So How Does it Work?

First, we define the OpenStack credentials (if you have a multi-node installation, define the Keystone node's IP address).


Then we send an API request to OpenStack Keystone to generate a token. The token is used later to authenticate us against any other OpenStack service.


You can see the OpenStack token workflow in the picture below. Don't forget that the token has an expiration time, so you have to use it before then.
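The token request can be sketched as follows, using the Keystone v3 password-authentication API; the endpoint and credentials are placeholders:

```python
# Request a token from Keystone v3; the token comes back in the
# X-Subject-Token response header and is later passed to other
# services in the X-Auth-Token request header.
import requests

def get_token(keystone_ip, user, password):
    body = {"auth": {"identity": {"methods": ["password"],
            "password": {"user": {"name": user,
                                  "domain": {"id": "default"},
                                  "password": password}}}}}
    resp = requests.post(
        "http://{}:5000/v3/auth/tokens".format(keystone_ip), json=body)
    resp.raise_for_status()
    return resp.headers["X-Subject-Token"]

if __name__ == "__main__":
    print(get_token("192.168.1.10", "admin", "access123"))
```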

Next, we parse the returned output to find the MAC and IP addresses. I wrote a function to which you give an instance name, and it does the rest; I thought it would be better to write it that way in case I need to reuse it in any of my other projects.


Unfortunately, OVS doesn't provide an API interface like OpenStack does, so I had to use the netmiko library to send the required commands and filter the output using Linux text-processing tools like cut and grep. The returned output requires additional handling, which I chose to do in Python itself.


Finally, I defined a function that uses the two methods above to generate the required report. The returned output is concatenated and grouped per physical (or should I say virtual!) network interface.


This is where you connect the dots for each part of the Neutron project, whether Neutron itself or any configured mechanism driver.



Running this code against one of my OpenStack environments, I can easily identify how OpenStack networking handles and forwards the traffic from each interface of my instance.

You can answer the questions mentioned above: the IP address, MAC, VLAN tagging, and OVS bridge handling for each network interface attached to the instance.

You can even visualize it!


Finally, you can find the code in my GitHub repo here.

Wrapping Up

Many people find OpenStack complex and hard to understand, and I partially agree with them. However, it provides a lot of tools and interfaces that can be used to get the job done and make your life easy. Neutron is a great, modular project under the OpenStack umbrella, and you can automate a lot of Neutron tasks using Python. The sky is your only limit.

I hope this has been informative for you, and I'd like to thank you for reading. Feel free to comment or share your experience troubleshooting problems in OpenStack.