The DevOps paradigm is really cool. Everyone wants to deploy their environments with Puppet on top of OpenStack. It brings reproducibility and scalability at tremendous speed. This is simply awesome and undoubtedly the way forward.
Almost nobody will dispute the benefits of adopting a standard DevOps paradigm, but doing so implies a major shift in mindset for most companies, especially telcos, which have a high moment of inertia. In this article we describe a success story based on our experience with customers.
Let me introduce you to Elora, a young configuration manager who has recently begun working for a large telecom organisation. She has been commissioned to install and configure what we will call here an RCLSP (Really Complex Legacy Stable Product) in their bare-metal datacenter. Elora’s managers want to drastically increase installation speed for their testing processes in order to save costs and improve their competitiveness.
In the past, Elora’s new teammates used to do fully manual installations but, tired of performing all those repetitive tasks, they started to write and use a set of bash and expect scripts to automate those steps. Elora was given access to a shared folder with the scripts. While trying to understand them, she decided to ask a tricky question:
– Can’t we just use any standard configuration management tool such as Chef, Puppet or Ansible?
In addition to the inherent problems of installing Chef or Puppet agents on the old, stable versions of the Linux distributions supported, the managers were not open to introducing new third-party products (3PPs) to customers. Elora was told that the “Customer Released Installation Instructions” should be followed as closely as possible in the laboratory.
So Elora continued studying and using the current script set. But she was a tough nut to crack and soon discovered several possibilities for optimisation.
Usually, all these scripts were launched manually and consecutively. But Elora realised that many of them were independent of each other and could be run in parallel, producing a near-linear decrease in the installation time. This was immediately seen as a quick win by her managers, so Elora got going. She decided to use Python as glue code because it was easy to integrate, was already installed by default in the production environments (no extra dependencies) and provided simple, robust thread control for parallelisation.
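As a sketch of the idea, assuming the installation steps are independent shell scripts, parallel launching in Python can be as simple as a thread pool around subprocess. The commands below are invented stand-ins for the team’s real scripts:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_parallel(commands, max_workers=None):
    """Launch shell commands concurrently; return {command: exit_code}."""
    def run(cmd):
        # capture_output keeps parallel runs from interleaving on stdout
        return cmd, subprocess.run(cmd, shell=True, capture_output=True).returncode

    with ThreadPoolExecutor(max_workers=max_workers or len(commands)) as pool:
        return dict(pool.map(run, commands))

# Hypothetical stand-ins for the team's real installation scripts:
results = run_parallel(["echo configure node-a", "echo configure node-b"])
print(results)
```

With truly independent scripts, total wall-clock time drops to roughly that of the slowest script instead of the sum of all of them.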
Additionally, automating the launch of the whole set of scripts, instead of running them by hand, would reduce the installation time even further. This became her second positive contribution to the installation process. Once it was finished, the whole installation could be launched completely unattended from a minimal set of configuration files. The time savings just kept growing and growing.
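A minimal illustration of that config-driven, unattended launch, assuming a hypothetical INI file listing nodes and installation steps (neither the section nor the key names come from the real product):

```python
import configparser

# Hypothetical minimal configuration; the real files are product-specific.
CONFIG = """
[cluster]
nodes = node-a, node-b
steps = prepare, install, verify
"""

parser = configparser.ConfigParser()
parser.read_string(CONFIG)

nodes = [n.strip() for n in parser["cluster"]["nodes"].split(",")]
steps = [s.strip() for s in parser["cluster"]["steps"].split(",")]

# Expand the config into a full unattended plan: every step on every node.
plan = [(node, step) for node in nodes for step in steps]
for node, step in plan:
    print(f"running '{step}' on {node}")
```

From there, the launcher simply walks the plan with no human input at all.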
With these first positive results, Elora was encouraged to review and start refactoring the complete set of installation scripts.
She identified lots of duplicated lines and hard-coded variables scattered across them, which made maintenance complex: bugs had to be fixed in multiple scripts. This called for a refactoring, with the aim of creating a set of common libraries and smaller scripts importing them.
With all that in mind, once again Python seemed to be the horse to bet on: a scripting language with an object-oriented approach, yet simple for non-programmers to understand and maintain.
Once Elora’s colleagues realised how elegant, time-saving and easy the new approach was (much easier than ever before), they also started using it and helped with the maintenance. Script by script, everything was refactored in Python around a common set of libraries managing connections and configuration tasks.
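The shape of such a shared library might look like the sketch below. The class and function names are hypothetical, and a fake connection stands in for the real ssh/expect transport so the example is self-contained:

```python
# Hypothetical shared library module (think lib/connection.py) that every
# refactored script imports instead of re-implementing connection logic.

class Connection:
    """Base interface: scripts depend on this, not on a concrete transport."""
    def __init__(self, host):
        self.host = host
        self.log = []

    def run(self, command):
        raise NotImplementedError

class FakeConnection(Connection):
    """Stand-in for the real ssh/expect transport, so the sketch is runnable."""
    def run(self, command):
        self.log.append(command)
        return f"{self.host}: ok"

def configure_ntp(conn, server="pool.ntp.org"):
    """One of the small refactored scripts; the logic now lives in one place."""
    return conn.run(f"ntpdate -u {server}")

conn = FakeConnection("node-a")
print(configure_ntp(conn))
```

A bug fixed in `configure_ntp` is fixed once, for every script and every host that uses it.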
Soon everyone started to see the whole thing not only as a set of scripts but as a real framework for interacting with their hardware and software.
And eventually, one day, the product released a new cloud-ready version to be deployed on top of OpenStack.
Migration to cloud environments in former projects had shown managers that new installation tools were needed, which usually meant a big adaptation effort. Elora’s team, however, was already prepared for that.
The modular architecture of their new framework made it easy to create abstraction layers, substituting the ssh and telnet connections and commands against real hardware with calls to OpenStack’s Python API.
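A minimal sketch of that abstraction layer, with invented class names; the comment in the cloud backend marks where a library such as openstacksdk would plug in:

```python
# Sketch of the abstraction layer: installation logic talks to a backend
# interface, and only the backend knows whether it drives real hardware over
# ssh/telnet or calls the OpenStack API. All names here are illustrative.

class Backend:
    def create_server(self, name):
        raise NotImplementedError

class BareMetalBackend(Backend):
    """Original behaviour: commands against physical hosts (simulated here)."""
    def create_server(self, name):
        return f"ssh: provisioned {name} on bare metal"

class OpenStackBackend(Backend):
    """Cloud behaviour; the real version might use the openstacksdk, e.g.
    openstack.connect(cloud="lab").compute.create_server(...)."""
    def create_server(self, name):
        return f"api: booted instance {name} on OpenStack"

def install_product(backend, name):
    # Identical installation logic, regardless of the backend chosen.
    return backend.create_server(name)

print(install_product(BareMetalBackend(), "rclsp-1"))
print(install_product(OpenStackBackend(), "rclsp-1"))
```

Switching the whole installation from bare metal to cloud then becomes a one-line change: which backend gets instantiated.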
In reality, they had quietly taken a huge step towards implementing the so-called DevOps paradigm.
Adopting the DevOps paradigm does not mean using an entirely new set of tools and technologies. It’s more about a change of mindset and building bridges between system administrators and developers. For this type of project, Python stands out as the ideal canvas: it provides an accessible object-oriented scripting language with specialised libraries for this purpose, and it is deeply integrated into most of the new state-of-the-art technologies. Today, most modern infrastructure tools and applications expose a Python API.
Building your solutions on top of Python, with a well-designed object-oriented architecture, all but guarantees connectivity and adaptability in the face of almost any challenge encountered in your product development process.
Our experience shows that the cornerstone of a success story is having someone like Elora: a Python expert with a strong Linux background and the capacity to lead the way and educate teams. In the past few years, Blue Telecom Consulting has been betting on hiring enthusiastic Python developers who can help our customers gently reach the new state-of-the-art technologies and methodologies for their products.