
#DEVOPS: Getting our organizations aligned

This was first published on January 15, 2015

Beginning: I recently attended a session on “DevOps: The Big Picture” (thank you, Richard Seroter, for such a simple and visual breakdown), and it got me thinking: Can we really align our current way of working (mostly in silos) across Development and Operations in such a way that it becomes what is being called DEVOPS? Can we really implement this concept and gain from the common synergies without starting from scratch?

Traditionally: The IT function in most large firms has been comfortable following the traditional model and is mostly averse to speeding up functional delivery, in contrast to the needs of business stakeholders, whose aim is to stay competitive and ensure growing customer demand is met quickly. Remember those Waterfall versus Agile discussions? In my opinion, we are still following a hybrid form of the Waterfall methodology: mini waterfalls, or shorter cycles run with Agile and Scrum. Interestingly, I found it already has a name: W-agile.

We all know how the quarterly or major release cycle works: deploying a set of new features along with a few defect fixes the old-fashioned way, in a structured sequence from requirement gathering and analysis to coding, deploying and testing across multiple environments, running around for sign-offs, with a rollback strategy in hand (showing our confidence), and finally a successful release with a few hiccups. Take a deep breath, rest a little and get ready for the next cycle. We repeat it all again, the reasoning being: why change a process that has worked nicely over the years? We are delivering, right?

Developers have to wade through the exhaustive documentation prepared by the analysts to break down the requirements, even for a few bug fixes or a minor feature, and then jump through hoops to get the change deployed in multiple test environments, tested and attested by multiple teams, before getting the go-ahead to push it to production. Every team you speak to says this lengthy flow reduces risk and impact on the production environment; the real reason may be that no one wants to take the blame and end up paying the price for mistakes. Ultimately the perception persists that the turnaround time is far too long compared to the features delivered.

Operations is skeptical about any change to the production environment, as it has been tasked with stability and maximum availability. Remember the old adage: “If it ain’t broke, don’t fix it.”

Having moved from the DEV to the OPS side, I can most assuredly vouch that all changes are viewed with suspicion, to say the least, as these folks are mostly never part of the discussions, nor are they aware of what the release encompasses functionally. Minimal automation, environment differences, limited deployment experience and security policies around access build up further friction, with blame landing on the artifacts provided or on differences overlooked during testing. A release in itself creates enormous stress across all teams, especially when the deployment breaks and a fire drill ensues to either fix it or roll back to the previous state within the change window. To minimize that stress, they create stricter controls and reduce the number of deployment windows.

DEVOPS: Combining the right mix of interdependent variables, People, Process and Tools, for fast, quality software delivery. So, can we align ourselves with this concept? Yes we can! (It sounds clichéd and overused, but in this case it is true.) We start by breaking down the Berlin Wall, demolishing the divide between the DEV and OPS teams; SILO should be the next four-letter word!

Let’s summarize a few important components which we should focus on to promote collaboration.

People: First on the list are the people. Create, or identify and distribute, a team for each W-agile sprint; this team would comprise analysts, coders, deployers, testers and leads from both Dev and Ops as required, with minimal redundancy. One team doing everything from start to finish, ideally with common management oversight to enable adoption of this results-oriented approach. We need to merge or recreate the teams in such a way that synergies are put to good use: a lean and mean team focused solely on the time-bound task for the full cycle, with shared responsibility.

Please read the rest of the article here: https://askhurram.com/?cat=4

#DevOps & Continuous Change

This post was first published on DEVOPS.COM

http://devops.com/features/devops-continuous-change/

A remark by a colleague while we waited for the coffee machine to finish its cycle started my train of thought: “Should we have multiple minor releases, or just a few major ones in a year?”

In large organizations, due to many factors, the turnaround time for a single successful release is quite long; but the conceptual change we are talking about bringing in with the #DEVOPS methodology would not only shorten these long cycles but also deliver software faster and with better quality.

In my earlier article, “DEVOPS: Getting our organizations aligned”, I discussed the traditional approach, how combining the two (DEV & OPS) has proven beneficial in various organizations, and how many are still hesitant to embark on that path.

Continuous Change, which encompasses both Continuous Integration and Continuous Delivery, is paramount to achieving this objective. To deliver rapid and consistent value to stakeholders regularly, we have to have Continuous Change happening.

I am sure we have already moved on from the mainframe era to a more distributed-systems state. But unfortunately we have yet to let go of the culture of piling components onto a single quasi-distributed application, instead of creating smaller, independent yet interdependent systems.

So, back to the same question: shall we go with a few dinosaur-sized, labored releases, or make multiple shorter, swifter runs? In my opinion, we should strive towards the leaner model. Easier said than done, you say! Agreed. If I had to, I would go about it this way:

In an environment with a suite of distributed applications, start by analyzing the components that make up the major release: list out the changes (fixes and new features) that are planned, assess the functional and technological impact, identify dependencies, upstream/downstream connectivity and architectural deficiencies, and list the subset of components, or groups of components by functionality, that can go out as an independent release.

I am resisting the urge to say ‘standalone’, as with the current complex, intertwined collection of applications that would be an incorrect statement. It is rather a subset of components which can be independently upgraded. Now the big task is to review and decide whether this subset or group would add functional value to the overall suite of applications without the rest of the components going in.
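To make that analysis step a little more concrete, here is a minimal sketch, with hypothetical component names and a deliberately simplified dependency model, of how one might cluster the changed components into groups that can be reviewed and released independently of each other. It is an illustration of the idea, not a prescription for any particular tool.

```python
# Sketch: cluster the changed components of a planned major release into
# candidate groups that can ship as independent, smaller releases.
# Component names and the dependency map below are purely hypothetical.

from collections import defaultdict

# Hypothetical dependency map: component -> components it calls (downstream).
dependencies = {
    "billing-ui":    ["billing-api"],
    "billing-api":   ["customer-db"],
    "reports":       ["customer-db"],
    "notifications": [],
}

# Components touched by the planned fixes and new features.
changed = {"billing-ui", "billing-api", "notifications"}

# Build an undirected view so that anything coupled to a change,
# upstream or downstream, lands in the same candidate release group.
neighbours = defaultdict(set)
for comp, deps in dependencies.items():
    for dep in deps:
        neighbours[comp].add(dep)
        neighbours[dep].add(comp)

def release_groups(changed_components):
    """Cluster changed components that share dependencies into release groups."""
    groups, seen = [], set()
    for start in changed_components:
        if start in seen:
            continue
        group, stack = set(), [start]
        while stack:
            comp = stack.pop()
            if comp in seen:
                continue
            seen.add(comp)
            group.add(comp)
            # Only pull in neighbours that are themselves changing; unchanged
            # neighbours are the backward-compatibility boundary to test against.
            stack.extend(n for n in neighbours[comp] if n in changed_components)
        groups.append(sorted(group))
    return groups

print(release_groups(changed))
# e.g. [['billing-api', 'billing-ui'], ['notifications']] -> two independent releases
```

Each resulting group is a candidate for its own shorter release; the unchanged components it touches become the backward-compatibility boundary that the testing described next has to cover.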

Once identified, the success of such a release depends on extensive testing (unit/regression/performance) of these components. As we gain confidence in this split-and-deploy procedure, we can start prioritizing and scheduling critical must-have features or bug fixes, which should also remain backward compatible with the components lagging behind. We can even reach scenarios where these components are not just updated frequently but also have multiple independent release tracks. It goes without saying that the testing and sign-off process is strictly followed, albeit in a shorter window.

Deployments to production have always been a source of heartburn for the OPS teams; we have often seen finger-pointing and firefighting to contain failed changes. To maintain stability and maximize availability, stricter controls and reduced deployment windows are put in place. By promoting frequent, smaller component-level changes, OPS is also in control of what is going in, and smaller component-level releases can be rolled back quickly within the Green-Zone window if the change is not working as expected. I would categorize such a rollback as a process failure rather than a deployment failure, and it should be reviewed seriously with the right mitigation plan and a lessons-learnt retrospective. Continuous learning after every successful or failed sprint ensures competent and confident releases.
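To show what “rolled back quickly within the Green-Zone window” can look like in practice, here is a minimal sketch of a component-level deployment that keeps the previous version available and restores it automatically if a post-deployment health check fails. The directory layout, service name and health endpoint are hypothetical stand-ins for whatever tooling an organization actually uses.

```python
# Sketch: component-level deploy with automatic rollback to the previous version.
# Paths, service name and health-check URL below are hypothetical examples.

import subprocess
from pathlib import Path

RELEASES = Path("/opt/apps/billing-api/releases")  # one directory per version
CURRENT = Path("/opt/apps/billing-api/current")    # symlink to the active version

def activate(version: str) -> None:
    """Point the 'current' symlink at the given release and restart the service."""
    tmp = CURRENT.with_suffix(".tmp")
    tmp.unlink(missing_ok=True)
    tmp.symlink_to(RELEASES / version)
    tmp.replace(CURRENT)  # atomic swap on POSIX filesystems
    subprocess.run(["systemctl", "restart", "billing-api"], check=True)

def healthy() -> bool:
    """Basic post-deployment sanity check; replace with real smoke tests."""
    result = subprocess.run(
        ["curl", "-fsS", "http://localhost:8080/health"], capture_output=True
    )
    return result.returncode == 0

def deploy(new_version: str, previous_version: str) -> None:
    activate(new_version)
    if healthy():
        print(f"{new_version} deployed and healthy")
        return
    # The change is not behaving as expected: roll back within the window,
    # then treat the event as a process failure to review in the retrospective.
    print(f"{new_version} failed its health check, rolling back to {previous_version}")
    activate(previous_version)

# deploy("2015.01.2", "2015.01.1")
```

The smaller the component, the cheaper this swap-and-restore is, which is exactly why frequent, smaller releases give OPS more control rather than less.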

Another major factor impacting multiple production releases is the long Green-Zone window. By moving to quicker, leaner deployment cycles, these downtime requests can be substantially reduced, and we can eventually aim to reach a stage where changes are deployed online without extensive downtime.

Similarly, frequent maintenance activities when there are no planned deployments carry effectively the same risk of environment unavailability. Analyzing these and moving to a state where such activities are conducted online, without a Green-Zone, would alleviate maintenance downtime and ensure seamless availability, adding stakeholder value.

I have not yet touched on Continuous Integration or on automating the deployment process, for obvious reasons: we cannot achieve the objective of Continuous Change without an integration process in place where incremental code changes are built into a package swiftly and seamlessly, and even sanity or basic testing can be integrated to ensure the quality of the builds. The same goes for automating deployments: develop and integrate the tools required for rapid deployments. We have seen instances where automating a single activity or revisiting a single process has created substantial time savings.
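As an illustration of that integration step, here is a minimal sketch of a build-and-sanity-check script that a CI job could run on every incremental change: build the package, run only the basic smoke tests, and hand the artifact to the deployment tooling. The repository layout, test selection and package flow are assumptions, not a specific CI product.

```python
# Sketch: build the package on every incremental change, run basic sanity
# tests, and publish the artifact for the (separately automated) deployment.
# The 'dist' directory and 'tests/smoke' suite are hypothetical.

import subprocess
import sys
from pathlib import Path

ARTIFACT_DIR = Path("dist")

def run(cmd: list[str]) -> None:
    """Run a command and stop the pipeline immediately if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main() -> int:
    try:
        # 1. Build the package from the incremental change.
        run([sys.executable, "-m", "build", "--wheel", "--outdir", str(ARTIFACT_DIR)])

        # 2. Sanity/smoke tests only; the full regression suite runs later.
        run([sys.executable, "-m", "pytest", "tests/smoke", "-q"])

        # 3. Hand the artifact over to the deployment tooling (placeholder).
        artifacts = sorted(ARTIFACT_DIR.glob("*.whl"))
        print(f"build ok, artifacts ready for deployment: {[a.name for a in artifacts]}")
        return 0
    except subprocess.CalledProcessError as err:
        print(f"pipeline step failed: {err}", file=sys.stderr)
        return 1

if __name__ == "__main__":
    sys.exit(main())
```

Even a small script like this, triggered on every change, removes one manual hand-off and is the kind of single-activity automation that tends to produce outsized time savings.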

While it is easy to propose changes to the release process or advocate rapid deployment cycles, we have to ensure adequate checks and balances are in place, compliance is adhered to, and audit trails are created.

Now to the most important measure of this article, Cost Savings!

Proponents of dinosaur releases would argue that the common activities a single release entails, such as builds, testing, release management, OPS availability and the reduced Green-Zone, cut costs, and that multiple runs to PROD would logically increase costs.

But it is definitely the other way around: we would no longer need to feed the Dino. It becomes cost-effective rapid delivery with optimized resources, a leaner process where DEV and OPS work together to ensure quality, value-added deliverables are made available to the business faster. This cultural change and collaborative effort also creates new vistas of growth and provides a perfect platform to excel. The cost-benefit analysis would eventually come out in favor of Continuous Delivery.

Opportunity cost is another variable that has not been considered: an innovation or a new feature made available to customers quickly, with a reduced time-to-market cycle, provides a much-needed edge in these competitive times. I recall conversations where a few must-haves were pushed out for various reasons, frustrating the business.

#DEVOPS was never meant to replace the traditional approach to software development; rather, it is an efficient use of existing resources within a collaborative model to rapidly deliver quality software products. A major cultural shift in our approach is required, and to reach that goal we need to embrace the shades of gray, many a time more than 50 (sorry, couldn’t resist), when things are not truly black or white.

Thank you for your time.

Visit my blog https://askhurram.com/