What IT service management can learn from DevOps


And vice versa.

There is a battle raging in management circles about the best way to approach the business of IT, with two major schools of thought vying for dominance: IT Service Management (ITSM) and DevOps.

ITSM favours a formal, planned process for organising IT, while DevOps emphasises a more fluid, dynamic style, free from the shackles of bureaucracy. They are, if you ask the most strident advocates of either approach, the antithesis of each other.

So who is right? And what even are these approaches, anyway?

What is ITSM?

Information Technology Service Management began life inside IBM in 1972, the fruit of eight years of research into Information Systems Management Architecture (ISMA), culminating in the publication of A Management System for the Information Business in 1980.

These ideas were further built upon in 1986 in the United Kingdom by the Central Computer and Telecommunications Agency (CCTA) - a government body given the unenviable task of improving IT quality and efficiency. CCTA by this stage had already developed the Structured Systems Analysis and Design Method (SSADM) for software development, and the PRojects IN Controlled Environments (PRINCE) approach to project management.

A team led by Peter Skinner and John S. Stewart worked with several consulting companies, including IBM, to develop the somewhat cumbersomely named Government IT Infrastructure Management Method, or GITTMM. IBM supplied the CCTA team with a set of IT service management checklists derived from their work on the aforementioned ISMA, and the team expanded on these concepts to define known good (‘best’) practice. The guiding principle was, according to Stewart, simple:

“A standardised approach could be tailored by individual organisations as a basis for their own, repeatable processes.”

GITTMM was later renamed the IT Infrastructure Library (ITIL) for two main reasons: firstly, because it wasn’t a method, and secondly, because having the word ‘government’ in the name would have discouraged adoption of the ideas outside government departments.

ITIL essentially defines the processes an IT organisation should follow for everything from deploying a new application to defining security policy, tracking software licences and handling support calls. And over three decades after the need for such an idea was first recognised, ITIL is today more or less ubiquitous. If judged by Stewart’s conception of the Library’s central principle, the project would have to be considered a resounding success.

And yet the original problem it set out to solve – that IT projects were not living up to expectations of high quality or low cost - doesn’t appear to have been solved.

After nearly 30 years of ITIL in practice, we still read of routine and large-scale IT failures, such as the Queensland Health payroll debacle, Victoria’s failed CenITex experiment, and a series of outages - some over a week long - at institutions as proud as the Royal Bank of Scotland.

The pages of iTnews are continually filled with arguments as to why.

Networking consultant Greg Ferro is an outspoken critic of ITIL.

“The fundamental ITIL premise is that technology work can be segmented like machines or work functions in a factory where each task can be assigned to a machine with fixed human resources applied to the task and funding applied to the machine,” he says. “This simply doesn't work when the factory machines and processes undergo transformational change every three to five years.”

“ITIL is not about delivery or excellence. In my experience, ITIL and PRINCE2 prevent excellence through a focus on deliverables and cost management.”

He is adamant that ITIL has had its day and it’s time to move on.

“Over the last decade I have worked for tens of companies that use ITIL/ITSM models and all of them were miserable and unhappy workplaces,” he says. “When I have worked in companies that don't use ITIL, I found they were great places to work while real business value was being created and delivered.

“It's about happiness. ITIL equals misery and unhappiness. Who wants that?"

Read on as we explain the opposing school of thought...

What is DevOps?

Many who have expressed dissatisfaction with ITIL processes have found that the DevOps model – an extension of the agile methodology – ticks many of the boxes ITIL and IT Service Management can’t.

DevOps as a concept rose to prominence in 2009, largely through the launch of “DevOps Days” in Belgium by Patrick Debois. DevOps is a portmanteau of ‘development’ and ‘operations’, which captures what appears to be the core ethos of the approach: development and operations working together.

The greatest difficulty with DevOps is that no one, not even its practitioners, seems quite sure of what DevOps is exactly. Some call it a software development method, some an approach to IT management, while others call it a “global movement”.

The common thread in descriptions of DevOps is that it is largely a reaction to the siloed approach taken by many companies when they implement ITIL processes.

In the ITIL world, developers are responsible for updates and changes, while IT operations are responsible for keeping everything running. This approach often leads to mismatched incentives, where operations is motivated to reduce change (and keep things stable) while development is all about changing things.

The rise of the Agile approach to software development in the early 2000s - and its emphasis on rapid release cycles - placed pressure on the formal change management and service transition processes recommended by ITIL. If a change board only meets once a week, releases into production cannot happen any faster. But if a company follows Agile to the letter, as many online companies do, you might release changes into production multiple times per day. The two worlds don't fit together very neatly.

So just as the Agile approach to software development replaces the waterfall method of SSADM, DevOps aims to replace the slow formality of ITIL-like processes when it comes to operations. DevOps requires that developers own the full lifecycle of an application, from development, through testing, deployment, and production support, all the way to decommissioning.

Large, online-centric companies like Flickr have shared their approach of merging development and operations at various conferences, and the technique has resonated with those also in a hurry to deliver value to customers.

A cornerstone of the DevOps approach is automation. Without it, large organisations couldn't conceive of such fast release cycles without introducing errors. Tools such as Puppet, Jenkins, and Selenium are all geared towards automating what were previously human-centric tasks. Instead of a change board of humans that meets once a week, automated software testing determines if a release is ready for deployment to production.
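The “tests as change board” idea can be sketched in a few lines of Python. This is an illustration only, not any specific product’s pipeline, and the check commands are hypothetical stand-ins: a release is promoted only if every automated check exits cleanly.

```python
import subprocess
import sys

def ready_for_production(checks):
    """Return True only if every automated check passes.

    This stands in for the weekly change board: the test results,
    not a meeting of humans, gate the release. Each check is a
    command line; a non-zero exit code fails the whole release.
    """
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            return False
    return True

# Hypothetical checks a pipeline might run before deploying.
checks = [
    [sys.executable, "-c", "assert 1 + 1 == 2"],  # stand-in unit test
    [sys.executable, "-c", "import json"],         # stand-in smoke test
]

print("deploy" if ready_for_production(checks) else "hold")
```

In practice this gatekeeping role is played by CI servers such as Jenkins, which run the full suite on every commit and only promote builds that pass.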

Automation tools have existed for decades, but their use has always been somewhat limited. The humble UNIX make utility was created in 1976, and subsequent tools, even HP’s internal MEDUSA tool, can all claim seniority over more recent tools like Puppet, Ansible, and Jenkins.

But as with so many technologies, arguably the previous tools arrived too early. There just wasn’t enough widespread need for them to be used outside of specific niches or companies. DevOps style automation has exploded in popularity because the timing was right.

Automation has very much been in demand since the turn of the millennium, initially from large online companies like Google, Yahoo, Facebook, and others. The success of these firms depended on economies of scale, and none could afford to hire a great many humans to achieve it. The task of curating search results, running AdWords auctions, and showing you which of your friends just became single is impossible for humans to do when you have millions of members. Paying one really good developer three times market rate to write software that replaces 15 systems administrators feels like a good deal.

Automation also fitted the culture of online development in the ‘digital era’. Techniques and code originally developed by these large, online companies have been released into the world at large (consider Apache Hadoop and the Yahoo! User Interface Library), usually long after they conveyed any significant competitive advantage to the original company. It is easier for a tool or practice to become widely adopted if many people know that the tool exists, and even easier if the cost of acquisition is low. Today’s ‘digital’ operations within banks or telcos are often recycling code developed for social networks a decade earlier.

By the late 2000s, a critical mass of tools and techniques had arisen that would begin to seriously challenge the dominance of ITIL.

But does it really have to be a stark choice between ITIL or DevOps?

Can the two approaches co-exist? Read on as business leaders discuss this option...

Lessons from past, present and future

Cultural changes aside, the rise of DevOps is rooted in the benefits that accrue from increased automation.

It avoids many of the famed communication issues between silos, mostly because in most cases, the humans that might have communicated have been replaced by computers. Unlike humans, computers do exactly what they’re told, so there is no such thing as miscommunication. Computers also do repetitive tasks with great accuracy. Would an ITIL approach still work if most of the humans were replaced with Jenkins, Puppet, and shell scripts?

Don Meij, CEO of Domino’s Pizza, says the operational problems besetting most IT organisations are usually more to do with the implementation than the choice of approach.

“Too many companies become all about process,” he says. “CEOs fall in love with process. It’s almost how you justify what you do. It’s the cancer of an organisation if you don’t manage it properly.”

Peter Nikoletatos, acting IT director at the University of New England, says that IT Service Management and ITIL will still be relevant in the future. But organisations need to get better at how they apply it.

“ITIL is a framework, it requires tailoring,” he says. “Most organisations err on the side of too much sophistication. This makes things too bureaucratic.

“The enthusiasm for execution when implementing ITIL should be tempered. You need to put a realistic timeline in place. Build the services incrementally. Start with the easy things: incident management and problem management.

“Ground zero to fully deployed ITIL could take two to three years. That’s a significant investment in time. You don’t have to do all of it.

“Not every organisation lends itself to Agile,” he continued. “ITIL is just a way of thinking about a problem, but not the only one. ITIL is convenient because most people understand it. With Agile, we’re still learning how to use it. It takes a few years to build evidence that it works.”

Confounding everything is the rapid pace of change in the IT industry generally, courtesy of Moore’s Law. The kind of automation possible today was unthinkable in the mid-80s, while at the same time the explosion of data and data processing has created new problems that didn’t exist then. With the landscape shifting under your feet this much, can one approach to managing things really cover all bases?

To the delight of consultants everywhere, the answer on which of ITIL and DevOps to pursue seems perpetually to be: “It depends.”

As the old project management joke goes: you can usually only choose two out of three variables - fast, cheap and good. Both ITIL and DevOps purport to have the same ultimate goal: better business outcomes. It could be that ITIL has been optimised for cheap and good, with less emphasis on speed, while DevOps offers a different optimisation point - much faster and arguably cheaper. The question many are waiting to answer is whether it will deliver to the same quality.

A more constructive way to make a choice between the two is to assess the cost of change for any given solution.

Software benefits from rapid change because the cost of change is low. The lower the cost of change, the more change you can afford to contemplate. But the hardware it runs on is rarely so easy to change. Those who deploy hardware still need to consider the longer-term ramifications of their actions, or at least the cost impact of getting it wrong and having to change.

It makes sense to use the technique that matches the amount and cost of change in your environment. Something that doesn’t change often, and costs a lot when it does, requires careful planning and change management. But for things that are relatively easy and cheap to change, trying many different options quickly makes a lot more sense.
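That rule of thumb can be made concrete with a toy decision function. The thresholds and labels below are illustrative assumptions, not drawn from ITIL or any DevOps text:

```python
def suggested_approach(cost_of_change, changes_per_year):
    """Toy heuristic: match the management style to how much a
    change costs (in dollars) and how often changes happen."""
    if cost_of_change >= 100_000 and changes_per_year <= 12:
        # Expensive, infrequent change: plan it carefully.
        return "formal change management (ITIL-style)"
    if cost_of_change <= 1_000 and changes_per_year >= 50:
        # Cheap, frequent change: automate and iterate.
        return "rapid automated iteration (DevOps-style)"
    return "mixed: automate the cheap parts, plan the costly ones"

# A data centre refit vs. a web application's frequent releases.
print(suggested_approach(500_000, 2))
print(suggested_approach(100, 300))
```

The point is not the specific numbers but that the choice is driven by the economics of change, not by ideology.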

On that basis, the need for complete reinvention is somewhat overstated. There is nothing in ITIL that says processes can’t be automated. It is, after all, just a framework, ready to be adapted to the specifics of your business, while still providing a standardised way of thinking about business problems.

ITIL folks stand to learn a lot by borrowing ideas from DevOps, just as DevOps people tend to recycle their configuration management software and trade Puppet recipes over the internet.

Compare and Contrast

                             ITIL                      DevOps
Optimised for                Economies of scale        Speed to market
Cost of implementation
Time to implement            2-3 years                 6+ months
Staffing levels required     Medium to High            Low to Medium
Maturity                     Well established          Still evolving
Available market skills      Widely available          Few, but growing fast
Better for                   Standardised,             Rapid, frequent change
                             repeatable processes


Copyright © iTnews.com.au. All rights reserved.
