Bernard Golden: Five Rules for Today's ALM

Bernard Golden, VP of Strategy, ActiveState Software | Monday, 26 September 2016, 09:57 IST

With all of today’s buzz about containers, cloud computing, and how developers are the new kingmakers, one might be forgiven for thinking that ALM is passé. But nothing could be further from the truth. In fact, the elements of ALM -- end-to-end tracking of applications from requirements to deployment, with supporting tooling and integrated tracking and reporting -- are more relevant today than ever before.

However, it is true that the environment in which ALM operates is vastly different today than a decade ago. The pace of change in today’s business environment is visibly faster than ten years ago, and the role of IT is significantly more important, which means ALM must evolve to meet today’s requirements.

Here are five rules for today’s ALM:

Applications are the business

Ten years ago, IT was primarily concerned with automating internal processes -- inventory, payroll, HR, and so on. And when IT looked outward, its systems were primarily informative: websites and document display, for example. Since IT carried an inward focus, the speed of its application process was not critical to the operation of the overall company. Moreover, because infrastructure took so long to provision, any application lifecycle inefficiencies were typically masked by the provisioning process.

However, neither of these two factors is still relevant today. Provisioning computing infrastructure is no longer a weeks-long or months-long process; the rise of cloud computing makes infrastructure available in mere minutes. This exposes application development, deployment, and management as the “long pole” constraining IT performance.

Even more important, IT is no longer merely a cost center devoted to automating internal processes. Instead, applications are increasingly the way companies and their customers, partners, and suppliers interact. In other words, applications are the business. That means IT is now directly responsible for increasing the speed of the application lifecycle.

Agility is the new norm

I recently heard a telecom executive state that his company used to spend ten years planning new services and then roll them out with an expected 50-year lifespan. Today, he said, lifespans are two years and shrinking. That shrinking product cycle is true for everyone, which means IT has to operate faster, with more agility, than ever before.

Most people assume agility means rolling out applications faster than in the past. That’s true, but only partly. Just as important are other aspects of business agility: the ability to prototype quickly, to deliver business experiments into production quickly and get immediate feedback, and to set up (and terminate) business partnerships more easily. Each of these requires IT agility: the ability to develop applications fast, roll them out quickly, and modify or terminate them speedily.

It’s a new business world, and IT needs to adjust to the new pace of the economy.

New architectures require new tools and processes

It’s obvious that tools and processes built for yesterday’s IT -- monolithic, difficult to change, and dependent on repetitive manual work -- are unsuitable for today’s new normal. Simply put, IT requires a wholesale replacement of its tools and processes with ones capable of supporting the necessary business agility.

Cloud computing is necessary, of course. But that’s only the beginning. Monitoring and management components must be capable of tracking and controlling dynamic application topologies that are subject to great variability of load and run on unstable infrastructure. Streamlined application development and deployment pipelines -- combining continuous integration, continuous deployment, and programming frameworks -- reduce the need for developers to deal with underlying infrastructure.
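The streamlined pipeline described above can be sketched at its simplest as an ordered chain of automated stages, where any failure halts the release. This is a minimal, hypothetical illustration -- the stage names and functions are assumptions for the example, not from any particular CI/CD product:

```python
# Minimal sketch of a CI/CD pipeline: build, test, and deploy run in
# order, and a failing stage stops the release before it reaches users.
# All function names here are illustrative.

def build() -> bool:
    print("compiling and packaging the application")
    return True

def test() -> bool:
    print("running the automated test suite")
    return True

def deploy() -> bool:
    print("rolling the new version out to the cloud environment")
    return True

def run_pipeline() -> bool:
    """Continuous integration and deployment as one automated flow."""
    for stage in (build, test, deploy):
        if not stage():
            return False        # a failing stage halts the release
    return True

print(run_pipeline())
```

Real pipelines add far more -- artifact versioning, environment promotion, rollback -- but the core idea is the same: the path from commit to production is automated end to end, so developers never touch the underlying infrastructure directly.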

From a process perspective, the trouble ticket is dead. Standard processes should be automated with manual handling used only for exceptions. And application artifacts should be shared across milestones so that each group works with already existing application components rather than recreating them each step of the way.
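The "automate the standard path, reserve humans for exceptions" principle can be sketched as a simple routing rule. This is a hypothetical example -- the `DeployRequest` type, the environment names, and the thresholds are all assumptions made for illustration:

```python
# Hypothetical sketch: standard requests are auto-approved with no trouble
# ticket; only exceptional requests are queued for a human reviewer.

from dataclasses import dataclass

STANDARD_ENVS = {"dev", "staging"}   # environments safe to auto-approve (assumed)
MAX_AUTO_INSTANCES = 4               # larger requests need a human (assumed)

@dataclass
class DeployRequest:
    app: str
    env: str
    instances: int

def route(request: DeployRequest) -> str:
    """Automate the standard case; escalate only the exceptions."""
    if request.env in STANDARD_ENVS and request.instances <= MAX_AUTO_INSTANCES:
        return "auto-approved"       # no ticket, no manual handling
    return "manual-review"           # the exception path

print(route(DeployRequest("billing", "dev", 2)))         # auto-approved
print(route(DeployRequest("billing", "production", 8)))  # manual-review
```

The design choice matters more than the code: the default path requires no human touch at all, and the review queue shrinks to genuinely unusual requests -- the inverse of the trouble-ticket model, where every request waits on a person.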

Don’t build the new legacy

Many IT organizations, intrigued by the benefits they’ve seen from open source software, look at the previous section’s list of requirements and decide to build their own customized toolchain. While this makes sense for some enterprises, for the vast majority, constructing a DIY application pipeline means creating yet another legacy system that will, someday, represent technical debt and an ongoing investment requirement.

IT has to figure out where it adds value in the overall business and focus its energies there. Every IT organization has constraints, particularly in the area of talent. Devoting precious resources to a homegrown application toolchain that provides no significant differentiation is a mistake. Far better to use a commercial or commercially supported open source product for the application toolchain so that talent can be directed toward application functionality. Gartner named 2015 the year of PaaS, so it makes sense to spend some attention evaluating which PaaS offerings can be leveraged to avoid the homegrown legacy syndrome.

ALM: More important than ever

As I noted at the beginning of this piece, some consider ALM a passé concept, made obsolete by DevOps and cloud computing. Nothing could be further from the truth. While IT tools and processes are changing dramatically, the need for integrated, end-to-end application tracking and management is more important than ever.

If you believe, as I do, that IT staffs in the near future will be subject to the “Law of 10X” -- they’ll need to manage ten times as many applications, ten times as large, with ten times the application volatility as in the past -- then you’ll agree with me that IT organizations need structured ALM processes to succeed. Only with the right tools at hand do IT organizations have a chance to succeed in today’s business environment.