How to Manage Application Enhancements

By John Parkinson  |  Posted 10-24-2006

One of the endless challenges for a large corporate software engineering organization is to develop an effective process for handling the steady stream of small changes, fixes and enhancements that accumulate around installed software, both custom developed and commercial off-the-shelf (COTS), no matter how good it is initially. Typically, these changes can be handled by a single engineer, don't require much resource or elapsed time, and are generally pretty well defined. Users know that they are not asking for much, and don't, therefore, expect to have to wait very long to get their change requests implemented.

The problem for the engineering managers, however, is that each change generally requires considerable impact analysis before it is made (to see what, if anything, the change could adversely affect) and considerable testing after it has been implemented but before the new "version" of the software goes into production (to prove that nothing actually broke). This "engineering process" effort can easily dwarf the core engineering resource requirements as each small change includes all the aforementioned associated "overhead." From a release manager's point of view, it's better and much more efficient to accumulate all of these small changes into the release cycle for large changes—which may well include many of the small changes anyway—and simplify regression testing and verification. But that makes users wait for their "small" changes to get implemented, and that makes them less than happy.

Try to deal with each change individually, however, and you can easily create a version control, configuration management and testing morass.

So what to do?

Back when I was helping put together plans for an "application management" practice—basically an outsourcing business focused on supporting installed applications for clients—I looked in detail at what kinds and levels of effort would be involved in doing this kind of work, and how to design and deploy an efficient process for it. Here's what we found.

There are basically four kinds of "changes" that need to be managed:

  • Application Enhancement: The constant stream of small changes that occur as the business adjusts to changes in the marketplace, to regulation and compliance and to growth (or shrinkage) in parts of the business.
  • Performance Engineering: There are changes that occur because workloads are dynamic and information accumulates. As volumes of stored data grow and numbers of users increase, application and platform parameters need to be adjusted, often in a complex coordinated fashion, so that response times are maintained and storage is used effectively.
  • Sustaining Engineering: These changes result from the regular stream of vendor supplied patches, fixes and version upgrades that have to be accommodated if you want to stay current on hardware and software versions and are generally required for continuing maintenance and support. Not everyone implements every patch, fix and upgrade right away, and the scale of effort required varies. But sooner or later you have to get caught up and a steady, well managed process may be preferable to an occasional major upheaval.
  • Emergency Repair: These are changes you have to make because something broke unexpectedly. You could argue that a well designed operational environment with adequate performance instrumentation and good monitoring of performance against predicted models (you do have all of this, don't you?) wouldn't have emergencies, but every organization we looked at had a few, largely the result of human error somewhere in the system. These changes have to be done right away and done as fast as possible so that you can be back in operation with minimum disruption.

    Over the range of companies I looked at between roughly 1998 and 2004, there was quite a lot of individual variation in the effort devoted to each of these aspects of application management. But on average the breakdown went something like this:

  • Application Enhancement: 65 percent
  • Performance Engineering: 5 percent
  • Sustaining Engineering: 20 percent
  • Emergency Repair: 10 percent
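
    As a back-of-the-envelope illustration, the average breakdown above can be applied to a planning budget. This is a minimal sketch; the 10,000-hour annual budget is an invented example, not a figure from the data.

```python
# Average effort shares reported in the article; the 10,000-hour budget
# below is a hypothetical figure for illustration only.
SHARES = {
    "application_enhancement": 0.65,
    "performance_engineering": 0.05,
    "sustaining_engineering": 0.20,
    "emergency_repair": 0.10,
}

def allocate(total_hours: float) -> dict:
    """Split an annual application-management budget across the four categories."""
    return {category: total_hours * share for category, share in SHARES.items()}

budget = allocate(10_000)  # roughly 6,500 hours go to enhancements alone
```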

    One interesting statistic that was universal was the growth trend in the aggregate application management effort—it was positive everywhere. This was a major concern (and hence drove the idea of a service to supplement internal resources) because it was clear that the demand for application management was growing faster than the improvements in productivity that were being achieved. Every new application deployment brought with it a "long tail" of application management needs and sooner or later, the demand would exceed the resources available. In many cases, the better an IT department was at satisfying demand for new capabilities, the worse the application management problem became. Even if you never add new functionality, however, the "entropy effect" that requires long term support for installed software will still eat up a lot of resources.

    Let's look inside the 65 percent application enhancement number. We were interested in the best way to manage the stream of change requests and started by looking at how much engineering and associated effort went into satisfying requests on a historic basis (at least where there was reliable data—a far from universal feature of "maintenance" processes). We discovered that there were generally three kinds of enhancements being made:

  • Small: The engineering effort was less than 100 hours and the total effort was less than 400. This accounted for about 10 points of total effort, but 65 percent of total enhancement requests.
  • Medium: Requiring between 100 and 1,000 hours of engineering effort and less than 2,500 hours of total effort. This group accounted for 25 points of the total effort and 30 percent of total requests.
  • Large: Requiring more than 1,000 hours of engineering effort and over 2,500 hours of total effort. Thirty points of resource but only 5 percent of total requests fell here.
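
    The size bands above can be expressed as a simple triage rule. This is a sketch only; how to treat requests that sit exactly on a boundary (say, exactly 100 engineering hours) is my assumption, since the bands give ranges without specifying it.

```python
def classify_enhancement(engineering_hours: float, total_hours: float) -> str:
    """Bucket an enhancement request using the small/medium/large bands above.

    Boundary handling (<= versus <) is an assumption; the source gives ranges only.
    """
    if engineering_hours < 100 and total_hours < 400:
        return "small"
    if engineering_hours <= 1_000 and total_hours < 2_500:
        return "medium"
    return "large"

classify_enhancement(40, 160)       # -> "small"
classify_enhancement(600, 1_800)    # -> "medium"
classify_enhancement(3_000, 9_000)  # -> "large"
```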

    Remember that the total amount of resources going into these activities is growing steadily (by about 11 percent a year according to the data we collected) but the allocation across types of enhancements remained pretty constant.
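
    To see what 11 percent annual growth implies, compound it over a few years. The starting figure of 10,000 hours is invented for illustration; only the growth rate comes from the data we collected.

```python
def projected_effort(base_hours: float, years: int, rate: float = 0.11) -> float:
    """Total application-management effort after compound growth at `rate` per year."""
    return base_hours * (1 + rate) ** years

projected_effort(10_000, 7)  # an 11 percent rate roughly doubles the workload in seven years
```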

    Most organizations grouped medium and large enhancement requests into periodic "releases" spaced anywhere from 90 days to a year apart. Users appeared resigned to the consequent delay, and most attention was focused on ensuring that they got into a release far enough ahead of time that their changes would be available by the time they were needed. But the small enhancement requests were a problem for everyone. Waiting a year for a change that took only a day or two to implement just didn't seem reasonable. Back to our dilemma about what to do.

    It turns out that there are several things you can, and should, do in combination to solve this issue.



    Getting a Handle on System Change Management

    First, recognize that a lot of these requests are for things that users could do for themselves if they had the right tools, a little training and support and a "clean" information environment. Changes to reports, even changes to web pages and transaction screens can often be handled by the requestor, if the information management environment is robust, the platform architecture is solid and the right tools are deployed. IT departments don't seem to want to take this route very often (at least I have seen a great deal of resistance to it) because the necessary cleanup efforts almost always require medium or large changes to be made, and there's no budget for such efforts (or willingness to admit that they are needed). But the productivity gain from an effective self-service strategy is significant, and the improvement in customer satisfaction equally worthwhile. In addition, doing some things themselves soon teaches users to assess the real value of everything they ask for.

    Second, many small changes cluster around specific sections of application code, so an investment in improving these code sections to aid understanding pays big dividends. Application understanding is the largest single engineering task associated with making a small change to an installed application. Over time, this investment can reduce the engineering effort by up to half.

    Third, extensive automation of the testing, code management and build processes that support small changes can trim the total effort needed to deliver a small change by over two-thirds.
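
    Taken together, the second and third measures compound. The sketch below treats the two claimed savings as independent multipliers, which is a simplifying assumption (one applies to engineering effort, the other to total delivery effort); the 400-hour baseline is just the small-change ceiling reused as an example.

```python
def effort_after_improvements(baseline_hours: float,
                              understanding_cut: float = 0.5,      # "up to half"
                              automation_cut: float = 2 / 3) -> float:  # "over two-thirds"
    """Remaining effort once both reduction factors are applied in sequence.

    Treating the cuts as independent multipliers is a simplifying assumption.
    """
    return baseline_hours * (1 - understanding_cut) * (1 - automation_cut)

effort_after_improvements(400)  # about 67 hours left of a 400-hour small change
```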

    Fourth, implementing many of the techniques of "extreme programming"—especially pair programming, test-driven development and refactoring—helps the staff assigned to the small enhancements process to be both productive and quality focused. It might seem foolish to assign two engineers to an effort that's only expected to take a few hours, but because code understanding is such a critical part of the process, using two people actually speeds things up. It also gives you the opportunity to pair an experienced engineer with a novice, broadening the pool of engineers who understand the code.

    We implemented all of these strategies (to the extent possible in different client situations) within the applications management practice and for a number of clients. Although the mix varied somewhat from place to place, the overall effect was to support a continuous stream of small enhancements delivered quickly at high levels of quality. And a lot of happy users.

    Of course there were some unanticipated consequences as well. Because we effectively reduced the effort required to implement a small change, many of the smaller "medium" changes got scoped into the "small" category, shifting that category's share of total effort from 10 points to closer to 20. We also had to invest in additional support desk resources and user training to allow the "self service" model to work, and to collaborate with HR departments to assess which end users simply couldn't learn to use the new tools safely and effectively.

    But overall, the efforts were extremely worthwhile and pushed off into the future the day when 100 percent of engineering resources will be needed to enhance what we already have deployed. We'll get to that problem a little further down the road.

    John Parkinson is the former Chief Technologist at Capgemini and has advised hundreds of large companies on their IT strategies.
