Collaborations That Count

John Parkinson

2009 will mark the 40th anniversary of my first IT-related paycheck. When I started, much of the technology was pretty new, and every kind of capacity was scarce. So code optimization was an essential part of the development and test cycle for virtually all the software we wrote. We had to focus on memory footprint, context switching and execution path length because the targeted machines had relatively little memory, very slow direct-access storage and relatively slow processors.

This took a lot of work, specialized skills and time. As processors became faster, memory sizes grew and disk-drive performance improved, developers came to rely more and more on their compilers to get the optimizations right. Tuning skills atrophied; speed to market mattered more than efficient code, and Moore's Law seemingly eliminated the need for hand tuning.

For the past three decades, this has been a good trade-off for the mainstream of corporate IT, and it remains so in many instances today. But I don't work in the mainstream of corporate IT, and the code we develop runs workloads that must either execute complex transactions very fast to meet customer service levels or support batch processes that can take hours or days to complete.

Traditionally, we have thrown hardware at the problem, but we are getting to the financial and technical limits of that option. As the volumes of data we process continue to grow and the complexity of the products we develop increases, we are once again faced with the need to tune and optimize our code, without, however, sacrificing time to market.

Technology has changed a lot in the past 30 years, and the tuning skills we now need don’t reside in the heads of individual programmers anymore. We need vendor technology specialists to look at the performance characteristics of hardware, firmware and operating systems.

We need application environment specialists to look at how we use development tools. We need application architects and designers to test our designs for performance potential. And we need integration specialists to ensure that the pieces of the puzzle work together.

Recently, we were challenged to speed up a very large-scale batch system rather than add more hardware to the platform. The code is already highly parallelized, and while adding more processors should, in theory, have improved performance, we were seeing diminishing returns and had no appetite for the added cost.
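The column doesn't name the cause of those diminishing returns, but Amdahl's law is the standard explanation: whatever fraction of a job cannot be parallelized caps the achievable speedup, no matter how many processors you add. A minimal sketch, with an assumed 10 percent serial fraction chosen purely for illustration:

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Amdahl's law: speedup is limited by the fraction of work
    that must run serially."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Assumed 10% serial fraction -- illustrative only, not a figure
# from the batch system described in the column.
serial = 0.10
for n in (4, 8, 16, 32, 64):
    print(f"{n:>3} processors -> {amdahl_speedup(serial, n):.2f}x speedup")

# Returns flatten quickly -- roughly 3.08x, 4.71x, 6.40x, 7.80x, 8.77x --
# even as the hardware bill keeps growing.
```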

By bringing together the chip set, hardware, operating system and development environment vendors and reviewing our application development team’s approach, we improved the execution efficiency of the code by more than one-third. This, combined with changing some design details, reduced execution times by nearly 50 percent. On a job that runs for 18 hours a day and must be completed in a 24-hour window, that’s a big gain for almost no cost.
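As a rough back-of-the-envelope check on those figures (the column reports only "more than one-third" and "nearly 50 percent", so the split below is illustrative, not the actual breakdown):

```python
# Rough arithmetic behind the reported gains -- illustrative only.
baseline_hours = 18.0  # daily batch run that must fit a 24-hour window

# If "one-third better execution efficiency" means one-third more work
# per unit time, the same job takes about 1 / (1 + 1/3) = 75% as long.
after_tuning = baseline_hours / (1 + 1 / 3)   # ~13.5 hours

# Design changes supply the rest of the roughly 50% overall reduction.
after_design = baseline_hours * 0.5           # ~9 hours

print(f"Tuning alone:    {after_tuning:.1f} h")
print(f"Tuning + design: {after_design:.1f} h of a 24 h window")
```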

The insight here is not that we succeeded, but that we received enthusiastic help from our vendors, despite the fact that success would inevitably decrease the immediate potential for more sales. With their help, we avoided several hundred thousand dollars of new hardware and software purchases, and we identified a series of opportunities that we can pursue in the coming months.

That’s the kind of collaboration with vendors that really counts.