I've never been a fan of best-of-breed infrastructure approaches. Too many moving parts usually outweigh the promised gains from using the best component in every slot.
Nor am I a fan of technological monocultures. No vendor is the best at everything all the time. So for most of my career, I have tried to balance the two approaches: on one hand, enough of a vendor mix to keep everyone honest, plus get some benefits from truly superior technologies; on the other, sufficient simplicity to keep total cost of ownership and operations, plus reliability, at a reasonable level.
That may be about to change.
One of the consequences of a multivendor (or even a single-vendor, multiple-product-line) approach is the constant need to integrate a portfolio of technologies that all change at their own rate. Recently I have been looking at the cost of this constant update/integrate/test effort (the keep-the-lights-on, or KTLO, marathon we all run) to get an idea of what a simpler architecture would save us.
When you consider that we (like pretty much everyone else) have a stack with compute, storage and network hardware; hypervisor; a couple of operating systems; various common services for directory, identity and access control; monitoring, logging and management; database; application servers; Java VM; and custom application code sets, there's plenty of room for mismatches as we change out the various parts on schedules that seem designed to maximize inconvenience. And you can't opt out for very long--at least not if you want support.
After analyzing a couple of years of work effort data, I have come to the conclusion that "integration" efforts cost us between 50 percent and 80 percent of our KTLO budget, which is perhaps 40 percent of our total IT budget.
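The back-of-the-envelope arithmetic behind those figures is worth making explicit. A quick sketch, using only the rough shares quoted above:

```python
# Back-of-the-envelope math using the rough shares quoted above.
ktlo_share_of_total = 0.40                       # KTLO is ~40% of the total IT budget
integration_low, integration_high = 0.50, 0.80   # integration's share of KTLO

low = integration_low * ktlo_share_of_total      # 0.20
high = integration_high * ktlo_share_of_total    # 0.32

print(f"Integration consumes {low:.0%} to {high:.0%} of the total IT budget")
# → Integration consumes 20% to 32% of the total IT budget
```

In other words, roughly a fifth to a third of every IT dollar goes to gluing products together rather than delivering anything new.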
That's a bigger number than I expected. And the trends aren't encouraging. Despite a major simplification push, I'm not seeing much of a gain in KTLO. There's just too much change to manage, even among fewer vendors' products.
And did I mention optimization efforts? There are more than 5,000 tunable parameters in our infrastructure. Different sets of parameters can easily step on each other as we tune our infrastructure for optimum throughput. It's more art than science, and it's a lot of work.
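As a toy illustration of how tuning efforts collide (the parameter names here are invented for the example, not drawn from our actual stack), two teams applying their own tuning profiles in sequence can silently clobber each other's settings:

```python
# Hypothetical illustration: two tuning profiles, each touching its own
# slice of the ~5,000 parameters, can silently overwrite each other's
# settings when applied in sequence. Parameter names are made up.
db_profile = {"tcp_window_size": 262144, "io_queue_depth": 64, "huge_pages": "on"}
app_profile = {"tcp_window_size": 65536, "jvm_heap_gb": 32}

# Keys set by both profiles: whichever profile is applied last wins.
conflicts = sorted(set(db_profile) & set(app_profile))
print(conflicts)
# → ['tcp_window_size']
```

Scale that intersection check up to thousands of parameters across dozens of profiles and it's easy to see why optimization stays more art than science.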
Oracle would like to make this complexity and effort go away, at least for some set of common workloads. If you believe it can really do so, this could change the face of enterprise computing just as radically as infrastructure as a service promises to do.
At the recent Oracle Open World, a constant theme from Oracle was the "COI": complete, open and integrated. It's the rationale for the Sun acquisition and for the factory integration of hardware and software stacks into preconfigured, preoptimized appliances.
Parse the rhetoric--which is as much a bash on IBM as anything I've seen--and there is the kernel of a really powerful idea.
There are some places where I'd really like to just specify a set of performance and capacity parameters and have a rack delivered that only requires power, a set of IP addresses and network connections to be up and running--and that won't require me to constantly update the parts every few months to keep it current.
If anyone can do this at a reasonable price, I will have to take a hard look. If we like what we see and the vendors deliver, this could be a really big idea.
Still, it's a big "if."