The public cloud is emerging as a disruptive innovation for IT, and not just for small and medium-size businesses. At this point, most large enterprises are experimenting with the public cloud as a development-and-test resource, or for production applications with low security, privacy and service-level requirements. The conventional wisdom holds that the public cloud will remain a niche interest for large companies, given their extensive legacy investments and the mission-critical nature of their systems. Nevertheless, a number of these companies see big potential in the public cloud; they feel an acute need to choose between proactively working with it and getting left behind.
At the Corporate Executive Board, we have spoken with many of the early enterprise adopters of public cloud services from vendors such as Amazon and Rackspace. Naturally, these enterprises have scrutinized the cost-effectiveness of the applications they are considering for the public cloud. Below is our consolidated feedback from these early adopters on the 10 hidden costs of the public cloud. We have broken these cost areas into four broad categories to watch:
- One-Time Migration Costs
- Billing Model Limitations
- Retained Management Costs
- Risk Premium
One-Time Migration Costs
These are the costs of migrating existing applications from traditional, physical infrastructure to the public cloud, including application retrofitting time, server migration time, and the potential impact on depreciation write-offs.
In this category, there are two potential costs to watch:
- Application retrofitting. The majority of a typical company's in-house application portfolio is not yet cloud-ready. Applications that are already suitable for virtual machines, or that were developed according to platform standards, can be ported easily; most, however, would require considerable retrofitting or recoding to become compatible, particularly legacy applications. Organizations need to weigh the cost-effectiveness of porting these applications against leaving them as-is, or even decommissioning them entirely in favor of new applications. Promoting platform standards and building the business case for a technology refresh have been perennial challenges for application teams, and this remains true when considering the public cloud.
- Depreciation write-offs. Companies that accelerate an application or infrastructure refresh to jump-start public cloud migration may lose the ability to keep writing off depreciation on existing hardware, taking the remaining undepreciated value as a one-time charge instead. That explains why many companies plan to evaluate the cloud at existing refresh points (see the illustrative calculation after this list).
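To make that exposure concrete, here is a minimal sketch in Python, assuming straight-line depreciation and purely hypothetical figures (a $500,000 server fleet on a five-year schedule, retired two years early):

```python
# Illustrative only: the remaining book value written off when hardware is
# retired early to jump-start a cloud migration. Assumes straight-line
# depreciation; all figures are hypothetical.

def remaining_book_value(purchase_cost, useful_life_years, years_in_service):
    """Undepreciated value still on the books at early retirement."""
    annual_depreciation = purchase_cost / useful_life_years
    return purchase_cost - annual_depreciation * years_in_service

# A $500,000 fleet on a 5-year schedule, retired after 3 years:
write_off = remaining_book_value(500_000, 5, 3)
print(f"One-time write-off at early retirement: ${write_off:,.0f}")  # $200,000
```

Waiting for the natural refresh point drives that one-time charge toward zero, which is exactly the behavior these early adopters report.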
Billing Model Limitations
There are three features of the current public cloud billing model that may be a poor fit for your enterprise applications.
- Elasticity premium. One of the most heralded features of the public cloud is its pay-as-you-go billing, which lets companies absorb spikes in application load at peak usage times. Because prices are set accordingly, this flexibility comes at a premium for applications that sit in the public cloud full-time and don't peak far above their routine base. For example, Amazon's Large On-Demand Windows Instance costs 48 cents per hour, while its comparable Reserved Instance is just 20 cents. Choosing the right type of cloud instance for each application is important: applications with steady or predictable workloads will not be cost-effective in the on-demand model (see the break-even sketch after this list).
- Toll charges. Inbound and outbound data-transfer charges in the public cloud are a significant factor to keep in mind, especially for applications that move heavy volumes of data. For instance, Amazon charges 10 cents per GB transferred in and between 8 cents and 15 cents per GB transferred out; a worked estimate also follows this list. The additional latency introduced by large-scale data-transfer requests against cloud servers is a further concern.
- Storage costs. A virtual, multitenant server architecture introduces new storage costs and complexity, creating the need for optimization tools such as storage virtualization, thin provisioning and data de-duplication. These are tools with which most companies are just beginning to become familiar.
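To see where the elasticity premium bites, here is a minimal sketch using only the hourly rates cited above. It solves for the utilization level at which paying on demand overtakes holding a reserved instance full-time; note that Amazon's reserved instances also carry a one-time upfront fee, which this simplified comparison omits.

```python
# Break-even between on-demand and reserved pricing, using the Amazon
# hourly rates cited above. Simplified: the reserved instance's one-time
# upfront fee is omitted, so the true break-even utilization sits higher.

ON_DEMAND_RATE = 0.48  # $/hour, Large On-Demand Windows Instance
RESERVED_RATE = 0.20   # $/hour, comparable Reserved Instance
HOURS_PER_MONTH = 730

# On demand, a workload running a fraction u of the month costs
# ON_DEMAND_RATE * u * HOURS_PER_MONTH; a reserved instance held full-time
# costs RESERVED_RATE * HOURS_PER_MONTH. Setting the two equal gives:
break_even_utilization = RESERVED_RATE / ON_DEMAND_RATE
print(f"Break-even utilization: {break_even_utilization:.0%}")  # ~42%

# A steady 24x7 workload pays the full elasticity premium on demand:
print(f"On-demand, 24x7: ${ON_DEMAND_RATE * HOURS_PER_MONTH:,.2f}/month")  # $350.40
print(f"Reserved, 24x7:  ${RESERVED_RATE * HOURS_PER_MONTH:,.2f}/month")   # $146.00
```

Below roughly 42 percent utilization, on-demand pricing wins; above it, reserved capacity is cheaper, and a steady full-time workload pays more than twice as much on demand.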
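The toll charges lend themselves to the same back-of-the-envelope treatment. This sketch applies the per-GB rates cited above to hypothetical monthly traffic volumes; substitute your own application's figures.

```python
# Monthly data-transfer ("toll") charges at the per-GB rates cited above.
# The traffic volumes below are hypothetical placeholders.

INBOUND_RATE = 0.10            # $/GB transferred in
OUTBOUND_RATES = (0.08, 0.15)  # $/GB transferred out (lowest, highest tier)

gb_in = 2_000   # hypothetical inbound volume per month
gb_out = 5_000  # hypothetical outbound volume per month

low, high = (gb_out * rate for rate in OUTBOUND_RATES)
print(f"Inbound:  ${gb_in * INBOUND_RATE:,.2f}/month")  # $200.00
print(f"Outbound: ${low:,.2f} to ${high:,.2f}/month")   # $400.00 to $750.00
```

Even at these modest volumes, transfer charges can rival the compute bill itself, which is why data-heavy applications deserve special scrutiny.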