The Open Compute Project can change the way organizations think about and manage their data. Stanford University research fellow Jonathan Koomey offers insights about the Facebook-led initiative.
By Samuel Greengard
Since its launch in 2011, the Open Compute Project (OCP) has increasingly garnered positive attention, especially among tech companies. Established by engineers at Facebook, the initiative aims to openly share information about how to scale data centers and computing infrastructure in a more energy-efficient and economical way. Facebook claims that Open Compute has saved the social media company more than $1 billion, and other technology companies, such as AMD, IBM, Microsoft and Rackspace, now support the initiative.
OCP reports that an Open Compute approach typically leads to 38 percent greater data center efficiency and 24 percent lower costs. However, resistance to the Facebook-led initiative, which challenges some commonly accepted data center practices, remains—and its acceptance has suffered as OCP certification of hardware and systems has lagged. CIO Insight recently caught up with Stanford University research fellow Jonathan Koomey, a leading expert on energy and data center efficiency, to discuss OCP. Here are his insights on the topic:
CIO Insight: Why should CIOs and other IT executives be thinking about the Open Compute Project?
Jonathan Koomey: Data centers contain a lot of needless variability and a lot of archaic assumptions. All of this winds up embedded in the way organizations design, buy and build infrastructure. The goal for Open Compute is to remove some of this needless variability and reduce the cost of deploying IT infrastructure. OCP introduces new data center designs, creates standards and, in the end, alters some of the underlying assumptions. It also changes the way organizations think about and manage data. It's all about standardization.
CIO Insight: What are IT executives currently overlooking in regard to data center operations?
Koomey: Too often, enterprises don't examine the total cost of compute. You can't just throw equipment into a data center. It's critical to understand how to deliver the necessary performance in the least expensive way possible. If you use equipment that doesn't take the data center design into account, you wind up wasting energy, capacity and money. Unfortunately, very few organizations analyze their data center design and optimize their infrastructure. Oftentimes, the result is a massive waste of IT resources. However, we're moving toward an approach that requires organizations to evaluate the cost of delivering compute.
CIO Insight: How should CIOs and their organizations approach an Open Compute initiative?
Koomey: A starting point is to connect with the Open Compute Project. It has developed specifications for servers and guidelines for data centers and designs. But it's also important to take into account the business you're operating and what IT demands exist. It's critical to understand the compute environment in a comprehensive way. Many companies are stuck in silos and wind up with huge inefficiencies.
CIO Insight: What companies are putting Open Compute concepts to work and achieving gains?
Koomey: In addition to Facebook, we see Google and eBay adopting these principles. They are making their software developers accountable for compute costs and, in the process, breaking down traditional silos. eBay also has achieved gains by moving to the cloud. The company recognized that it could speed the rate of innovation, increase revenues and reduce costs. These companies are ahead of the game. They are showing that it's possible to achieve greater overall efficiencies—and better results.
CIO Insight: What final advice do you have for CIOs and others looking to implement an OCP framework?
Koomey: Find ways to rewire and rethink IT processes. You really can't do anything until you get rid of separate teams, budgets and silos. Otherwise, it's like trying to run a race when you have the flu. No matter how hard you work, you won't achieve the results you want. The end goal is one team, one budget and one boss. Fix that and good things will follow. Then focus on minimizing the cost per compute cycle by looking at everything in a total cost framework. This may mean changing the fundamental way you approach IT. It may mean shifting to a cloud model. Today, an enterprise that can innovate, and innovate rapidly, is at an advantage.
Photo: As part of OCP, Microsoft is open-sourcing the code used to manage its data center hardware operations. Photo courtesy of Microsoft.
About the Author
Samuel Greengard is a contributing writer for CIO Insight. His previous CIO Insight article is "Consulate Takes a Healthy Approach to Wireless."