Has the Open Compute Project Come of Age?


As I read the announcements and reviews coming out of the Open Compute Project Summit in San Jose earlier this month, I got a strong sense that a tipping point had been reached in OCP history. While the project, started by Facebook in 2011 to drive open collaboration in data centre design, has garnered a lot of attention and high-profile supporters, its impact in the Enterprise IT and data centre operator world has been limited. However, that could be about to change.

The absence of real OCP activity in Enterprises and amongst most data centre operators is understandable: after all, OCP's focus has been on hyper-scale data centres. Yet this mimics past developments that were initially deemed inappropriate for Enterprise IT. First Unix, then Linux, was derided by Enterprise IT vendors as not robust enough to handle the demands of the Enterprise; then Cloud computing wasn't secure enough. Eventually these broke through into the Enterprise, and I expect the Open Compute Project to do the same.

The influence of OCP on industry thinking is clear in vendors' differing reactions: some, like HPE, have embraced OCP designs in their Cloudline server range, while others have focused on hyper-convergence as a means of simplifying infrastructure and reducing costs without being drawn into a commodity battle. OCP has certainly helped ignite the software-defined data centre argument, and vendors like Microsoft have also taken to using OCP designs in their own data centres.

So, how have events at this month's OCP Summit moved the debate along? On the face of it, Google's announcement that it is joining the Open Compute Project merely reinforces the sense that this is still all about hyper-scale. But the hyper-scale companies have redefined the cost of running servers, reducing it tenfold, and having both Google and Facebook in the camp gives the OCP real clout. This is not something that Enterprise CIOs can ignore in their battle to transform their IT systems and significantly reduce costs. If Goldman Sachs is saying that 80% of the servers it has acquired since last summer are based on OCP designs, then you can be sure that other financial services companies will be looking at the implications very seriously.

The announcement that Equinix is to work with Facebook on a new open source platform for both hardware and software within its data centres, including the use of Facebook-designed Wedge switches, sends a strong message to the market about Equinix's intention to provide very agile, powerful and cost-effective ecosystems for its customers. Other data centre operators will need to look to their own plans and business models.

Last but not least, the OCP Telco Project sees AT&T, Deutsche Telekom, EE, SK Telecom, Verizon Communications, Equinix and Nexius coming together to "collaborate on the development of new technologies and reimagine traditional approaches to building and deploying telecom network infrastructure".

We now have major hyper-scale players, hardware and software vendors, data centre operators and telcos all working to deliver innovations and improvements in IT infrastructure based on open source collaboration. If, as IDC predicts, half of all hyper-scale servers sold in 2020 will be based on OCP designs, it is likely that most Enterprises will accelerate their move to the Cloud, even if they don't use OCP designs in-house, to take advantage of the increased agility and cost savings on offer. In such an environment, data centre operators need to understand how they too can take advantage of the developments coming out of the Open Compute Project.

This post first appeared on the old Cassini Reviews website.