Infrastructure

Last Updated: 20th October, 2017
Analyst Coverage: Paul Bevan

You might be forgiven for thinking that, in today's cloud-centric world, an understanding of IT infrastructure is no longer a critical requirement. If you can spin up new virtual servers in the public cloud in a matter of minutes and your developers are designing serverless apps and services, why should it be necessary to think about the physical infrastructure?

The reality is somewhat different. Mobile, IoT, Big Data, Analytics, Media Streaming and AI are all driving strategic change in where apps and data are stored and processed, and in how and where services are consumed. These changes are driving a rapid evolution in business models. In effect, businesses are in a state of constant change. The ability to adapt your business at pace and to disrupt, rather than being disrupted, is now a critical business capability of the Mutable Enterprise.

IT infrastructure is now a critical enabler of the Mutable Enterprise. It makes no difference whether you are running your own data centre, have gone completely public-cloud or have some mix of the two: the decisions you take, or fail to take, will have a profound effect on the success of your business. The speed with which new technologies are introduced, the way in which existing and new vendors jostle for position and dominance, and the availability of a range of different delivery options make for complex, and sometimes confusing, procurement scenarios.

Our infrastructure practice aims to articulate the infrastructure story to make it meaningful for the CIO. As a by-product, it will also help vendors to understand how to articulate their propositions, understand technology trends and adapt their business models and go-to-market strategies in the face of potentially disruptive new technologies and market entrants.

At its most basic, infrastructure is simply all the physical equipment needed to run applications and to provide access to those applications for your customers, partners and employees. This includes servers, storage, networking equipment, personal computing devices, Internet of Things (IoT) devices and a range of output devices like printers, information screens and so on. Infrastructure also includes the data centres in which the equipment resides.

This equipment is essentially dumb and, put simply, needs a layer of operating and management software that sits between the physical hardware and the business application. Operating systems like Linux, Microsoft Windows and Ubuntu, and specific proprietary operating software from IBM, Unisys and others, have been a part of the infrastructure layer for some time. The same can be said of a variety of systems and network management tools. In our section on emerging trends we will look at the other software elements that now form key components of an infrastructure model.

Company Boards and Business Owners need to gain assurance about the resilience, agility and security of their infrastructure. Inevitably, it is the high-profile failures in infrastructure, like the recent data centre outages at British Airways or the security breaches at TalkTalk, that highlight the financial and reputational risks involved and the subsequent loss of trust in the organisation. But, in a multi-channel, digital world where business model disruption and rapidly changing competitive environments are the norm, making the right decisions about infrastructure deployment and management is now a critical Board competency.

Chief Information Officers (CIOs) and IT Directors face a range of decisions about the transformation of their IT infrastructure to meet the new demands of digital, multi-channel business models. What is the roadmap for the journey to the Cloud? What mix of public, private and hybrid-cloud deployment is right for the business? Where do I locate my infrastructure to gain the right balance of performance, latency, cost, information governance and security? How do I acquire and retain the right talent to implement and manage the infrastructure? What new skills do we need to manage multiple service provider relationships? Successfully managing these interconnected requirements, while clearly and simply articulating how infrastructure is a major enabler of new digital business, should give the CIO a stronger voice on the Board.

From a hardware perspective, speed is king. Solid-state arrays are rapidly becoming the standard for storage deployment within the data centre. Optical switches and large increases in networking speeds and bandwidth are opening up new interconnectivity options. Graphics Processing Units (GPUs), once the exclusive domain of games consoles and specialist video applications, are now used in a range of data centre scenarios, in particular High-Performance Computing (HPC) and Machine Learning. What this highlights is that, despite, or perhaps because of, the advent of cloud computing, the general-purpose server and storage array are no longer sufficient to meet the needs of most larger, digitally oriented businesses.

Arguably, the bigger and more important emerging trend is what is being dubbed by some 'Big Software'. Software-defined networks, software-defined storage and the software-defined data centre are developments that separate the control plane from the data plane. This means that you no longer have to rely on expensive specialist hardware from specific vendors, but can use software to provide the same functionality on cheaper, industry-standard processors and servers. This not only reduces cost, but has been proven to speed up the deployment of new infrastructure and increase its agility.
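The control-plane/data-plane split can be illustrated with a minimal sketch. All class and method names below are hypothetical, chosen only to show the principle; real SDN controllers expose far richer APIs. The point is that routing decisions are made once, centrally, and merely applied by commodity switches.

```python
# Toy sketch of control-plane / data-plane separation (SDN-style).
# Names are illustrative only, not any vendor's actual API.

class Switch:
    """Data plane: applies forwarding rules but makes no routing decisions."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # destination prefix -> output port

    def install_rule(self, destination, port):
        self.flow_table[destination] = port

    def forward(self, destination):
        # Unknown destinations would normally be punted to the controller;
        # here we simply drop them.
        return self.flow_table.get(destination, "drop")


class Controller:
    """Control plane: holds the network view and computes rules centrally."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def push_policy(self, destination, port):
        # One decision, pushed out to every commodity switch in the fabric.
        for sw in self.switches:
            sw.install_rule(destination, port)


controller = Controller()
sw1, sw2 = Switch("sw1"), Switch("sw2")
controller.register(sw1)
controller.register(sw2)
controller.push_policy("10.0.0.0/24", port=3)

print(sw1.forward("10.0.0.0/24"))      # 3
print(sw2.forward("192.168.1.0/24"))   # drop
```

Because the intelligence lives in the controller, the switches themselves can be cheap, interchangeable, industry-standard hardware, which is the cost argument the paragraph above describes.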

Big Software also encompasses virtualisation. This can hardly be called an emerging trend, as we have been used to the server virtualisation products of VMware and Citrix for many years. However, new technologies like containers are providing alternatives that in some cases make more effective use of underlying server resources than traditional hypervisor-based virtualisation offerings.

In amongst all the new technology developments, the age-old battle of "open vs proprietary" systems remains. OpenStack and the Open Compute Project (OCP) offer community-based standards and tools that allow competent IT teams to build their own infrastructure stacks more cheaply than the "in-a-box" hyper-converged solutions from Cisco, Dell or HPE. But many businesses, without the necessary skills to architect, build and deploy infrastructure themselves, will feel that the price premium and risk of lock-in are worth it.

This theme re-emerges when looking at the use of public clouds. Significant concerns have been expressed about the potential of being locked into AWS, Google or Microsoft Azure. In response, we are seeing the emergence of cloud orchestration and management tools that enable businesses to deploy to, and manage, multiple different clouds.
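The common design idea behind such tools is a provider-abstraction layer: workloads are described once, against a common interface, and adapters translate to each cloud's API. The sketch below uses entirely hypothetical names and makes no real SDK calls; production tools such as Terraform express the same idea declaratively rather than in application code.

```python
# Sketch of the provider-abstraction idea behind multi-cloud tooling.
# Class names and return values are illustrative only.

from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Common interface: the business logic never sees provider specifics."""
    @abstractmethod
    def provision_vm(self, size):
        ...

class AWSProvider(CloudProvider):
    def provision_vm(self, size):
        # A real adapter would call the AWS SDK here.
        return f"aws:ec2:{size}"

class AzureProvider(CloudProvider):
    def provision_vm(self, size):
        # A real adapter would call the Azure SDK here.
        return f"azure:vm:{size}"

def deploy_everywhere(providers, size):
    # The same deployment request runs against every registered cloud,
    # which is what reduces the lock-in risk.
    return [p.provision_vm(size) for p in providers]

print(deploy_everywhere([AWSProvider(), AzureProvider()], "small"))
# ['aws:ec2:small', 'azure:vm:small']
```

Swapping or adding a provider then means writing one new adapter, not rewriting the deployment logic, which is precisely the lock-in mitigation these tools promise.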

At first glance, a list of key vendors appears to confirm a business-as-usual approach to market positioning and competitiveness. However, a closer examination of the portfolios and strategies of key players like IBM, HPE, Dell and Cisco seems to point to a market not only undergoing consolidation, but also realignment to address the challenge of new competitors and technologies.

HPE and Dell, under pressure from ODM vendors who have made inroads into the hyperscale and cloud service provider (CSP) marketplaces, have acquired new storage and software capabilities to compete more directly with Cisco's hyper-converged infrastructure offerings. This has left Nutanix as the only significant independent hyper-converged infrastructure vendor in the marketplace.

IBM have made similar moves to acquire new infrastructure software capabilities. However, their acquisition of SoftLayer highlights the growing importance of physical data centres in the infrastructure stack. Interconnectivity, low-latency requirements, data sovereignty and the distinct processing and storage demands of IoT have seen the emergence of global data centre operators like Equinix, Digital Realty and NTT, and specialists like EdgeConnex, as key players in any decision on infrastructure deployment.

The growth of hyperscale cloud data centres, specialist high-performance workloads like video streaming and bitcoin mining, and the spread of IoT are also leaving their mark on the processor market. Intel retains over 90% of the data centre chip market, but is coming under pressure from Nvidia with its GPUs, from ARM-based processors and from AMD with a rejuvenated product set.

Cloud and DevOps have called forth a huge number of start-ups offering PaaS (Platform-as-a-Service) solutions, orchestration and management tools, cloud migration and consultancy services. Some have been acquired by major vendors, like Pivotal by Dell; others have dropped out of some markets, like CloudBees exiting the PaaS market; while others, such as Apprenda, have grown and remained independent.

We expect this to remain a very dynamic market for takeovers and investment-backed developments. We also envisage that the growth of software-defined strategies will continue to put pressure on existing business models. One example is load balancers, where Cisco exited the hardware market to concentrate on software, whilst traditional hardware players like F5 are coming under attack from new software-oriented entrants like Avi Networks. While these developments offer exciting opportunities to reduce infrastructure costs and increase flexibility, they will make long-term buying decisions complex and sometimes confusing.


Further Information

Further resources to broaden your knowledge: