IBM aims at next data centre abstraction level


One of the ongoing trends in IT development is that this year's new solution soon enough becomes next year's component in a larger, more comprehensive solution. For example, last year's systems management tools are now components in an enterprise-wide management environment, while last year's strategies, such as server consolidation and virtualisation, are now the tactics of a bigger, more comprehensive strategy.

This process is seen in full measure in IBM's latest announcements in the data centre arena. The company is aiming to bring together what have long been two separate entities: the management of a data centre's assets, services and applications, and the one system that once was the data centre but has stood outside and apart from it since data centre concepts moved on to UNIX, Linux and Windows Server. That system is the mainframe: the essential back end to all large business infrastructures, but rarely an integral, proactive contributor to front-line business operations.

The combined introduction of the New Enterprise Datacenter management environment and the launch of the new z10 mainframe marks IBM's attempt to move users to that next level of abstraction in systems deployment and management: the enterprise infrastructure as a single entity. Together they put an interesting slant on a number of data centre issues, not least the hot topic of green IT, which takes in energy consumption, server utilisation and consolidation, real estate footprint, and the re-use and integration of existing software tools.

IBM uses this togetherness to put forward the notion of the ‘ensemble’ enterprise—the building of an integrated, service-based, enterprise-wide infrastructure capable of turning the whole enterprise into an entity that can play together. As a side issue it is interesting to see how musical terms are becoming the analogy of choice for complex IT infrastructures. The concept of SOA orchestration is now widely accepted, so the concept of the enterprise as an ensemble—a flexible, scalable collective that nonetheless works together for a common objective—seems quite appropriate.

The primary component of the ensemble approach is the New Enterprise Datacenter management environment. Key to this is what IBM is calling Intelligent Asset Management. This is based around a single Process Integration Dashboard, though a better term for it might be 'engine', as its core function from a user perspective is to act as the unifying management point for a comprehensive range of systems management functions. The aim is to produce a process management tool that can integrate any existing, discrete management tools a user might employ, as well as being a complete management environment in its own right. In this context it could be defined, in part at least, as an integrator of integration tools. At the highest level its objective is to combine asset management with service provisioning and release management tools.
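IBM has published no programming interface for this dashboard, so the following Python sketch is purely illustrative: every class and method name in it is invented. It simply shows the 'integrator of integration tools' pattern the announcement describes, with existing discrete tools plugging in behind a common adapter and a single engine presenting the consolidated view.

```python
# Illustrative sketch only: IBM has not published an API for the
# Process Integration Dashboard, and every name here is hypothetical.
# The point is the pattern the article describes: one engine that
# fronts a set of existing, discrete management tools.
from abc import ABC, abstractmethod


class ManagementTool(ABC):
    """Adapter interface for an existing, discrete management tool."""

    @abstractmethod
    def status(self) -> dict:
        """Return this tool's view of the assets it manages."""


class AssetManager(ManagementTool):      # e.g. an asset inventory tool
    def status(self) -> dict:
        return {"assets": ["server-01", "server-02"]}


class ReleaseManager(ManagementTool):    # e.g. a release management tool
    def status(self) -> dict:
        return {"pending_releases": 3}


class ProcessIntegrationEngine:
    """A single point that unifies whatever tools a site already runs."""

    def __init__(self) -> None:
        self._tools: dict[str, ManagementTool] = {}

    def register(self, name: str, tool: ManagementTool) -> None:
        # Existing tools plug in behind the common adapter interface.
        self._tools[name] = tool

    def dashboard(self) -> dict:
        # One consolidated view across all registered tools.
        return {name: tool.status() for name, tool in self._tools.items()}


engine = ProcessIntegrationEngine()
engine.register("assets", AssetManager())
engine.register("releases", ReleaseManager())
print(engine.dashboard())
```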

IBM is aiming to make this a flexible environment that meets a wide range of user needs. To this end, it will come with a number of different entry points, aimed at users who have one or more existing discrete management tools they wish to continue using. IBM does accept that this raises management issues, such as firewalls restricting the movement of partitions (LPARs) across the system. It also raises software licensing issues for ISVs. The winners here will be those that use approaches such as site licensing coupled with a flexible Service Level Agreement, while the losers will be those using any restrictive licence model.

The goals here are far greater utilisation of existing assets; better management of both existing and new assets as they are added; increased disaster recovery capability; a targeted 80% reduction in outages; and the ability to do all this across the common platform types found in large data centres.

An important attraction for IT managers is expected to be the way it exploits virtualisation tools to provision user-specified capabilities dynamically. This will allow users, in effect, to order the specific systems environment they require, for delivery to specified client systems at a specified time. Once the work for which that system was provisioned is completed, the virtualised system is shut down and the resources it employed are returned to the pool.
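Again as illustration only (the API below is invented, not IBM's), the lifecycle being described is straightforward to sketch: a user-specified environment is allocated from a shared pool, the work runs, and the resources are reclaimed on completion.

```python
# Illustrative sketch only: this provisioning API is invented to show
# the lifecycle the article describes, not IBM's actual interface.
from dataclasses import dataclass


@dataclass
class ResourcePool:
    cpus: int
    memory_gb: int

    def allocate(self, cpus: int, memory_gb: int) -> None:
        assert cpus <= self.cpus and memory_gb <= self.memory_gb
        self.cpus -= cpus
        self.memory_gb -= memory_gb

    def release(self, cpus: int, memory_gb: int) -> None:
        # Resources return to the pool once the work is done.
        self.cpus += cpus
        self.memory_gb += memory_gb


@dataclass
class VirtualSystem:
    name: str
    cpus: int
    memory_gb: int


def run_job(pool: ResourcePool, spec: VirtualSystem, work) -> None:
    """Provision a user-specified environment, run the work, tear down."""
    pool.allocate(spec.cpus, spec.memory_gb)     # dynamic provisioning
    try:
        work(spec)                               # deliver to the client
    finally:
        pool.release(spec.cpus, spec.memory_gb)  # shut down and reclaim


pool = ResourcePool(cpus=64, memory_gb=512)
run_job(pool, VirtualSystem("batch-env", 8, 64),
        lambda vs: print(f"running on {vs.name}"))
print(pool)  # the pool is fully reclaimed after the job completes
```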

Although the New Enterprise Datacenter can manage a wide range of system platforms as players in that resource pool, one of the key components IBM now sees as a player is the mainframe. The first iteration is the z10 EC (Enterprise Class) system, with a z10 BC (Business Class) version to follow. The key selling points IBM will be pushing are its virtualisation capabilities, its reduced operational costs, and its ability to be an integral part of the New Enterprise Datacenter. The current z9 systems will continue to be sold at least until 2011, when the next iteration, presumably to be known as the z11, makes an appearance.

On the virtualisation front, IBM claims that up to 4,000 servers can be consolidated onto a single z10, bringing typical operational savings that genuinely rate as significant. These include a reduction of up to 80% in energy consumption and costs, an 85% saving in floor space (which will undoubtedly be a big argument in high-ticket real estate locations like London), and an 80% reduction in associated data centre labour costs through reduced support requirements. As a demonstration of the importance of such arguments, IBM points to how well they are playing in largely greenfield markets such as India, China and Brazil. The company also expects a good response from the large body of first-world users with infrastructures based on old, under-utilised UNIX and x86 servers that are ripe for update and consolidation; in all probability, these are the long tail of the big surge in pre-Y2K panic buying.
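To make the arithmetic of those claims concrete, the short sketch below applies the quoted percentages to an invented baseline. Only the 80% and 85% figures come from IBM; the per-server power draw and floor space are assumptions for illustration.

```python
# Purely illustrative arithmetic: the baseline figures are invented;
# only the percentage reductions come from IBM's claims in the article.
baseline_servers = 4000                        # distributed servers to consolidate
baseline_power_kw = baseline_servers * 0.4     # assume ~0.4 kW per old server
baseline_floor_m2 = 800                        # assumed footprint of the old estate

energy_saving = 0.80                           # claimed up to 80% energy reduction
floor_saving = 0.85                            # claimed 85% floor space saving

print(f"power after consolidation: {baseline_power_kw * (1 - energy_saving):.0f} kW")
print(f"floor space after consolidation: {baseline_floor_m2 * (1 - floor_saving):.0f} m2")
```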

The z10 will feature new process acceleration technologies in the form of new specialty processors. For example, a new version of the IFL (Integrated Facility for Linux) processor offering approximately 8x the performance is coming. The internal architecture of the z10 also makes extensive use of InfiniBand for internal communications. For example, Sysplex links will now be implemented over InfiniBand, making them both faster and more physically flexible. To date, systems in a Sysplex architecture could be a maximum of 10 metres apart; implemented over InfiniBand, they can now be up to 100 metres apart.

The new processor used in the z10 marks a big step forward in performance over the z9. It is a close technological relative of IBM's POWER6 RISC processor, except that it is a quad-core device and uses a CISC architecture. This is claimed to provide a maximum system performance of around 30,000 MIPS. It runs at 4.4 GHz, a significant jump from the 1.7 GHz of the z9 processor. Systems will also offer a 4x increase in storage capacity, together with an automated failover process that uses IBM's Basic HyperSwap for single-site automated volume failover. Disaster recovery is also much faster, with data resynchronisation via Geographically Dispersed Parallel Sysplex claimed to be 95% quicker.

The advantages of integrating the z10 as a frontline player in modern data centres can be seen in IBM's suggestion that it will be used as the platform to run Business Intelligence applications directly. The mainframe has long been the repository of the raw data that BI applications use, but its switch to frontline BI engine will be a good example of how the future of the mainframe is likely to change.