HDS looks to Cloud Economics

Content Copyright © 2009 Bloor. All Rights Reserved.

A recent discussion with Tony Reid, UK Services Director at Hitachi Data Systems (HDS), has highlighted an issue that runs through the Cloud far more deeply than just the company’s stamping ground of large data storage systems. Fundamentally, the technology is no longer the important issue. Instead, the key issue for users, and therefore the primary sweet spot for vendors, is now the ability to define service levels at the next level of abstraction: in terms of quantifiable financial impacts, and particularly negative impacts, on the business.

In other words, Cloud economics—the business costs and benefits of using the Cloud—are becoming the focal point as the technologies used to create it become increasingly standardised and commoditised.

This is really the way in which a service level should be portrayed to a business user, as it is the clearest way of justifying any capital expenditure that might be required. It is now the end users, the business units, that have to make the judgement on the value of services, rather than IT. The obvious danger here, perhaps most clearly shown by HDS’ experience in the storage sector, is that Line of Business (LoB) managers will rate their own processes as requiring the highest priority. The solution HDS has adopted is to make storage provision a billable item to each business unit—tier 3 at £n/Mbyte, tier 2 at £2n/Mbyte and tier 1 at £4n+.
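The chargeback model described can be sketched in a few lines. Only the 1x/2x/4x tier multiples come from the article; the base rate n is left unspecified in the text, so the notional rate and the usage figures below are purely illustrative assumptions.

```python
# Illustrative sketch of tiered storage chargeback. Only the tier
# multiples (tier 3 at n, tier 2 at 2n, tier 1 at 4n per Mbyte) come
# from the article; the base rate and usage figures are assumptions.

TIER_MULTIPLE = {1: 4, 2: 2, 3: 1}  # tier 1 is the premium tier

def monthly_bill(usage_mbytes_by_tier: dict, base_rate_n: float) -> float:
    """Charge each tier's usage at its multiple of the base rate."""
    return sum(mbytes * TIER_MULTIPLE[tier] * base_rate_n
               for tier, mbytes in usage_mbytes_by_tier.items())

# A business unit with 500 Mbytes on tier 1 and 10,000 Mbytes on tier 3,
# at a notional base rate of £0.01/Mbyte:
bill = monthly_bill({1: 500, 3: 10_000}, base_rate_n=0.01)
# 500 x 4 x £0.01 + 10,000 x 1 x £0.01 = £20 + £100 = £120
```

Making each tier a billable line item in this way gives LoB managers a direct financial reason not to rate every process as top priority.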

But this in turn puts new pressures onto vendors and service providers to identify and define the levels of service adequacy needed in business process terms as well as cost, regardless of the type of service, the technology providing it, or whether it is sourced from the internal infrastructure of the business or from out in the Exostructure Cloud. The technical metrics which underpin that definition process should be of little direct interest.

For example, when it comes to storage it would not be sensible to store all data on the fastest and most expensive devices. There is a need for two, probably three, tiers of storage, depending on both business process needs and the cost per unit of storage. This matches appropriate service levels to the needs of the business processes. The primary issue then becomes not the definition of the tiers in terms of technology or raw performance, but rather how individual tiers are defined in terms of business issues such as recovery time objectives, recovery point objectives, backup policy objectives and every other factor that goes to make up a service class. That does include performance, but performance is neither the only nor the most important objective.
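A service class of the kind described could be modelled as a simple record in which the business objectives sit alongside, rather than beneath, performance. This is a minimal sketch: the field names, units and example values are assumptions for illustration, not an HDS specification.

```python
# Minimal sketch of a service class defined in business terms: recovery
# objectives and backup policy are first-class fields, and performance is
# just one factor among several. All values here are hypothetical.
from dataclasses import dataclass

@dataclass
class ServiceClass:
    """A storage service class expressed in business terms."""
    name: str
    rto_hours: float               # recovery time objective
    rpo_hours: float               # recovery point objective
    backup_interval_hours: float   # backup policy objective
    performance: str               # one factor, not the whole definition

# Hypothetical tiers: tier 1 recovers within the hour, tier 3 within a day.
tier1 = ServiceClass("tier 1", rto_hours=1, rpo_hours=0.25,
                     backup_interval_hours=1, performance="fastest devices")
tier3 = ServiceClass("tier 3", rto_hours=24, rpo_hours=24,
                     backup_interval_hours=24, performance="lowest cost/unit")
```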

Giving LoB managers business-related rather than technology-related metrics on service levels then creates an overall service provision model into which Cloud-based services can fit with considerable ease, and on an equal footing with internally sourced services.

There is already a market in Cloud-based storage services, and indeed HDS does offer the capability itself to some of its customers. According to Reid, it has no plans to formally launch into that market at present, though the idea is not dismissed out of hand. HDS is well aware that users have entered a new era where virtualisation means they don’t know what tier their data is stored on or where it is, physically. What is more, they don’t actually need to know. Instead, what they need to know are the capabilities which are available with a defined service level, and the associated cost. They do not need to know the specifics of performance and technology of the tiers.

HDS does offer its Storage Economics service, however, which Reid claims can save large users some 25% of their storage costs if they are implementing tiered storage with virtualisation and dynamic provisioning. This is really only effective for users running 30 Tbytes of storage and above. The savings are a combination of cost reduction (more cost-effective utilisation of existing resources) and cost avoidance (reducing the need for new, additional systems).
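The claimed saving is simply the sum of the two components. Only the combined figure of roughly 25% is Reid’s; the split between reduction and avoidance, and the budget figure, are assumed here purely for illustration.

```python
# Back-of-the-envelope sketch of the Storage Economics saving: cost
# reduction (better use of existing resources) plus cost avoidance
# (deferred new purchases). The 15%/10% split is an assumption; only
# the combined ~25% figure comes from Reid.

def total_saving(annual_storage_cost: float,
                 reduction_fraction: float,
                 avoidance_fraction: float) -> float:
    """Total saving as reduction plus avoidance on the annual cost."""
    return annual_storage_cost * (reduction_fraction + avoidance_fraction)

# A hypothetical £1m annual storage budget, split 15% reduction and
# 10% avoidance: 25% of £1,000,000 is £250,000.
saving = total_saving(1_000_000, 0.15, 0.10)
```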

From the service providers’ perspective, the inevitable move towards providing business-related service levels rather than technical metrics means that they have to face up to some new, inherent risks that they may not have considered relevant before. Reid identified the key one as it plays out in the storage services marketplace. If HDS offers a LoB manager a choice of storage tiers there is no risk to the provider; the customer gets the tier they choose and pays the associated price. But if the service provider, be that internal IT or an external service provider, charges a price for a business-related service level, then that service provider has to be sure of its judgement of the circumstances. It has to get it right. If the user has been offered tier 2 but really should be on tier 1 to meet the contracted service level, then the service provider will have to bear the cost.

Widen that out to other service areas and it becomes apparent that service providers are going to require a deeper understanding of their customers’ business models than many current applications developers possess. Taking storage services as the example again, data associated with a Christmas marketing campaign might sit on tier 3 or lower for eleven months of the year, but for one month it will need to be on tier 1. The key trick here is therefore the timely and appropriate movement of the data between tiers, a requirement that is going to be more common than might be expected, and not just for data storage. In order to meet business-related service levels, service providers will need to understand what services are required when, how frequently those requirements change and a wide range of other factors that go to make up a user’s business agility requirement. They must also ensure they have the technology capable of providing that agility in a timely and seamless fashion.
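The Christmas-campaign example reduces to a simple scheduling rule. The month numbers and tier labels below are illustrative assumptions; December as the campaign month is not stated in the text.

```python
# Sketch of the seasonal tiering policy described above: campaign data
# sits on a cheap tier for eleven months and moves to tier 1 for the
# campaign month. Treating December as the campaign month is an
# assumption for illustration.

def tier_for_month(month: int, campaign_months=(12,)) -> int:
    """Return the tier the campaign data should occupy in a given month."""
    return 1 if month in campaign_months else 3

schedule = {m: tier_for_month(m) for m in range(1, 13)}
# tier 1 in December only; tier 3 for the other eleven months
```

In practice the hard part is the timely and transparent movement between tiers that the rule implies, not the rule itself.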

Without service providers taking such factors on board, Reid is of the opinion that the Cloud could prove to be a hindrance. In practice it will be important for many users to move data between tiers with complete transparency to the relevant services or applications, so any cloud service, whether internal or external, must be able to accommodate this movement of data, application code or complete services between external and internal services as appropriate.

This, however, will demand a level of compatibility and interoperability between service providers that is, broadly speaking, not yet in place. At present, it is fair to suggest that the Cloud might be better defined as a collection of ‘micro-atmospheres’ with a fair degree of incompatibility between them.

It is also a measure of why the ‘Cloud’ is not a workable definition of what a business wants to achieve. Each business, in practice, needs to build its own Exostructure, made up of both internal and external third-party services, as they are needed. The ability to change and adapt is a vital part of this, which does mean a far greater level of service compatibility and interoperability between service providers will be necessary.

There are two important implications here for service providers. One is the obligation to fully understand how to specify service levels in business process terms, together with engineering what is required in technical and performance terms to meet those service levels. The other is the creation of contract terms which ensure that some reasonable level of responsibility is put on the customers to understand their service requirements—or, more specifically, which defend service providers against customers’ own incompetencies.

This does mean that the real trick will be in building the formulae that help both sides map the users’ expectations of the service to the capabilities and requirements of the service provider. The latter will need to be able to translate technology metrics into meaningful business processes for LoB managers. The former will need to understand the detailed process steps, dependencies and implications their business requires. This must be coupled with a risk assessment of the consequences of failing to meet a minimum business requirement, in order to properly assess the efficacy of any proposed service levels in meeting specific business requirements.

Management across the Exostructure is often seen as a problem, but if it is clearly defined using business value metrics this need not be the case. It is essential that users can manage their services in exactly the same way regardless of whether they are internally sourced or provided by an external service provider. This does mean that service providers must be able to integrate their offerings seamlessly with at least most of the management environments out in the user community. Expecting users to adapt their own environments to fit the management regime used by the service provider is probably a very short route to failure, a fact which does mean that full interoperability between management environments is the next big requirement.