Thursday, April 4, 2013

Cloud computing and the evolution of the data center

Read at: http://www.techrepublic.com/blog/datacenter/cloud-computing-and-the-evolution-of-the-data-center/6139

Takeaway: Thoran Rodrigues explains why cloud computing and data center design must evolve together.
Cloud computing starts with data centers. While we can dream of a world in which anyone is allowed to sell their excess computing capacity as virtualized resources to anyone else, in a fully distributed cloud model, the fact of the matter is that today the cloud follows a centralized factory model: resources are provided by central “factories” — the huge data centers of Amazon, Rackspace, Microsoft, Google and others — and distributed to consumers over the Internet.
In a sense, large-scale data centers are what made cloud computing viable. Before the advent of today’s incredibly large and modular data centers, which require almost no hands-on management, selling computing resources to others would have been a nightmare. Not only would the cost of offering high availability have been prohibitive for most customers, it might well have been impossible to offer at all.
Scalability, availability, resiliency, and security are all features that, one way or another, have to be translated and incorporated into data center design, from the topology and architecture of the buildings that house them all the way down to the software running on each individual server. Otherwise, any promise made by a cloud provider simply cannot be upheld.

Designing for promises

If we look at the main promises made by cloud computing providers, especially in the area of Infrastructure-as-a-Service, we can see that most of them depend on decisions made before a data center is even built. For a customer, an uptime guarantee seems simple: the service is “up” if I can access and use it, and it’s “down” if for some reason I can’t. For the service provider, however, it’s much trickier: being “down” can result from anything from a failed server or disk to a major power outage. Many of the lower-level issues, such as power or network connectivity, are tied to larger concerns such as where a data center is built.
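To make the uptime promise concrete, here is a minimal sketch in Python that converts an availability percentage into the downtime a provider can afford; the SLA levels used are illustrative assumptions, not any particular provider's actual terms.

    # Sketch: translate an SLA availability percentage into allowed downtime.
    # The SLA levels below are illustrative assumptions, not any provider's actual terms.

    HOURS_PER_YEAR = 24 * 365

    def allowed_downtime_hours(availability_pct: float) -> float:
        """Hours of downtime per year permitted by a given availability percentage."""
        return HOURS_PER_YEAR * (1 - availability_pct / 100)

    for sla in (99.0, 99.9, 99.95, 99.99):
        hours = allowed_downtime_hours(sla)
        print(f"{sla}% uptime -> {hours:.1f} h/year ({hours * 60 / 12:.1f} min/month)")

At 99.99%, that works out to less than an hour of downtime per year, which is why a single failed power feed or network link can consume a whole year's error budget.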
In many countries, it might be impossible to have access to two separate external electrical energy providers. This may force a company to make a larger investment in generators or other backup energy sources, increasing operational costs. The same thing goes for networking: without access to multiple providers, a company may be forced to put in place its own infrastructure, or even resign itself to the fact that it may be unable to reach certain availability levels.
Scalability is another interesting issue. Taken to its limit, the (almost) unlimited scalability offered by a cloud service provider means that it must constantly increase available capacity in order to handle any rise in demand. Infrastructure providers need to add hundreds or even thousands of servers to their data centers every single day, not only to replace failing equipment but also to provision against future demand. Putting scalable resources directly into the hands of end users also means you effectively don’t know how much of your capacity a single user might take up (which is why most cloud providers impose a limit on the number of servers a single account can spin up).
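As a rough illustration of that provisioning treadmill, the sketch below estimates a daily server-addition rate from an assumed fleet size, annual hardware failure rate, and demand growth rate; all three figures are made up for the example.

    # Sketch: rough estimate of how many servers a large provider must add per day.
    # Fleet size, failure rate, and growth rate are assumed figures, for illustration only.

    fleet_size = 1_000_000        # servers currently in service (assumption)
    annual_failure_rate = 0.04    # 4% of servers fail and must be replaced each year (assumption)
    annual_demand_growth = 0.30   # 30% capacity growth per year (assumption)

    replacements_per_day = fleet_size * annual_failure_rate / 365
    growth_per_day = fleet_size * annual_demand_growth / 365

    print(f"Replacements: ~{replacements_per_day:.0f} servers/day")
    print(f"Growth:       ~{growth_per_day:.0f} servers/day")
    print(f"Total:        ~{replacements_per_day + growth_per_day:.0f} servers/day")

Even with these made-up numbers the total lands in the high hundreds of servers per day, before accounting for any single customer suddenly claiming a large share of the fleet.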
Even something that seems, on the surface, as simple as Amazon’s availability zones (multiple data centers located close to one another, with low-latency connectivity between them) creates complexities of design and management. Since, in Amazon’s case, uptime is measured across a combination of availability zones rather than a single one, the zones must be spread far enough apart that an external problem affecting one does not harm the others, yet remain close enough that the connection between them stays low latency. At the same time, they represent a new concept of data center: a cloud of data centers operating almost as a single one.
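To see why grouping zones changes the uptime arithmetic, the following sketch estimates combined availability under the assumption that zone failures are independent (which is precisely what the physical separation is meant to approximate); the per-zone availability figure is also an assumption.

    # Sketch: availability of a deployment spread across N independent availability zones.
    # The per-zone availability figure is an assumption for illustration.

    def combined_availability(per_zone: float, zones: int) -> float:
        """Probability that at least one of `zones` independent zones is up."""
        return 1 - (1 - per_zone) ** zones

    per_zone = 0.995  # assumed availability of a single zone
    for n in (1, 2, 3):
        print(f"{n} zone(s): {combined_availability(per_zone, n) * 100:.5f}% available")

The gain only materializes if failures really are independent, which is exactly the tension described above: far enough apart to fail separately, close enough to behave like one data center.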

Walking hand in hand

As adoption grows, the cloud will continue to push the evolution of data centers in everything from architecture to software and control processes. Greater adoption forces data center operators not only to rethink their inner workings but also to adapt to new and emerging needs. As usage grows, so does energy consumption; and as the environment becomes more heterogeneous, with a greater variety of computing resources on offer, management complexity grows with it. Simple measures, such as using outside air in place of air conditioning, can save a company millions of dollars.
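As a back-of-the-envelope illustration of that "millions of dollars" figure, the sketch below estimates the electricity saved by cutting cooling overhead with outside air; the IT load, electricity price, and overhead ratios are assumed numbers, not measurements from any real facility.

    # Sketch: back-of-the-envelope savings from replacing air conditioning with outside air.
    # IT load, electricity price, and overhead ratios are assumed figures for illustration.

    it_load_mw = 20.0               # average IT power draw in megawatts (assumption)
    price_per_kwh = 0.08            # electricity price in USD per kWh (assumption)
    cooling_overhead_before = 0.50  # cooling adds 50% on top of the IT load (assumption)
    cooling_overhead_after = 0.15   # with outside-air ("free") cooling (assumption)

    hours_per_year = 24 * 365
    saved_mwh = it_load_mw * (cooling_overhead_before - cooling_overhead_after) * hours_per_year

    print(f"Energy saved: ~{saved_mwh:,.0f} MWh/year")
    print(f"Cost saved:   ~${saved_mwh * 1000 * price_per_kwh:,.0f}/year")

With these assumptions, a single 20 MW facility saves on the order of five million dollars a year, in line with the article's point that simple measures can pay for themselves many times over.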
If cloud computing brings about the commoditization of computing resources, data centers need to be optimized so that the companies selling those resources can survive. Since the cloud is, and will be for the foreseeable future, dependent on data centers, the evolution of these two technologies is undeniably linked, and anyone who cares about one should also be extremely interested in the other.
