Blog post from http://www.cloudcommons.com
Posted By Robin Bloor
Member Since: 08.13.2010
Company: The Bloor Group
robinbloor | Aug 16, 2010
Why The Cloud? – An Economic View
There is an avalanche of reasons why organizations everywhere are thinking about and gradually adopting cloud computing. They span everything from time-to-market to a desire for virtual data centers, but in most cases they can be reduced to simple economics. The implication of the cloud is that it will be a less expensive style of computing for most applications – if not now, then soon.
So let’s take a practical economic look at the cloud. This is what we’ve depicted in the graph below. It plots user populations versus cost per user for different hardware environments. Admittedly it’s not precise, but that’s not the point here; we’re simply seeking a perspective on the cloud.
The first thing to note about the graph is that the Y-axis is logarithmic, which means the curve is a great deal steeper than it looks. The Y-axis shows user populations from 1 to a billion; if it weren’t logarithmic, it would need to be about 6,000 miles long rather than 3 inches.
The X-axis is not really even, so I didn’t put any measurements on it. It represents increasing cost per user. The curve itself represents different computer configurations running just one specific application. This is important for understanding the graph: imagine exactly the same application running in all the environments listed at the side of the curve. Let’s provide some notes on each of those environments, from right to left:
• Siloed Servers. Until recently many applications were deployed on dedicated “siloed” servers; one application per server. The primary reason for this is that it was easy to deploy applications that ran in the inexpensive server environments, Windows and Linux, on commodity hardware. However, in strict per-user terms, applications run in this way are the most expensive because of the management overhead. There is very little economy of scale in this style of computing.
• VM Servers. The antidote to the cost of silos is virtualization (for Linux and Windows servers). This makes more effective use of resources and, as long as it remains manageable, cuts the cost per user.
• Large Unix Clusters. These are less expensive per user because of the scaling involved and because you don’t get the overhead of multiple guest OSes. However, only certain applications are able to run in such environments – large database or OLTP applications, for example.
• Mainframes. The mainframe is still more scalable than large-scale Unix and the management costs (per user) are lower. However, we should note that it is distinctly more expensive per user to run applications in Linux partitions on the mainframe than to run applications natively. A mainframe consisting entirely of Linux partitions is really a VM server of a kind.
• Grids. This is where the cloud comes in. In the hundreds-of-thousands-of-users range you have the cloud operation of Salesforce.com. The main Salesforce CRM application is reported to run on a grid of just over 1,000 computers. At a component level, grids may also be large collections of virtual machines; however, the whole grid is configurable so that new instances of a VM can be deployed in a self-service manner. Small grids of, say, hundreds of commodity servers don’t yet deliver a lower per-user cost than mainframes, but the trend is clearly heading in that direction. Right now small grids are being deployed by early adopters as private clouds. The importance of this cannot be overstated: a well-designed private cloud provides a gateway to the public cloud.
• Large grids. In theory all cloud operations are large grids. Once you have large grids you start to really cut into data center costs, and per-user costs start tumbling. You can build the data center where electricity and floor space are cheap, and design it for efficient cooling, disaster recovery, and so on. Here we’re talking about data centers with concurrent users at the million level.
• Massively scaled-out grids. This is for user populations in the tens of millions, and there are very few examples. Google search is one where there really are tens of millions of concurrent users: each search query is resolved by a purpose-built grid of up to 1,000 servers, and Google has many such grids to which queries are routed. Yahoo also has a massively scaled-out email system; it caters to over 260 million users, of whom tens of millions are active at a time.
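The curve these environments trace can be made concrete with a toy model. All the figures below are hypothetical, chosen only to illustrate the shape of the relationship the graph describes (the original is explicitly imprecise); the environment names follow the list above.

```python
# Toy model of the graph: per-user cost falls as environments scale out.
# Every number here is illustrative, not measured.

ENVIRONMENTS = [
    # (name, illustrative max user population, illustrative $ cost per user/year)
    ("Siloed servers",        100,        1000.0),
    ("VM servers",            1_000,       400.0),
    ("Large Unix clusters",   10_000,      150.0),
    ("Mainframes",            100_000,      60.0),
    ("Grids (private cloud)", 500_000,      40.0),
    ("Large grids",           5_000_000,    10.0),
    ("Massively scaled-out",  50_000_000,    2.0),
]

def total_cost(users: int) -> float:
    """Annual cost for the smallest environment that can serve `users`."""
    for name, capacity, per_user in ENVIRONMENTS:
        if users <= capacity:
            return users * per_user
    # Beyond the table, assume massively scaled-out pricing applies.
    return users * ENVIRONMENTS[-1][2]

# total_cost(100)        -> 100,000.0   (siloed: $1,000 per user)
# total_cost(100_000)    -> 6,000,000.0 (mainframe: $60 per user)
# total_cost(50_000_000) -> 100,000,000.0 (massive grid: $2 per user)
```

The point the sketch makes is the same as the graph’s: total spend grows with scale, but the cost *per user* collapses as you move leftward along the curve.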
Other Aspects of the Graph
At the top of the graph we’ve roughly indicated that low per-user costs are achieved by scaling out (the divide-and-conquer approach to catering for large user populations) rather than scaling up, which is what the IT world pursued for many years – a single centralized application tuned for efficiency. Once the number of users gets large, scaling out is a necessity. Scaling out is far more efficient if a whole environment is devoted to a single application.
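The divide-and-conquer routing behind scaling out can be sketched in a few lines. This is a generic illustration, not a description of any particular vendor’s system; the shard names and shard count are invented for the example.

```python
# Minimal sketch of scale-out routing: each user is deterministically
# assigned to one of N identical shards, so capacity grows roughly
# linearly as shards are added. Shard names are hypothetical.
import hashlib

SHARDS = [f"app-server-{i}" for i in range(8)]  # 8 identical instances

def shard_for(user_id: str) -> str:
    """Route a user to a shard by hashing their id: the same user always
    lands on the same shard, and load spreads evenly across shards."""
    digest = hashlib.sha256(user_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARDS)
    return SHARDS[index]
```

Because every shard runs the same single application, each one can be tuned identically – which is exactly why scale-out favors unmixed workloads.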
The dotted box indicates the area of corporate computing which is defined quite simply by the fact that it usually runs very mixed workloads with relatively low user populations. The same servers that are used in corporate environments could also be used just as easily in scaled-out arrangements, where workloads are not at all mixed.
And that’s pretty much the whole point of this illustration. It indicates an economic imperative. It makes it clear that no corporation that runs a variety of mixed workloads is ever going to achieve the economies of scale of cloud computing, not even with user populations of hundreds of thousands. So the “private cloud” is, in essence, a staging area for moving applications into the cloud and cutting costs over time.
This illustration also indicates how corporate computing is likely to move, from the smallest business to the largest. If you want to lower the cost per user, in general you need to migrate to the left. That means heading for the cloud.