Archive for October 30, 2008

Expanding the cloud

October 30, 2008

Below is a very interesting article posted on the blog of Werner Vogels, CTO (Chief Technology Officer) of Amazon.com. He details the concerns developers must keep in mind when working with the cloud and announces a new Amazon.com service!

=================

Expanding the cloud

For many the “Cloud” in Cloud Computing signifies the notion of location independence; that somewhere in the internet services are provided and that to access them you do not need any specific knowledge of where they are located. Many applications have already been built using cloud services and they indeed achieve this location transparency; their customers do not have to worry about where and how the application is being served.

However, for developers to do their job properly the cloud cannot be fully transparent. As much as we would like to make it easy and simple for everyone, building high-performance and highly reliable applications in the cloud requires that developers have more control. For example, the reality is that failures can happen; servers can crash and networks can become disconnected. Even if these are only temporary glitches and transient errors, the developer of applications in the cloud really wants to make sure his or her application can continue to serve customers even in the face of these rare glitches. A similar issue is that of network latency; as much as we would like the cloud to be transparent, the transport of network packets is still limited to the speed of light (at best), and customers of cloud applications may experience different performance depending on where they are located in relation to where the applications are running. We have seen that for many applications this works just fine, but there are developers who would like more control over how their customers are being served and who, for example, would like to give all their customers low-latency access regardless of their location.

At Amazon we have been building applications on these cloud principles for several years now and we are very much aware of the tools that developers need to build applications that are required to meet very high standards with respect to scalability, reliability, performance and cost-effectiveness. We are also listening very closely to the feedback AWS customers are giving us to make sure we expose the right tools for them to do their job. We launched Amazon S3 in Europe to ensure that developers could build applications that could serve data out of a European storage cloud. We launched Regions and Availability Zones (combined with Elastic IPs) for Amazon EC2 such that developers would have better control over where their applications would be running to ensure high-availability. We are now ready to expand the cloud even further and bring the cloud storage to its customers’ doorstep.

Today we are announcing that we are expanding the cloud by adding a new service that will give developers and businesses the ability to serve data to their customers world-wide, with low latency and high data transfer rates. Using a global network of edge locations, this new service can deliver popular data stored in Amazon S3 to customers around the globe through local access.

We have developed this content delivery service using the robust AWS principles we know work well for our customers:

 

  • Cost-effective: no commitments and no minimum usage requirements. You only pay for what you use in a manner similar to the other Amazon Web Services.
  • Simple to use: one API call gets you going. You store the data you want to distribute in an Amazon S3 bucket and use this API call to register that bucket with the content distribution service. The registration will provide you with a new domain name that you can use in URLs to access the data through this service over HTTP. When your customer accesses your content through your new URL, the data it refers to will be delivered through a network of edge servers (see the sketch after this list).
  • Works well with other services: The service integrates seamlessly with Amazon S3 and the data/content served through the service can be accessed using the standard HTTP access techniques.
  • Reliable: Amazon S3 will give you durable storage of your data, and the network of edge locations on three continents used by the new service will deliver your content to your customers from the most appropriate location.
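As a rough, hedged illustration of that registration step: the 2008 announcement does not spell out the API, so the sketch below uses today's boto3 CloudFront client rather than the original interface, and the bucket name and identifiers are hypothetical. It registers an S3 bucket as the origin of a distribution and prints the new domain name that the edge locations will answer for.

    import boto3

    cloudfront = boto3.client("cloudfront")
    bucket = "my-content-bucket"  # hypothetical S3 bucket holding the content

    response = cloudfront.create_distribution(
        DistributionConfig={
            "CallerReference": "my-first-distribution",  # any unique string
            "Comment": "Serve an S3 bucket through edge locations",
            "Enabled": True,
            "Origins": {
                "Quantity": 1,
                "Items": [{
                    "Id": "s3-origin",
                    "DomainName": f"{bucket}.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }],
            },
            "DefaultCacheBehavior": {
                "TargetOriginId": "s3-origin",
                "ViewerProtocolPolicy": "allow-all",
                "MinTTL": 0,
                "ForwardedValues": {
                    "QueryString": False,
                    "Cookies": {"Forward": "none"},
                },
            },
        },
    )

    # The service assigns the distribution its own domain name; URLs built on it
    # are answered from the nearest edge location rather than directly from S3.
    print(response["Distribution"]["DomainName"])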

 

This is an important first step in expanding the cloud to give developers even more control over how their applications and their data are served by the cloud. The service is currently in private beta but we expect to have it widely available before the end of the year. You can get a few more details and sign up to be notified when the service becomes available on this AWS page. Also check Jeff Barr's posting on the AWS weblog.

The Shape of the Cloud

October 30, 2008

More on cloud computing, this time from Tim Bray!

=====================
The Shape of the Cloud

There’s an interesting argument going on about the business-structure futures of the Big Cloud that everyone assumes is in our future. Some links in the chain: Hugh Macleod, Tim O’Reilly, Nick Carr, and Tim again. A few points about this seem obvious; among other things, Amazon Web Services is reminding me powerfully of Altavista.

Here are a few things that I think are true; the last section gets back to AWS and Altavista.

Monopolies Don’t Require Lock-in · Google has (effectively) a monopoly on consumer search. They have no lock-in; anyone who wants to can switch their default search provider in a few seconds. One could write a book, and several people have, about how they maintain their grip on the market, but let’s skip that and just see this as an existence proof. ¶

Which is to say, history tells us that Hugh Macleod’s vision of a single insanely-huge cloud provider is perfectly believable.

Low Barriers to Entry Win · We should have learned this by now. You don’t have to look back very hard at recent decades to see examples of technologies which have become pervasive even though, when they started catching on, they weren’t nearly as good as the competition. The reason is, they were easy to learn and easy to deploy. I’m thinking, just for example, of Linux and PHP and HTML. ¶

My sense is that the effortless-deployment threshold for the Cloud is somewhere below the effort required for AWS’s EC2/S3 combo; perhaps something like what a smart modern PHP-hosting service provides.

Economies of Scale · I don’t think they’re that big a deal. To play in this game, you’re going to need a few tens of thousands of servers, bulk-bandwidth deals with key carriers, and multiple data centers on the ground in North America, Europe, and Asia. That’s really expensive to build. But once a provider has got past the basic threshold, I’m unconvinced that their cost per unit of service is going to drop much as they get bigger and bigger. ¶

My take-away is that the big Cloud operator doesn’t necessarily need to be a Microsoft or a Google or an IBM. There are multiple pathways by which an unheralded startup could end up on top of the heap.

CIOs Aren’t Stupid · They’re bearing the scars of decades of being locked-in by Microsoft on their desktop software, and by Oracle on their databases; the situation where your infrastructure vendors gain control over part of your budget. ¶

So you can bet that as larger outfits take a strategic view of cloud computing, they’re going to be appropriately paranoid about barriers to exit. There’s going to be a powerful demand for standards and for demonstrated interoperability.

A Historical Analogy · To me, the cloud-computing landscape feels like the Web search landscape in 1996. Back then, everybody who’d thought about it had clued in that this was going to be really interesting and useful. There had been a few offerings, much better than nothing but not really hitting a sweet spot. [Disclosure: One of them was me.] Then Altavista launched and it was clearly better than anything else. Meanwhile, Larry and Sergey were at Stanford thinking about transitive functions over the graph of Web hyperlinks. ¶

Amazon Web Services smells like Altavista to me; a huge step in a good direction. But there are some very good Big Ideas waiting out there to launch, probably incubating right now in a garage or grad school.

Such an idea doesn’t have to come from a big company. And it doesn’t have to be proprietary to grow fast and dominate the landscape. It has to have no aroma of lock-in, it has to be obviously better, and most of all, more than anything else, it has to be really, really easy to get started with.

Updated: 2008/10/27

The Economics of Cloud Computing

October 30, 2008

Since we have started covering cloud computing on this blog, here is a post on "the economics" of the subject. The author, in fact, overstates things with that term ("The Economics of..."); a more accurate description would be an economic and financial case for cloud computing! The text was taken from http://broadcast.oreilly.com/2008/10/the-economics-of-cloud-c.html.

=================

The Economics of Cloud Computing

By George Reese
October 24, 2008

Cloud computing has been “the next cool thing” for at least the past 18 months. The current economic climate, however, may be the thing that accelerates the maturity of the technology and drives mainstream adoption in 2009.

This economic crisis is very different from the normal ebbs of the business cycle we have grown accustomed to. The difference lies in how rapidly sources of capital have dried up—whether that capital is coming from venture capitalists, angel investors, or banks. Capital markets are frozen, and companies needing to make capital investments to continue operations or grow are facing a daunting challenge.

A lack of capital creates a lack of flexibility in leveraging technology to operate and grow a business. If you can’t get access to a bank loan, you have to use your company revenues to buy new servers. Using company revenues can damage cash flow, harm valuations, and put otherwise healthy businesses at risk.

Typically, when a company wants to grow its IT infrastructure, it has two options:

  • Build it in house and own or lease the equipment.
  • Outsource the infrastructure to a managed services provider.

In both scenarios, a company must purchase the infrastructure to support peak usage regardless of the normal system usage. One Valtira client, for example, has fairly low usage for most of the year, but sees usage equalling millions of page views/month for about 15 minutes each quarter. For both of the above options, they are faced with paying for an infrastructure necessary for only about 1 hour each year when something much more minimalist will support the rest of the year.

Let’s assume for this customer that two application servers backed by two database servers and balanced by a load balancer will solve the problem. The options look something like this:

                          Internal IT      Managed Services
  Capital Investment      $40,000          $0
  Setup Costs             $10,000          $5,000
  Monthly Services        $0               $4,000
  Monthly Labor           $3,200           $0
  Cost Over 3 Years       $149,000         $129,000

For this calculation, I assume fairly baseline, server-class systems with a good amount of RAM and on-board RAID5 such as a Dell 2950 and a good load balancer. The 3-year cost assumes a 10% cost of capital. I also assume a very cost-efficient managed services provider. Most of the big names will be at least three times more expensive than the numbers I am providing here.
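As a cross-check, the 3-year totals in the table are consistent with adding the up-front amounts to the present value of 36 monthly payments discounted at the stated 10% annual cost of capital. The article does not spell out the exact discounting convention, so treat the sketch below as one plausible reconstruction rather than the author's own worksheet.

    # Up-front costs today plus the present value of 36 monthly payments,
    # discounted at a 10% annual cost of capital (assumed monthly compounding).
    def three_year_cost(capital, setup, monthly, annual_rate=0.10, months=36):
        i = annual_rate / 12                    # monthly discount rate
        annuity = (1 - (1 + i) ** -months) / i  # present-value annuity factor (~31.0)
        return capital + setup + monthly * annuity

    print(round(three_year_cost(40_000, 10_000, 3_200)))  # internal IT      -> ~149,000
    print(round(three_year_cost(0, 5_000, 4_000)))        # managed services -> ~129,000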

Under this scenario, managed services saves you a nice 13.5% over the do-it-yourself approach (assuming you don't get taken to the cleaners by one of the big managed services companies). Of course, it does not consider at all the impact of a server outage at 3am, which is where managed services will shine.

What is particularly appealing about managed services, however, is the lack of capital investment. The $40,000 up-front for an internal IT approach is a terrible burden in the current economic environment. Even if you can get credit, financing costs make that $40,000 considerably more expensive over three years than its sticker price.

Good argument for managed services? Yes, but a better argument for the cloud.

The cloud enters the picture looking like this:

                          Managed Services    The Cloud
  Capital Investment      $0                  $0
  Setup Costs             $5,000              $1,000
  Monthly Services        $4,000              $2,400
  Monthly Labor           $0                  $1,000
  Cost Over 3 Years       $129,000            $106,000

Cloud savings over internal IT jump to 29% without getting into the discussion of buy for capacity versus buy what you use!

Between managed services and the cloud, the cloud provides 18% savings.
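Continuing the same present-value sketch with the cloud column gives a total of roughly $106,000, which is where the quoted percentages come from. A quick check, using the rounded totals above:

    cloud = 1_000 + (2_400 + 1_000) * 31.0   # setup plus discounted monthly costs, ~$106,400
    internal_it, managed = 149_000, 129_000  # rounded totals from the tables above

    print(f"cloud vs internal IT: {(internal_it - cloud) / internal_it:.0%}")   # ~29%
    print(f"cloud vs managed services: {(managed - cloud) / managed:.0%}")      # ~18%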

While 18% and 29% savings are nothing to sneeze at, they are just the start of the financial benefits of the cloud. It goes on.

  • No matter what your needs, your up-front cost is always $0
  • As the discrepancy between peak usage and standard usage grows, the cost difference between the cloud and other options becomes overwhelming.
  • The cloud option essentially includes a built-in SAN in the form of Amazon Elastic Block Storage (a brief sketch follows this list). The internal IT and managed services options would go up significantly if we added the cost of a SAN to the infrastructure.
  • Cheap redundancy! While the above environment is not quite a “high availability” environment, it is very highly redundant with systems spread across multiple data centers. The managed services and internal IT options, on the other hand, have single physical points of failure as the application servers and database servers are likely located in the same rack.
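For readers who have not used Elastic Block Store, here is a minimal sketch of what that "built-in SAN" looks like in practice. It uses today's boto3 EC2 client rather than the 2008-era API, and the region, volume size, and instance ID are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a network-attached block volume, wait until it is ready, then
    # attach it to a running instance, where it appears as a block device.
    volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100)  # size in GiB
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
        Device="/dev/sdf",
    )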

Let’s say, however, that you need 10 servers to handle peak usage for 1 hour each year and just 2 to operate the rest of the year. Ignoring the impact of the cost of capital:

  • Internal IT adds another $40,000 in total costs over 3 years.
  • Managed services adds another $144,000 in total costs over 3 years.
  • The Amazon Cloud adds about $24 in total costs over 3 years.

No, that was not a typo. That’s forty THOUSAND dollars against one hundred forty-four THOUSAND dollars against 24 dollars. And as I mentioned earlier, this setup is based on an actual Valtira client that was considering a dedicated managed services option before Valtira began deploying customers in the Amazon cloud. It is not some contrived example.
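One hedged way to read that $24 figure: in the cloud you pay only for the extra instance-hours actually consumed during the peak. The article does not give the hourly rate, so the rate below is purely an assumption, chosen to show the shape of the arithmetic rather than the author's actual pricing.

    extra_servers = 10 - 2       # servers needed only during the peak
    peak_hours_per_year = 1      # the peak lasts roughly one hour per year
    years = 3
    assumed_rate = 1.00          # assumed blended $/server-hour (illustrative only)

    print(extra_servers * peak_hours_per_year * years * assumed_rate)  # -> 24.0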

Obviously, most organizations have either seasonal peaks or daily peaks (or both) with a less dramatic cost differential; but the cost differential is still quite dramatic and quite impactful to the bottom line. In addition, the ability to pay for what you use makes it easy to engage in “proofs of concept” and other R&D that requires dedicated hardware.

What is Cloud Computing?

October 30, 2008

A concept that has been gaining a great deal of attention in the world of information and communication technology is cloud computing. But what does it mean, and what is its impact on business and on our everyday lives?

Put very simply, the term suggests that in the future we will not have, or even need, our data or our software on our own computers. Instead, data and programs will live somewhere else, on servers owned by someone (a company or organization) and accessible over the Internet. This vast "mist" of people's interconnected data and servers is what is called the "cloud".

So far, so good. The question being debated more recently is whether this vast cloud could end up controlled by some entity or company, and whether that would threaten our control over our own data and programs.

Two big, influential names in the web world have entered into an interesting discussion of this question: Tim O'Reilly (innovative entrepreneur and owner of an important blog, http://radar.oreilly.com) and Nicholas Carr, a writer who gained popularity with his book "Does IT Matter?" and who also has a very popular blog.

It all began with a post Tim O'Reilly wrote on October 26, 2008, entitled "Web 2.0 and Cloud Computing". In it he referred to another post, by Hugh Macleod, who had stirred up some controversy with "The Cloud's Best Kept Secret". Hugh's argument was the following: cloud computing will lead to one huge monopoly.

O'Reilly extends Macleod's argument a bit further by quoting what the latter had claimed:

“…nobody seems to be talking about Power Laws (to clarify: the frequency distribution of a data set in which, for example, a small number of companies captures most of a market; our note). Nobody is saying that one day a single company may possibly emerge to dominate the Cloud, the way Google came to dominate Search, the way Microsoft came to dominate Software. Monopoly issues aside, could you imagine such a company? We would not be talking about a multi-billion dollar business like today's Microsoft or Google. We are talking about something that could simply dwarf them. We are potentially talking about a multi-trillion dollar company. Possibly the largest company ever to have existed.”

From there, O'Reilly goes on to argue that the problem with this analysis is that it does not take into account what causes power laws in online activity. Understanding the dynamics of increasing returns on the web is the essence of what O'Reilly has called Web 2.0 (for those who do not yet know, it was O'Reilly who coined the term). According to him, ultimately, on the network, applications win if they get better the more people use them. As he pointed out back in 2005, Google, Amazon, eBay, Craigslist, Wikipedia, and all the other Web 2.0 star applications have this in common.

For O'Reilly, cloud computing, at least in the sense Macleod seems to be using the term (as a synonym for the infrastructure level of the cloud, best exemplified by Amazon S3 and EC2), does not have this kind of dynamic.

According to O'Reilly, it is true that the bigger players will have economies of scale in the cost of equipment, and especially in the cost of power, that are not available to the smaller players. But there are a few big players (Google, Microsoft and Amazon, to name some) that are already at this scale, with or without the cloud computing game. Besides, economies of scale are not the same thing as the increasing returns produced by network effects among users. They can be characteristic of a commoditizing market that does not necessarily give disproportionate economic leverage to the winners.

From this point on, O'Reilly goes on to define three types of cloud computing: Utility Computing, Platform as a Service, and cloud-based end-user applications.

Everything was going fine until, on October 26, 2008, Nicholas Carr published on his blog a post entitled "What Tim O'Reilly gets wrong about the cloud". According to Carr, O'Reilly raises important issues, but his analysis is also flawed, and the flaws in his argument are as revealing as its strengths.

And what is Carr's criticism? He starts with the example of Google, which O'Reilly lists as the prime example of a business that grew to dominance thanks to the network effect. Carr asks: "Is the network effect really the main engine fueling Google's dominance of the search market?" He then argues that it is not. In essence, his criticism of O'Reilly is that this is not the only reason for the success of companies like Google.

On October 27, 2008, O'Reilly responded to Carr's criticism in a post entitled "Network Effects in Data". In it, O'Reilly makes the following point: Nick Carr's difficulty in understanding his argument (that cloud computing is likely to end up as a low-margin business unless companies find a way to harness the network effects that are at the heart of Web 2.0) made him realize that he had been using the term "network effects" somewhat differently, and not in the simplistic way many people understand it.

On that same day, October 27, 2008, Carr restated his criticism of O'Reilly in the post "Further musings on the network effect and the cloud". Since O'Reilly kept rejecting his argument that Google's success cannot be explained by the network effect, Carr turned to Prof. Hal Varian, one of the great explainers of the network effect and its implications, who today also happens to be one of Google's top strategists. Drawing on an interview Carr himself conducted with Varian earlier this year, he reproduces the following exchange:

Q: How can we explain Google's entrenched position, even though the differences between search algorithms are now recognized only at the margins? Are there hidden network effects that make it better for all of us to use the same search engine?

A: The traditional forces that support market entrenchment, such as network effects, economies of scale, and switching costs, do not apply to Google. To explain Google's success you have to go back to a much older concept in economics: learning by doing. Google has been doing web search for nearly 10 years, so it is no surprise that we do it better than our competitors. And we are working very hard to keep it that way!

Quite a debate, by the looks of it! Let's see where it ends (or whether it already has)!