Archive for October 2008

A typology of network strategies

October 31, 2008

The dispute over cloud computing seems set to continue. Nicholas Carr renewed his criticism of Tim O'Reilly yesterday on his blog with the post below!

=======================
A typology of network strategies

October 30, 2008

This week’s pissing match – I mean, spirited conversation – between Tim O’Reilly and me regarding the influence of the network effect on online businesses may have at times seemed like a full-of-sound-and-fury-signifying-nothing academic to-and-fro. (Next topic: How many avatars can dance on the head of a pin?) But, beyond the semantics, I think the discussion has substantial practical importance. O’Reilly is absolutely right to push entrepreneurs, managers, and investors to think clearly about the underlying forces that are shaping the structure of online industries and influencing the revenue and profit potential of the companies competing in those industries. But clarity demands definitional precision: the more precise we are in distinguishing among the forces at work in online markets, the more valuable the analysis of those forces becomes. And my problem with O’Reilly’s argument is that I think he tries to cram a lot of very different forces into the category “network effect,” thereby sowing as much confusion as clarity.

Ten years ago, we saw a lot of fast-and-loose discussions of the network effect. Expectations of powerful network effects in online markets were used to justify outrageous valuations of dotcoms and other Internet companies. Disaster ensued, as the expectations were almost always faulty. Either they exaggerated the power of the network effect or they mistook other forces for the network effect. So defining the network effect and other related and unrelated market-shaping forces clearly does matter – for the people running online businesses and the people investing in them.

With that in mind, I’ve taken a crack at creating a typology of what I’ll call “network strategies.” By that, I mean the various ways a company may seek to benefit from the expanded use of a network, in particular on the Internet. The network may be its own network of users or buyers, or it may be a broader network, of which its users form a subset, or even the entire Net. I don’t pretend that this list is either definitive or comprehensive. I offer it as a starting point for discussion.

Network effect. The network effect is a consumption-side phenomenon. It exists when the value of a product or service to an individual user increases as the overall number of users increases. (That’s a very general definition; there has been much debate about the rate of increase in value as the network of users grows, which, while interesting, is peripheral to my purpose.) The Internet as a whole displays the network effect, as do many sites and services supplied through the Net, both generic (email) and proprietary (Twitter, Facebook, Skype, Salesforce.com). The effect has also heavily shaped the software business in general, since the ability to share the files created by a program is often very important to the program’s usefulness.

When you look at a product or service subject to the network effect, you can typically divide the value it provides to consumers into two categories: the intrinsic value of the product or service (when consumed in isolation) and the network-effect value (the benefit derived from the other users of the product or service). The photo site Flickr has, for example, an intrinsic value (a person can store, categorize, and touch up his own photos) and a network-effect value (related to searching, tagging, and using other people’s photos stored at Flickr). Sometimes, there is only a network-effect value (a fax machine or an email account in isolation is pretty much useless), but usually there’s both an intrinsic value and a network-effect value. Because of its value to individual users, the network effect typically increases the switching costs a user would incur in moving to a competing product or service or to a substitute product or service, hence creating a “lock-in” effect of some degree. Standards can dampen or eliminate the network-effect switching costs, and resulting lock-in effect, by transforming a proprietary network into part of a larger, open network. The once-strong network effect that locked customers into the Microsoft Windows PC operating system, for instance, has diminished as file standards and other interoperability protocols have spread, though the Windows network effect has by no means been eliminated.
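Carr's decomposition lends itself to a quick back-of-envelope model. The sketch below is purely illustrative: the linear-in-n growth rate and all the numbers are assumptions made here for the example, not claims from the post (which explicitly leaves the true growth rate open). It simply shows how a fax machine and a Flickr-like service differ:

```python
# Toy model of the decomposition above: total value to one user equals
# intrinsic value plus network-effect value. The linear-in-n stylization
# and the numbers are illustrative assumptions, not claims from the post.

def user_value(intrinsic, n, per_connection=0.01):
    # n is the total number of users; a user gains per_connection of value
    # from each of the other (n - 1) users.
    return intrinsic + per_connection * (n - 1)

# A fax machine: no intrinsic value alone, all value from the network.
fax_alone = user_value(intrinsic=0.0, n=1)        # 0.0
fax_networked = user_value(intrinsic=0.0, n=1001)

# A Flickr-like service: useful in isolation, more useful with others.
flickr_alone = user_value(intrinsic=5.0, n=1)     # 5.0
flickr_networked = user_value(intrinsic=5.0, n=1001)
```

Under these toy numbers, the fax machine's entire value is network-effect value, while the Flickr-like service retains its intrinsic value even in isolation, which is exactly the distinction the paragraph draws.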

Data mining. Many of the strategies that O’Reilly lumps under “network effect” are actually instances of data mining, which I’ll define (fairly narrowly) as “the automated collection and analysis of information stored in the network as a byproduct of people’s use of that network.” The network in question can be the network of a company’s customers or it can be the wider Internet. Google’s PageRank algorithm, which gauges the value of a web page through an analysis of the links to that page that exist throughout the Net, is an example of data mining. Most ad-distribution systems also rely on data mining (of people’s clickstreams, for instance). Obviously, as the use of a network increases, the value of the data stored in that network grows as well, but the nature of that value is very different from the nature of the value provided by the network effect.
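To make the data-mining category concrete, here is a minimal sketch of link analysis in the spirit of PageRank. The toy graph, the damping factor, and the simple power iteration are illustrative assumptions, not Google's actual implementation:

```python
# Illustrative link-analysis sketch in the spirit of PageRank: rank pages
# by the rank flowing in from pages that link to them. Toy graph and
# parameters; not Google's actual algorithm or data.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Each page q that links to p passes along an equal share
            # of its own rank across its outbound links.
            inbound = sum(rank[q] / len(links[q])
                          for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * inbound
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
# "c" is linked to by both "a" and "b", so it ranks highest.
```

The links themselves are a byproduct of people building web pages, which is what makes this data mining in Carr's sense rather than sharecropping.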

Digital sharecropping, or “user-generated content.” A sharecropping strategy involves harvesting the creative work of Internet users (or a subset of users) and incorporating it into a product or service. In essence, users become a pool of free or discount labor for a company or other producer. The line between data-mining and sharecropping can be blurry, since it could be argued that, say, the formulation of links is a form of creative work and hence the PageRank system is a form of sharecropping. For this typology, though, I’m distinguishing between the deliberate products of users’ work (sharecropping) and the byproducts of users’ activities (data mining). Sharecropping can be seen in Amazon’s harvesting of users’ product reviews, YouTube’s harvesting of users’ videos, Wikipedia’s harvesting of users’ writings and edits, Digg’s harvesting of users’ votes about the value of news stories, and so forth. It should be noted that while sharecropping involves an element of economic exploitation (with a company substituting unpaid labor for paid labor), the users themselves may not experience any sense of exploitation, since they may receive nonmonetary rewards for their work (YouTube users get a free medium for broadcasting their work, Wikipedia volunteers enjoy the satisfaction of contributing to what they see as a noble cause, etc.). Here again, the benefits of the strategy tend to increase as the use of the network increases.

Complements. A complements strategy becomes possible when the use of one product or service increases as the use of another product or service increases. As more people store their photographs online, for instance, the use of online photo-editing services will also increase. As more blogs are published, the use of blog search engines and feed readers will tend to increase as well. The iPhone app store encourages purchases of the iPhone (and purchases of the iPhone increase purchases at the app store). While Google pursues many strategies (in fact, all of the ones I’ll list here), its uber-strategy, I’ve argued, is a complements strategy. Google makes more money as all forms of Internet use increase.

Two-sided markets. eBay makes money by operating a two-sided market, serving both buyers and sellers and earning money through transaction fees imposed on the sellers. Amazon, in addition to its central business of running a traditional one-sided retail store (buying goods from producers and selling them to customers), runs a two-sided market, charging other companies to use its site to sell their goods to customers. Google’s ad auction is a two-sided market, serving both advertisers and web publishers. There are many subtler manifestations of two-sided markets online as well. A blog network like the Huffington Post, for instance, has some characteristics of a two-sided market, as it profits by connecting, on the one hand, independent bloggers and, on the other, readers. Google News and even Mint also have attributes of two-sided markets. (Note that the network effect applies on both sides of two-sided markets, but it seems to me useful to give this strategy its own category since it’s unique and well-defined.)

Economies of scale, economies of scope, and experience. These three strategies are also tied to usage. The more customers or users a company has, the bigger its opportunity to reap the benefits of scale, scope, and experience. Because these strategies are so well established (and because I’m getting tired), I won’t bother to go into them. But I will point out that, because they strengthen with increases in usage, they are sometimes confused for the network effect in online businesses.

None of these strategies is new. All of them are available offline as well as online. But because of the scale of the Net, they often take new or stronger forms when harnessed online. Although the success of the strategies will vary depending on the particular market in which they’re applied, and on the way they’re combined to form a broader strategy, it may be possible to make some generalizations about their relative power in producing competitive advantage or increasing revenues or widening profit margins in online businesses. I’ll leave those generalizations for others to propose. In any case, it’s important to realize that they are all different strategies with different requirements and different consequences. Whether an entrepreneur or a manager (or an investor) is running a Web 2.0 business (whatever that is) or a cloud computing business (whatever that is), or an old-fashioned dotcom (whatever that is), the more clearly he or she distinguishes among the strategies and their effects, the higher the odds that he or she will achieve success – or at least avoid a costly failure.


Why America Needs an Economic Strategy

October 31, 2008

Prof. Michael Porter, a world-renowned expert on business strategy, published an excellent article in BusinessWeek yesterday about what the U.S. actually needs right now. Not to be missed!

 
Why America Needs an Economic Strategy

The Harvard Business School competitiveness guru offers his prescription for long-term prosperity

With the U.S. election just days away, it has never been more important to consider what the next President must do to keep America competitive. In this time of crisis, Washington has focused on the immediate and the short term. Lost are the more basic questions we really need to worry about: What is the fundamental competitive position of the U.S. in the global economy? And what must we do to remain strong when other nations are making rapid progress?

The stark truth is that the U.S. has no long-term economic strategy—no coherent set of policies to ensure competitiveness over the long haul. Strategy embodies clear priorities, based on understanding the strengths we need to preserve and the weaknesses that threaten our prosperity the most. Strategy addresses what to do, but also what not to do. In dealing with a crisis, experience teaches us that steps to address the immediate problem must support a long-term strategy. Yet it is far from clear that we are taking the steps most important to America’s long-term economic prosperity.

America’s political system, especially as it has evolved in recent times, almost guarantees an absence of strategic thinking at the federal level. Government leaders react to current events piecemeal, rather than developing a strategy that unfolds over years. Congress and the Executive Branch are organized around discrete policy areas, not around the overall goal of improving competitiveness. Neither candidate has put forward anything close to a strategy; rather, each has presented a set of disconnected policy proposals with political appeal. Both parties contribute to the problem by approaching the economy with long-held ideologies and policy positions, many of which no longer fit with today’s reality.

Now is the moment when the U.S. needs to break this cycle. The American economy has performed remarkably well, but our continued competitiveness has become fragile. Over the last two decades the U.S. has accounted for an incredible one-third of world economic growth. Even as the financial crisis hit, the rest of the American economy remained quite competitive, with many companies performing strongly in international markets. U.S. productivity growth has continued to be faster than in most other advanced economies, and exports have been the growth driver in the overall economy.

THE AGE OF ANXIETY

Yet our success has come with deep insecurities for many Americans, even before the crisis. The emergence of China and India as global players has sparked deep fears for U.S. jobs and wages, despite unemployment rates that have been low by historical standards. While the U.S. economy has been a stronger net job creator than most advanced countries, the high level of job churn (restructuring destroys about 30 million jobs per year) makes many Americans fear for their future, their pensions, and their health care. While the standard of living has risen over the last several decades for all income groups, especially when properly adjusted for family size, and while the U.S. remains the land where lower-income citizens have the best chance of moving up the economic ladder, inequality has risen. This has caused many Americans to question globalization.

To reconcile these conflicting perspectives, it’s necessary to assess where America really stands. The U.S. has prospered because it has enjoyed a set of unique competitive strengths. First, the U.S. has an unparalleled environment for entrepreneurship and starting new companies.

Second, U.S. entrepreneurship has been fed by a science, technology, and innovation machine that remains by far the best in the world. While other countries increase their spending on research and development, the U.S. remains uniquely good at coaxing innovation out of its research and translating those innovations into commercial products. In 2007, American inventors registered about 80,000 patents in the U.S. patent system, where virtually all important technologies developed in any nation are patented. That’s more than the rest of the world combined.

Third, the U.S. has the world’s best institutions for higher learning, and they are getting stronger. They equip students with highly advanced skills and act as magnets for global talent, while playing a critical role in innovation and spinning off new businesses.

Fourth, America has been the country with the strongest commitment to competition and free markets. This belief has driven the remarkable level of restructuring, renewal, and productivity growth in the U.S.

Fifth, the task of forming economic policy and putting it into practice is highly decentralized across states and regions. There really is not a single U.S. economy, but a collection of specialized regional economies—think of the entertainment complex in Hollywood or life sciences in Boston. Each region has its own industry clusters, with specialized skills and assets. Each state and region takes responsibility for competitiveness and addresses its own problems rather than waiting for the central government. This decentralization is arguably America’s greatest hidden competitive strength.

Sixth, the U.S. has benefited historically from the deepest and most efficient capital markets of any nation, especially for risk capital. Only in America can young people raise millions, lose it all, and return to start another company.

Finally, the U.S. continues to enjoy remarkable dynamism and resilience. Our willingness to restructure, take our losses, and move on will allow the U.S. to weather the current crisis better than most countries.

Yet what has driven America’s success is starting to erode. A series of policy failures has offset and even nullified its strengths just as other nations are becoming more competitive. The problem is not so much that other nations are threatening the U.S. but that the U.S. lacks a coherent strategy for addressing its own challenges.

An inadequate rate of reinvestment in science and technology is hampering America’s feeder system for entrepreneurship. Research and development as a share of GDP has actually declined, while it has risen in many other countries. Federal policymakers recognize this problem but have failed to act.

America’s belief in competition is waning. A creeping relaxation of antitrust enforcement has allowed mergers to dominate markets. Ironically, these mergers are often justified by “free market” rhetoric. The U.S. is seeing more intervention in competition, with protectionism and favoritism on the rise. Few Americans know that the U.S. ranks only 20th among countries in openness to capital flows, 21st on low trade barriers, and 35th on absence of distortions from taxes and subsidies, according to the 2008 Global Competitiveness Report. We are fast becoming the kind of distorted economy we have long criticized.

Lack of regulatory oversight and capital requirements, in the name of liberalization and well-meaning efforts to extend credit to lower-income citizens, has undermined our financial markets. America underregulates in some areas while it overregulates in others.

U.S. colleges and universities are precious assets, but we have no serious plan to improve access to them by our citizens. America now ranks 12th in tertiary (college or higher) educational attainment for 25- to 34-year-olds. We have made no progress in this vital area over the past 30 years, unlike almost every other country. This is an ominous trend in an economy that must have the skills to justify its high wages. Instead of mounting a serious program to provide access to higher education, like the G.I. Bill and National Science Foundation programs of earlier years, Congress grandstands over the rate of endowment spending in our best universities.

The federal government has also failed to recognize and support the decentralization and regional specialization that drive our economy. Washington still acts as if the federal level is where the action is. Beltway bureaucrats spend many billions of dollars on top-down, highly fragmented federal economic development programs. Yet these programs are not designed to support regional clusters, nor do they send money where it will have the greatest impact in each region. For example, distressed urban communities, where poverty in America is concentrated, are starved of the infrastructure spending needed for job development. Again, no strategic thinking.

At a time when insecurity and job turnover are higher than ever, the U.S. also has abdicated its responsibility to provide a credible transitional safety net for Americans. It is no wonder Americans are becoming more populist, more protectionist, and more tolerant of harmful intervention in the economy. The job training system is ineffective and receives less and less funding each year. Pension security is eroding, and the most obvious step required to strengthen Social Security—slowly adjusting upward the retirement age—has not been taken. Improving access to affordable health insurance is a major worry for all Americans. Washington could take basic steps such as equalizing the tax deductibility of individually purchased insurance to assist those not covered by their employers. Yet the government has failed to do so.

HIGH COSTS, BIG HASSLES

Federal policies have hobbled America’s entrepreneurial strength by needlessly driving up the cost and complexity of doing business, especially for smaller companies. Cumbersome regulation of employment, the environment, and product liability needs to give way to better approaches involving less cost and litigation, yet special interests block reform. The U.S. has become a high-tax country not only in terms of rates but also administrative hassle. Infrastructure bottlenecks, due to neglect and poorly directed spending, are driving up costs in an economy increasingly dependent on logistics. The U.S. is energy-inefficient, but public policies fail to promote energy conservation. Health-care costs are too high, but there is no serious effort to provide more integrated and efficient care.

Collectively, these unnecessary costs of doing business, coupled with skill gaps, are becoming significant enough to drive investments out of the country, including investments by American companies. Instead of addressing the real reasons for offshore investment, the parties spar over closing tax “loopholes,” even though U.S. corporate rates are among the highest in the world. Where is the strategic thinking?

Trade and foreign investment are fundamental to the success of the U.S. economy, but America has lost its focus and credibility in shaping the international trading system. Our economy today depends on advanced services and selling intellectual property—our ideas, our software, our media. Yet rampant intellectual property theft and high barriers to competition in services tilt the world trading system against a knowledge-based economy.

With no strategy, the U.S. has failed to work effectively with other advanced countries to address these issues and has failed to assist poorer countries so they feel more confident about opening markets and internal reform. The U.S. has abdicated its strategic role in developing Latin America, our most natural trading partner. We have failed to engage meaningfully in Africa, the Middle East, and Asia to help countries improve the lot of their citizens. Our foreign aid is still tied to the purchase of U.S. goods and services, rather than the actual needs of countries. Congress fails to pass trade agreements with countries highly committed to our economic principles, such as Colombia.

A final strategic failure is in many ways the most disconcerting. All Americans know that the public education system is a serious weakness. Fewer may realize that citizens retiring today are better educated than the young people entering the workforce. In the global economy, just being an American is no longer enough to guarantee a good job at a good wage. Without world-class education and skills, Americans must compete with workers in other countries for jobs that could be moved anywhere. Unless we significantly improve the performance of our public schools, there is no scenario in which many Americans will escape continued pressure on their standard of living. And legal and illegal immigration of low-skilled workers cannot help but make the problem worse for less-skilled Americans.

The problem is not money—America spends a great deal on public education, just as we do on health care. The real problem is the structure of our education system. The states, for example, need to consolidate some of the 14,000 local school districts whose existence almost guarantees inefficiency and inequality of education across communities. Instead, government leaders haggle over incremental changes.

SAME OLD ARGUMENTS

We need a strategy supported by the majority to secure America’s economic future. Yet Americans hear the same old divisive arguments. Republicans keep repeating simplistic free-market thinking, even though the absence of all regulation makes no sense. Self-reliance is preached as if no transitional safety net is needed. Some Republicans even argue passionately that the country should have no strategy because that would be “industrial policy.” Yet the real issue is not picking industry winners and losers but improving the business environment for all American companies, something we cannot do without identifying our top priorities. Overall, Republicans seem to think business can thrive without healthy social conditions.

Democrats, meanwhile, keep talking as if they want to penalize investment and economic success. They defend unions obstructing change in areas like education, cling to cumbersome regulatory approaches, and resist ways to get litigation costs for business in line with other countries. Democrats equivocate on trade in an irreversibly global economy. They seem to think social progress can be achieved only at the expense of business.

To make America competitive, we have to get beyond this thinking. Political leaders, business leaders, and civil society must begin a respectful, fact-based dialogue about our challenges. We need to focus on competitive reality, not defending past policies.

A strategy would address each of the areas I have discussed. If we are honest with ourselves, we would admit the U.S. is not making real progress on any of them today. Efforts under way by both parties are largely canceling each other out. A strategy would direct our spending to priority investments that also put money into the economy, such as educational assistance and logistical infrastructure, rather than tax rebates. With a strategy, we would stop counterproductive and expensive practices such as farm subsidies and spending earmarks.

Is such strategic thinking possible, given America’s political system? It happens in other countries—Denmark and South Korea are just two where I have participated in serious efforts by national leaders, both public and private, to come together and chart a long-term plan. This almost never occurs in the U.S., except around single issues.

We will need some new structures to govern strategically. I served on the last public-private President’s Commission on Industrial Competitiveness—in 1983! This time we need one that is less politically motivated. Congress would benefit from a bipartisan joint planning group to coordinate an overall set of priorities. More up or down votes on comprehensive legislative programs are needed to allow a shift to a coherent set of policies and away from lots of separate bills.

The new Administration will have an historic opportunity to adopt a strategic approach to the U.S.’s economic future, something that would bring the parties together. America is at its best when it recognizes problems and accepts collective responsibility for dealing with them. All Americans should hope that the next President and Congress rise to the challenge.

Porter, the Bishop William Lawrence University Professor at Harvard Business School, is a leading authority on competitive strategy and the competitiveness of nations and regions. Professor Porter’s work is recognized in governments, corporations, nonprofits, and academic circles around the world.

Expanding the cloud

October 30, 2008

Below is a very interesting article posted on the blog of Werner Vogels, Chief Technology Officer of Amazon.com. He details the concerns developers should keep in mind about the cloud and announces a new Amazon.com service!

=================

Expanding the cloud

For many the “Cloud” in Cloud Computing signifies the notion of location independence; that somewhere in the internet services are provided and that to access them you do not need any specific knowledge of where they are located. Many applications have already been built using cloud services and they indeed achieve this location transparency; their customers do not have to worry about where and how the application is being served.

However, for developers to do their job properly, the cloud cannot be fully transparent. As much as we would like to make it easy and simple for everyone, building high-performance and highly reliable applications in the cloud requires that developers have more control. One reality, for example, is that failures happen: servers can crash and networks can become disconnected. Even if these are only temporary, transient glitches, the developer of a cloud application really wants to make sure his or her application can continue to serve customers in the face of them. A similar issue is network latency: as much as we would like the cloud to be transparent, the transport of network packets is still limited by the speed of light (at best), and customers of cloud applications may experience different performance depending on where they are located relative to where the applications are running. We have seen that for many applications this works just fine, but some developers would like more control over how their customers are served and would, for example, like to give all their customers low-latency access regardless of location.
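The speed-of-light point is easy to quantify with a back-of-envelope calculation. The figures below are rough illustrative approximations (light in fiber travels about two-thirds of its vacuum speed, and the distance is a round number):

```python
# Back-of-envelope lower bound on network round-trip time: even a perfect
# network cannot beat physics. All figures are rough approximations.

C_KM_PER_MS = 300_000 / 1000   # light travels ~300 km per millisecond in vacuum
FIBER_FACTOR = 1.5             # light in fiber is ~1.5x slower than in vacuum

def min_round_trip_ms(distance_km):
    # Round trip = there and back, at fiber speed; ignores routing,
    # queueing, and processing delays, which only add to this.
    return 2 * distance_km * FIBER_FACTOR / C_KM_PER_MS

# New York to Sydney is roughly 16,000 km:
rtt = min_round_trip_ms(16_000)   # ~160 ms before any server work at all
```

A customer on the far side of the planet from a data center therefore pays well over a hundred milliseconds per round trip no matter how good the software is, which is exactly why serving content from nearby locations matters.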

At Amazon we have been building applications on these cloud principles for several years now and we are very much aware of the tools that developers need to build applications that are required to meet very high standards with respect to scalability, reliability, performance and cost-effectiveness. We are also listening very closely to the feedback AWS customers are giving us to make sure we expose the right tools for them to do their job. We launched Amazon S3 in Europe to ensure that developers could build applications that could serve data out of a European storage cloud. We launched Regions and Availability Zones (combined with Elastic IPs) for Amazon EC2 such that developers would have better control over where their applications would be running to ensure high-availability. We are now ready to expand the cloud even further and bring the cloud storage to its customers’ doorstep.

Today we are announcing that we are expanding the cloud by adding a new service that will give developers and businesses the ability to serve data to their customers world-wide, using low-latency and high data transfer rates. Using a global network of edge locations this new service can deliver popular data stored in Amazon S3 to customers around the globe through local access.

We have developed this content delivery service using the robust AWS principles we know work well for our customers:

 

  • Cost-effective: no commitments and no minimum usage requirements. You only pay for what you use in a manner similar to the other Amazon Web Services.
  • Simple to use: one API call gets you going. You store the data you want to distribute in an Amazon S3 bucket and use this API call to register the bucket with the content distribution service. The registration provides you with a new domain name that you can use in URLs to access the data through this service over HTTP. When your customer accesses your content through your new URL, the data it refers to is delivered through a network of edge servers.
  • Works well with other services: The service integrates seamlessly with Amazon S3 and the data/content served through the service can be accessed using the standard HTTP access techniques.
  • Reliable: Amazon S3 will give you durable storage of your data, and the network of edge locations on three continents used by the new service will deliver your content to your customers from the most appropriate location.
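The registration flow described in the bullets above can be sketched as follows. Every name here is hypothetical (the post does not show the real API or the format of the returned domain); the point is simply that one registration call yields a domain, and existing S3 keys become edge-served URLs:

```python
# Hypothetical sketch of the registration flow described above. The function
# names and the returned domain format are invented for illustration; a real
# implementation would call the actual AWS registration API.

def register_bucket_with_cdn(bucket_name):
    # Stand-in for the single registration API call the post mentions.
    return f"{bucket_name}.example-edge-network.net"

def edge_url(domain, key):
    # Content keeps its S3 key; only the host changes to the edge domain.
    return f"http://{domain}/{key}"

domain = register_bucket_with_cdn("my-photos")
url = edge_url(domain, "vacation/beach.jpg")
# → "http://my-photos.example-edge-network.net/vacation/beach.jpg"
```

This mirrors the "works well with other services" bullet: nothing about the stored objects changes, so the same content remains reachable both directly from S3 and through the edge network.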

 

This is an important first step in expanding the cloud to give developers even more control over how their applications and their data are served by the cloud. The service is currently in private beta, but we expect to have it widely available before the end of the year. You can get a few more details and sign up to be notified when the service becomes available on this AWS page. Also check Jeff Barr’s posting on the AWS weblog.

The Shape of the Cloud

outubro 30, 2008

More on cloud computing, this time from Tim Bray!

=====================
The Shape of the Cloud

There’s an interesting argument going on about the business-structure futures of the Big Cloud that everyone assumes is in our future. Some links in the chain: Hugh Macleod, Tim O’Reilly, Nick Carr, and Tim again. A few points about this seem obvious; among other things, Amazon Web Services is reminding me powerfully of Altavista.

Here are a few things that I think are true; the last section gets back to AWS and Altavista.

Monopolies Don’t Require Lock-in · Google has (effectively) a monopoly on consumer search. They have no lock-in; anyone who wants to can switch their default search provider in a few seconds. One could write a book, and several people have, about how they maintain their grip on the market, but let’s skip that and just see this as an existence proof. ¶

Which is to say, history tells us that Hugh Macleod’s vision of a single insanely-huge cloud provider is perfectly believable.

Low Barriers to Entry Win · We should have learned this by now. You don’t have to look back very hard at recent decades to see examples of technologies which have become pervasive even though, when they started catching on, they weren’t nearly as good as the competition. The reason is, they were easy to learn and easy to deploy. I’m thinking, just for example, of Linux and PHP and HTML. ¶

My sense is that the effortless-deployment threshold for the Cloud is somewhere below the effort required for AWS’s EC2/S3 combo; perhaps something like what a smart modern PHP-hosting service provides.

Economies of Scale · I don’t think they’re that big a deal. To play in this game, you’re going to need a few tens of thousands of servers, bulk-bandwidth deals with key carriers, and multiple data centers on the ground in North America, Europe, and Asia. That’s really expensive to build. But once a provider has got past the basic threshold, I’m unconvinced that their cost per unit of service is going to drop much as they get bigger and bigger. ¶

My take-away is that the big Cloud operator doesn’t necessarily need to be a Microsoft or a Google or an IBM. There are multiple pathways by which an unheralded startup could end up on top of the heap.

CIOs Aren’t Stupid · They’re bearing the scars of decades of being locked-in by Microsoft on their desktop software, and by Oracle on their databases; the situation where your infrastructure vendors gain control over part of your budget. ¶

So you can bet that as larger outfits take a strategic view of cloud computing, they’re going to be appropriately paranoid about barriers to exit. There’s going to be a powerful demand for standards and for demonstrated interoperability.

A Historical Analogy · To me, the cloud-computing landscape feels like the Web search landscape in 1996. Back then, everybody who’d thought about it had clued in that this was going to be really interesting and useful. There had been a few offerings, much better than nothing but not really hitting a sweet spot. [Disclosure: One of them was me.] Then Altavista launched and it was clearly better than anything else. Meanwhile, Larry and Sergey were at Stanford thinking about transitive functions over the graph of Web hyperlinks. ¶

Amazon Web Services smells like Altavista to me; a huge step in a good direction. But there are some very good Big Ideas waiting out there to launch, probably incubating right now in a garage or grad school.

Such an idea doesn’t have to come from a big company. And it doesn’t have to be proprietary to grow fast and dominate the landscape. It has to have no aroma of lock-in, it has to be obviously better, and most of all, more than anything else, it has to be really, really easy to get started with.

Updated: 2008/10/27

The Economics of Cloud Computing

outubro 30, 2008

Since we have started covering cloud computing on this blog, here is a post on “the economics” of the subject. In fact, the author overreached with the term (The Economics of …); a more accurate label would be a financial exercise in defense of cloud computing! The text was taken from http://broadcast.oreilly.com/2008/10/the-economics-of-cloud-c.html.

=================

The Economics of Cloud Computing

By George Reese
October 24, 2008

Cloud computing has been “the next cool thing” for at least the past 18 months. The current economic climate, however, may be the thing that accelerates the maturity of the technology and drives mainstream adoption in 2009.

This economic crisis is very different from the normal ebbs of the business cycle we have grown accustomed to. The difference lies in how rapidly sources of capital have dried up—whether that capital is coming from venture capitalists, angel investors, or banks. Capital markets are frozen, and companies needing to make capital investments to continue operations or grow are facing a daunting challenge.

A lack of capital creates a lack of flexibility in leveraging technology to operate and grow a business. If you can’t get access to a bank loan, you have to use your company revenues to buy new servers. Using company revenues can damage cash flow, harm valuations, and put otherwise healthy businesses at risk.

Typically, when a company wants to grow its IT infrastructure, it has two options:

  • Build it in house and own or lease the equipment.
  • Outsource the infrastructure to a managed services provider.

In both scenarios, a company must purchase enough infrastructure to support peak usage, regardless of normal system usage. One Valtira client, for example, has fairly low usage for most of the year but sees usage equaling millions of page views/month for about 15 minutes each quarter. Under both of the above options, they are faced with paying for an infrastructure needed for only about 1 hour each year, when something much more minimal would support the rest of the year.

Let’s assume for this customer that two application servers backed by two database servers and balanced by a load balancer will solve the problem. The options look something like this:

                     Internal IT   Managed Services
Capital Investment   $40,000       $0
Setup Costs          $10,000       $5,000
Monthly Services     $0            $4,000
Monthly Labor        $3,200        $0
Cost Over 3 Years    $149,000      $129,000

For this calculation, I assume fairly baseline, server-class systems with a good amount of RAM and on-board RAID5 such as a Dell 2950 and a good load balancer. The 3-year cost assumes a 10% cost of capital. I also assume a very cost-efficient managed services provider. Most of the big names will be at least three times more expensive than the numbers I am providing here.
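As a check on those 3-year totals, here is a small sketch (my own, not the author’s) that reproduces them as net present values: the up-front amounts are paid at time zero, and each monthly payment is discounted at the stated 10% cost of capital, compounded monthly.

```python
def npv_cost(upfront, monthly, months=36, annual_rate=0.10):
    """Up-front cost plus the present value of a monthly payment stream."""
    r = annual_rate / 12                    # monthly discount rate
    annuity = (1 - (1 + r) ** -months) / r  # PV factor for the payment stream
    return upfront + monthly * annuity

internal = npv_cost(40_000 + 10_000, 3_200)  # capital + setup up front, then labor
managed = npv_cost(5_000, 4_000)             # setup up front, then service fees
print(round(internal, -3), round(managed, -3))  # → 149000.0 129000.0
```

Both figures land within a few hundred dollars of the table, which suggests this is the discounting the author used.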

Under this scenario, managed services saves you a nice 13.5% over the do-it-yourself approach (assuming you don’t get taken to the cleaners by one of the big managed services companies). Of course, this does not consider the impact of a server outage at 3am, which is where managed services will shine.

What is particularly appealing about managed services, however, is the lack of capital investment. The $40,000 up front for an internal IT approach is a terrible burden in the current economic environment. Even if you can get credit, the cost of the loan makes that $40,000 considerably more expensive than $40,000 over three years.

Good argument for managed services? Yes, but a better argument for the cloud.

The cloud enters the picture looking like this:

                     Managed Services   The Cloud
Capital Investment   $0                 $0
Setup Costs          $5,000             $1,000
Monthly Services     $4,000             $2,400
Monthly Labor        $0                 $1,000
Cost Over 3 Years    $129,000           $106,000

Cloud savings over internal IT jump to 29%, without even getting into the discussion of buying for capacity versus buying what you use!

Between managed services and the cloud, the cloud provides 18% savings.

While 18% and 29% savings are nothing to sneeze at, they are just the start of the financial benefits of the cloud. It goes on.

  • No matter what your needs, your up-front cost is always $0
  • As the discrepancy between peak usage and standard usage grows, the cost difference between the cloud and other options becomes overwhelming.
  • The cloud option essentially includes a built-in SAN in the form of the Amazon Elastic Block Storage. The internal IT and managed services options would go up significantly if we added the cost of a SAN into the infrastructure.
  • Cheap redundancy! While the above environment is not quite a “high availability” environment, it is very highly redundant with systems spread across multiple data centers. The managed services and internal IT options, on the other hand, have single physical points of failure as the application servers and database servers are likely located in the same rack.

Let’s say, however, that you need 10 servers to handle peak usage for 1 hour each year and just 2 to operate the rest of the year. Ignoring the impact of the cost of capital:

  • Internal IT adds another $40,000 in total costs over 3 years.
  • Managed services adds another $144,000 in total costs over 3 years.
  • The Amazon Cloud adds about $24 in total costs over 3 years.

No, that was not a typo. That’s forty THOUSAND dollars against one hundred forty-four THOUSAND dollars against 24 dollars. And as I mentioned earlier, this setup is based on an actual Valtira client that was considering a dedicated managed services option before Valtira began deploying customers in the Amazon cloud. It is not some contrived example.
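The $24 figure is easy to reproduce. A sketch of the arithmetic, assuming roughly $1 per instance-hour (my assumption; the actual 2008 EC2 rate depended on instance size):

```python
extra_servers = 10 - 2   # servers needed only during the annual peak
peak_hours_per_year = 1  # the peak lasts about 1 hour each year
years = 3
rate_per_hour = 1.00     # assumed $/instance-hour (hypothetical rate)

extra_cost = extra_servers * peak_hours_per_year * years * rate_per_hour
print(extra_cost)        # → 24.0
```

The point survives any reasonable rate: paying by the hour for 24 instance-hours costs dollars, while the other two options must buy or lease the peak capacity year-round.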

Obviously, most organizations have either seasonal peaks or daily peaks (or both) with a less dramatic cost differential; but the cost differential is still quite dramatic and quite impactful to the bottom line. In addition, the ability to pay for what you use makes it easy to engage in “proofs of concept” and other R&D that requires dedicated hardware.

Cloud Computing: what does it mean?

outubro 30, 2008

A concept gaining a lot of visibility in information and communication technology circles is cloud computing. But what does it mean, and what is its impact on business and on our daily lives?

Quite simply, the term suggests that in the future we will not have, or even need, our data or software programs on our own computers. Instead, the data and programs will sit somewhere on someone’s servers (a company or organization), accessible via the Internet. This vast “mist” of people’s interconnected data and servers is thus called the “cloud.”

So far, so good! The question being discussed more recently is whether this vast cloud can be controlled by some entity or company, and whether that threatens our control over our own data and programs.

Two influential names in the web world have entered an interesting debate on this question: Tim O’Reilly (innovative entrepreneur and owner of an important blog, http://radar.oreilly.com) and Nicholas Carr, a writer who gained popularity with his book “Does IT Matter?” and who runs a very popular blog.

It all began with a post Tim O’Reilly wrote on 10/26/2008 entitled “Web 2.0 and Cloud Computing”. In it he referred to another post, by Hugh Macleod, who had stirred up some controversy with “The Cloud’s Best Kept Secret”. Hugh’s argument was the following: cloud computing would lead to a huge monopoly.

And O’Reilly extends Macleod’s argument a bit, pointing to what the latter had stated:

“…nobody seems to be talking about Power Laws (that is, the frequency distribution of a data set in which, for example, a small number of companies capture most of a business; emphasis ours). Nobody is saying that one day a single company may possibly emerge to dominate The Cloud, the way Google came to dominate Search, the way Microsoft came to dominate Software. Monopoly issues aside, can you imagine such a company? We would not be talking about a multi-billion dollar business like today’s Microsoft or Google. We are talking about something that could simply dwarf them. We are potentially talking about a multi-trillion dollar company. Possibly the largest company ever to have existed.”

From there, O’Reilly argues that the problem with this analysis is that it fails to consider what causes power laws in online activity. Understanding the dynamics of increasing returns on the web is the essence of what O’Reilly has called Web 2.0 (for those who still don’t know, it was O’Reilly who coined the term). According to him, ultimately, on the network, applications win if they get better the more people use them. As he pointed out back in 2005, Google, Amazon, eBay, Craigslist, Wikipedia, and all the other star Web 2.0 applications have this in common.
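The increasing-returns dynamic O’Reilly describes can be given a rough quantitative face. One classic (and admittedly simplified) formalization is Metcalfe’s law, which values a network by the number of possible connections among its n users. This sketch is mine, not O’Reilly’s:

```python
def connections(n: int) -> int:
    """Distinct user-to-user links in a network of n users: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, connections(n))
# 10 → 45, 100 → 4950, 1000 → 499500: a 10x increase in users yields
# roughly 100x more connections, the increasing-returns dynamic that
# tends to produce power-law, winner-take-most outcomes online.
```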

For O’Reilly, cloud computing, at least in the sense Macleod seems to be using the term (as a synonym for the infrastructure level of the cloud, best exemplified by Amazon S3 and EC2), does not have this kind of dynamic.

According to O’Reilly, it is true that the bigger players will enjoy economies of scale in equipment costs, and especially in energy costs, that are not available to smaller players. But there are only a few big players (Google, Microsoft, and Amazon, to name some) operating at that scale, with or without the cloud computing game. Moreover, economies of scale are not the same as the increasing returns of network effects for users; they can be characteristic of a commodity market that does not necessarily give disproportionate economic leverage to the winners.

From this point on, O’Reilly defines three types of cloud computing: Utility Computing, Platform as a Service, and cloud-based end-user applications.

All was going well until, on 10/26/2008, Nicholas Carr wrote a post on his blog entitled “What Tim O’Reilly gets wrong about the cloud”. According to Carr, O’Reilly raises important issues, but his analysis is also flawed, and the flaws of his argument are as revealing as its strengths.

And what is Carr’s critique? He starts with the example of Google, which O’Reilly lists as the first example of a business that has grown to dominance thanks to the network effect. Carr asks: “Is the network effect really the main engine fueling Google’s dominance of the search market?” He then argues that it is not. In essence, his critique of O’Reilly is that the network effect is not the only reason for the success of companies like Google.

On 10/27/2008, O’Reilly responded to Carr’s critique in a post entitled “Network Effects in Data”. In it, O’Reilly notes that Nick Carr’s difficulty in understanding his argument (that cloud computing is likely to end up a low-margin business unless companies find a way to harness the network effects at the heart of Web 2.0) made him realize he had been using the term “network effects” somewhat differently, and not in the simplistic way many understand it.

On the same day, 10/27/2008, Carr restated his critique of O’Reilly in the post “Further musings on the network effect and the cloud”. Since O’Reilly continued to reject his argument that Google’s success cannot be explained by the network effect, Carr invoked Prof. Hal Varian, one of the foremost explainers of the network effect and its implications, and nowadays one of Google’s top strategists. Drawing on an interview Carr himself conducted with Varian, he reproduces the following argument Varian made earlier this year:

Q: How do we explain Google’s entrenched position, even though differences in search algorithms are only now being recognized at the margins? Are there hidden network effects that make it better for all of us to use the same search engine?

A: The traditional forces that support market entrenchment, such as network effects, economies of scale, and switching costs, don’t apply to Google. To explain Google’s success you have to go back to a much older concept in economics: learning by doing. Google has been doing web search for nearly 10 years, so it’s not surprising that we do it better than our competitors. And we’re working very hard to keep it that way!

Well, it looks like a good discussion! Let’s see where it ends (or whether it has already ended)!

Multimedia Platforms

outubro 29, 2008

Nielsen, the digital-measurement research company, published on its site in July of this year data from a survey on the consumption of three media in the US: television, the Internet, and mobile devices such as cell phones. The results are interesting!

============ 

The Nielsen Company released the first comparable U.S. figures showing video and TV usage across the “three screens”: Television, Internet and Mobile devices. Nielsen’s findings show that the screen time of the average American continues to increase, with TV users watching more TV than ever before (127 hrs, 15 min per month), while also spending 9% more time using the Internet (26 hrs, 26 min per month) than last year. At the same time, a small but growing number of Internet and mobile phone users are watching video online (2 hrs, 19 min per month), as well as using their cell phones to watch video (3 hrs, 15 min per month).

To obtain a full copy of Nielsen’s Three Screen Report, go to http://www.nielsen.com/pdf/3_Screen_Report_May08_FINAL.pdf. These figures will be updated quarterly and available to the public at www.nielsen.com.

Introducing Google Earth for iPhone

outubro 29, 2008

News on the official Google blog!

Introducing Google Earth for iPhone

10/27/2008 12:01:00 AM

Even before we introduced Google Earth back in 2005, the team had long dreamed of being able to carry the Earth around in your pocket. Well, today that dream becomes a reality as we introduce Google Earth for iPhone and iPod touch. With just a swipe of your finger you can fly from Peoria to Paris to Papua New Guinea, or anywhere in between. It may be small, but it brings all the power of Google Earth to the palm of your hand, including all of the same global imagery and 3D terrain. You can even browse any of our 8 million Panoramio photos or read Wikipedia articles.

With Google Earth for iPhone, you can:
• Tilt your iPhone to adjust your view to see mountainous terrain
• View the Panoramio layer and browse the millions of geo-located photos from around the world
• View geo-located Wikipedia articles
• Use the ‘Location’ feature to fly to your current location
• Search for cities, places and businesses around the globe with Google Local Search

It’s available today in 18 languages and 22 countries in the iTunes App Store. To learn more, check out this video tour and read the blog post on the Lat Long Blog.

http://www.blogpulse.com/

outubro 29, 2008

I have just checked (at 12:00 on 10/29/2008) the blogosphere numbers on Blogpulse, and they are impressive!

=================

Total identified blogs: 93,356,026
New blogs in last 24 hours: 73,748
Blog posts indexed in last 24 hours: 785,433

Insights on China’s Digital Market

outubro 29, 2008

I was taking a brief dive into the Economics of Internet Search when I came across the numbers below, from a Nielsen Online post!

==================

Insights on China’s Digital Market

Charlie Buchwalter — October 13, 2008 10:39 am

What’s happening in the world’s largest Internet market? Last week we announced our JV in China – CR-Nielsen – the first company authorized to support the delivery of standardized Internet measurement services in China. The top sites are dominated by Chinese brands, with global brands like Google and Yahoo! present with their local sites. Overall, the numbers are quite significant. The top site, Baidu, was visited by slightly more than 171 million unique browsers during a single week. And while there is no exact comparison between unique browsers and unique visitors – historically our standard audience measurement metric – 171 million of anything during a single week is a big number! So while we are just starting to scratch the surface in this growing market, we know more today than we did yesterday – and we look forward to delivering insights into this important market.


