Archive for November 2010

Talk on Liberty and Prosperity

November 30, 2010

Watch this talk on Liberty and Prosperity, which I took from a post by an economist colleague (Cláudio Shikida):

http://gustibusgustibus.wordpress.com/

29/11/2010

Paulo Rabello de Castro

Many people would learn a great deal if they opened their minds to Paulo Rabello's words. Of course, you do not have to agree with everything, but his talk shows how Brazilian liberalism really can have a bright future. All it takes is for people to realize, once and for all, that liberalism is in essence the defense of the individual against arbitrary power of every kind.

Once again: the whole talk is worth listening to.

http://www.youtube.com/watch?v=26lbonJ1BdE (part 1)

http://www.youtube.com/watch?v=yrhfaxFRV28 (part 2)

http://www.youtube.com/watch?v=wcRrMTjq6RQ&feature=sub (part 3)


Thoughts on QE2 (Prof. Robert Barro)

November 27, 2010

 

Here is the always timely opinion of Prof. Robert Barro on the American policy of monetary easing (round 2), which recently appeared in The Economist!

===========

Thoughts on QE2

Nov 23rd 2010, 17:52 by Robert Barro | Harvard University

Robert Barro is a professor of economics at Harvard and a senior fellow of Stanford’s Hoover Institution.

A LOT has been written recently, pro and con, about the Fed’s new round of quantitative easing, dubbed QE2. But, frankly, much of the discussion on both sides lacks a coherent analytical framework for thinking about the key issues. I try here to provide such a framework.

The Fed, personified by its chairman, Ben Bernanke, is concerned about the weak economic recovery and, particularly, by the possibility of future deflation. To counter this tendency, the Fed plans a new round of monetary expansion. The main conclusions that I reach are:

In the present environment, where short-term nominal interest rates are essentially zero, expansionary open-market operations involving Treasury bills would do nothing (a point with which the Fed concurs).

Expansionary open-market operations featuring long-term Treasury bonds (QE2) might be expansionary. However, this operation is equivalent to the Treasury shortening the maturity of its outstanding debt. It is unclear why the Fed, rather than the Treasury, should be in the debt-maturity business.

The most important issue, of which the Fed is keenly aware, involves the exit strategy for avoiding inflation once the economy has improved and short-term nominal interest rates are no longer zero. The conventional exit strategy relies on contractionary open-market operations, but the worry is that this strategy would hold back an economic recovery. The Fed believes that paying higher interest rates on reserves gives it an added instrument that will help the economy recover more vigorously while avoiding inflation. I think this view is incorrect. I find that:

In an exit strategy, raising interest rates on reserves to match rising interest rates on Treasury bills is equivalent to a contractionary open-market operation whereby the Fed cuts reserves along with its holdings of bills. Therefore, increasing interest rates on reserves is just as contractionary as the standard exit strategy.

We can compare instead with an exit strategy whereby the Fed reduces the quantity of reserves and its holdings of long-term Treasury bonds. This operation is equivalent to the above strategy plus a lengthening of the maturity of the Treasury’s outstanding debt, something the Treasury can accomplish or avoid without help from the Fed.

As a background, the Fed has, since August 2008, expanded its balance sheet by around $1 trillion. Thus, the Fed has roughly $1 trillion more in assets (dominated by mortgage-backed securities, but that can be the topic of a different column). On the liability side of the Fed’s ledger, excess reserves that pay close to zero interest have expanded by about $1 trillion. Institutions are willing to hold this vast amount of non-interest-bearing claims because of the weak economy; in particular, the financial crisis dramatically increased the demand for low-risk assets, such as reserves held at the Fed. Because of this rise in demand, the dramatic expansion of the quantity of “money” has not yet been inflationary.

For institutions that can hold reserves at the Fed, excess reserves are essentially equivalent to Treasury bills. Therefore, interest rates paid on these two forms of assets have to be nearly the same; in the present environment, both rates are close to zero. If the Fed carries out a conventional expansionary open-market operation, whereby it buys more bills while creating more reserves, the private sector ends up with fewer bills and correspondingly more reserves. Since institutions regard these two claims as essentially the same, there are no effects on the economy; that is, no effects on the price level, real GDP, and so on.

If the Fed does QE2, then it essentially adds to the conventional open-market operation a sale of Treasury bills and a purchase of long-term Treasury bonds. Bills and bonds are not the same, as evidenced by the difference in yields—bills are paying 0.1% while ten-year bonds are paying almost 3%. The hope is that the smaller quantity of long-term Treasury bonds outstanding (outside of the Fed) will tend to raise their price or, equivalently, lower the long-term yield. This reduction in long-term rates might spur aggregate demand. This reasoning may be correct but, as already noted, it has to be the same as the Treasury changing the maturity structure of its debt; that is, funding with more short-term and less long-term debt.
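To make that equivalence concrete, here is a minimal balance-sheet sketch in Python. It is my own illustration, not from the article, and all dollar figures are hypothetical round numbers; reserves and Treasury bills are treated as interchangeable short-term claims, as in Barro's argument.

```python
# Toy balance-sheet arithmetic for the claim that QE2 is equivalent to the
# Treasury shortening the maturity of its outstanding debt.
# Figures are hypothetical (in $ billions).

def private_portfolio(short_term, long_bonds):
    """Private sector's claims on the consolidated government (Fed + Treasury)."""
    return {"short-term claims (reserves + bills)": short_term,
            "long-term Treasury bonds": long_bonds}

start = private_portfolio(short_term=1000, long_bonds=1000)

# Case A: QE2 -- the Fed creates 600 of new reserves and buys 600 of
# long-term bonds from the private sector.
qe2 = private_portfolio(short_term=1000 + 600, long_bonds=1000 - 600)

# Case B: no Fed action -- the Treasury issues 600 of bills and uses the
# proceeds to retire 600 of long-term bonds.
treasury_maturity_swap = private_portfolio(short_term=1000 + 600, long_bonds=1000 - 600)

print(start, qe2, treasury_maturity_swap, sep="\n")
assert qe2 == treasury_maturity_swap  # identical private-sector portfolios
```

Either way, the private sector ends up holding the same mix of short- and long-dated claims on the consolidated government, which is the sense in which the two operations are equivalent.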

The exit strategy comes into play when and if the economy has improved and, hence, institutions no longer have an enormous demand for low-risk excess reserves that pay zero interest. If the Fed kept the interest rate on reserves at near zero and had no contractionary open-market operations, the extra $1 trillion of reserves would become highly inflationary. To avoid the inflation, the standard policy would be contractionary open-market operations that reduce the quantity of “money”.

The Fed thinks it can improve on the exit strategy by instead raising the interest rate paid on reserves. For example, if rates on Treasury bills rise to 2%, the Fed could pay around 2% on reserves to induce institutions to maintain the excess reserves of $1 trillion held at the Fed. However, at that point, it would still be true that open-market operations involving reserves and bills would not matter. That is, the Fed’s selling off $1 trillion of Treasury bills (if it had that much) in exchange for $1 trillion of reserves would have no effect. This reasoning implies that the exit strategy of raising the interest rate on reserves in tandem with the rise in interest rates on bills is equivalent to the standard contractionary open-market policy. That is, the effects on the real economy are the same.
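The same kind of toy arithmetic illustrates why, in Barro's argument, paying a market rate on reserves is equivalent to the standard contractionary exit. Again, the figures below are hypothetical and only sketch the reasoning.

```python
# Stylized comparison of the two exit strategies once bill rates have risen.
RESERVES = 1000        # excess reserves to be neutralized (hypothetical, $ billions)
MARKET_RATE = 0.02     # prevailing Treasury-bill rate after the recovery (assumed)

# Strategy 1: keep the reserves but pay the market rate on them.
pay_interest_on_reserves = {
    "short-term government claims held privately": RESERVES,
    "yield on those claims": MARKET_RATE,
    "annual interest borne by the consolidated government": RESERVES * MARKET_RATE,
}

# Strategy 2: contractionary open-market operation -- swap the reserves for
# an equal amount of bills yielding the same market rate.
contractionary_omo = {
    "short-term government claims held privately": RESERVES,
    "yield on those claims": MARKET_RATE,
    "annual interest borne by the consolidated government": RESERVES * MARKET_RATE,
}

# Either way the private sector holds the same quantity of interest-bearing
# short-term government claims at the same rate, hence the same real effects.
assert pay_interest_on_reserves == contractionary_omo
```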

In practice, the alternative to raising interest on reserves is not a massive sale of Treasury bills (which the Fed does not possess) but, rather, selling off a large portion of the assets accumulated since August 2008. After QE2, this would likely be mostly Treasury bonds but it could also be mortgage-backed securities. When compared to selling bills, the sale of bonds has the reverse of the effect discussed before—the extra bonds would likely require a reduction in price, corresponding to a higher long-term yield and, thereby, an added contractionary force. But, again, the Treasury could offset this effect by changing the maturity structure of its outstanding debt (by shifting toward bills and away from bonds).

My conclusion is that QE2 may be a short-term expansionary force, thereby lessening concerns about deflation. However, the Treasury can produce identical effects by changing the maturity structure of its outstanding debts. The downside of QE2 is that it intensifies the problems of an exit strategy aimed at avoiding the inflationary consequences of the Fed’s vast monetary expansion. The Fed is over-confident about its ability to manage the exit strategy; in particular, it is wrong to view increases in interest rates paid on reserves as a new and more effective instrument for accomplishing a painless exit.

 

 

Long Live the Web: A Call for Continued Open Standards and Neutrality

November 23, 2010

A provocative and well-grounded article by the creator of the Web, Tim Berners-Lee, in Scientific American (http://www.scientificamerican.com)!

==========

Long Live the Web: A Call for Continued Open Standards and Neutrality

The Web is critical not merely to the digital revolution but to our continued prosperity—and even our liberty. Like democracy itself, it needs defending

By Tim Berners-Lee | November 22, 2010

Image: Illustration by John Hendrix

In Brief

  • The principle of universality allows the Web to work no matter what hardware, software, network connection or language you use and to handle information of all types and qualities. This principle guides Web technology design.
  • Technical standards that are open and royalty-free allow people to create applications without anyone’s permission or having to pay. Patents, and Web services that do not use the common URIs for addresses, limit innovation.
  • Threats to the Internet, such as companies or governments that interfere with or snoop on Internet traffic, compromise basic human network rights.
  • Web applications, linked data and other future Web technologies will flourish only if we protect the medium’s basic principles.

The World Wide Web went live, on my physical desktop in Geneva, Switzerland, in December 1990. It consisted of one Web site and one browser, which happened to be on the same computer. The simple setup demonstrated a profound concept: that any person could share information with anyone else, anywhere. In this spirit, the Web spread quickly from the grassroots up. Today, at its 20th anniversary, the Web is thoroughly integrated into our daily lives. We take it for granted, expecting it to “be there” at any instant, like electricity.

The Web evolved into a powerful, ubiquitous tool because it was built on egalitarian principles and because thousands of individuals, universities and companies have worked, both independently and together as part of the World Wide Web Consortium, to expand its capabilities based on those principles.

The Web as we know it, however, is being threatened in different ways. Some of its most successful inhabitants have begun to chip away at its principles. Large social-networking sites are walling off information posted by their users from the rest of the Web. Wireless Internet providers are being tempted to slow traffic to sites with which they have not made deals. Governments—totalitarian and democratic alike—are monitoring people’s online habits, endangering important human rights.

If we, the Web’s users, allow these and other trends to proceed unchecked, the Web could be broken into fragmented islands. We could lose the freedom to connect with whichever Web sites we want. The ill effects could extend to smartphones and pads, which are also portals to the extensive information that the Web provides.

Why should you care? Because the Web is yours. It is a public resource on which you, your business, your community and your government depend. The Web is also vital to democracy, a communications channel that makes possible a continuous worldwide conversation. The Web is now more critical to free speech than any other medium. It brings principles established in the U.S. Constitution, the British Magna Carta and other important documents into the network age: freedom from being snooped on, filtered, censored and disconnected.

Yet people seem to think the Web is some sort of piece of nature, and if it starts to wither, well, that’s just one of those unfortunate things we can’t help. Not so. We create the Web, by designing computer protocols and software; this process is completely under our control. We choose what properties we want it to have and not have. It is by no means finished (and it’s certainly not dead). If we want to track what government is doing, see what companies are doing, understand the true state of the planet, find a cure for Alzheimer’s disease, not to mention easily share our photos with our friends, we the public, the scientific community and the press must make sure the Web’s principles remain intact—not just to preserve what we have gained but to benefit from the great advances that are still to come.

Universality Is the Foundation
Several principles are key to assuring that the Web becomes ever more valuable. The primary design principle underlying the Web’s usefulness and growth is universality. When you make a link, you can link to anything. That means people must be able to put anything on the Web, no matter what computer they have, software they use or human language they speak and regardless of whether they have a wired or wireless Internet connection. The Web should be usable by people with disabilities. It must work with any form of information, be it a document or a point of data, and information of any quality—from a silly tweet to a scholarly paper. And it should be accessible from any kind of hardware that can connect to the Internet: stationary or mobile, small screen or large.

These characteristics can seem obvious, self-maintaining or just unimportant, but they are why the next blockbuster Web site or the new homepage for your kid’s local soccer team will just appear on the Web without any difficulty. Universality is a big demand, for any system.

Decentralization is another important design feature. You do not have to get approval from any central authority to add a page or make a link. All you have to do is use three simple, standard protocols: write a page in the HTML (hypertext markup language) format, name it with the URI naming convention, and serve it up on the Internet using HTTP (hypertext transfer protocol). Decentralization has made widespread innovation possible and will continue to do so in the future.
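As a concrete illustration of those three standards working together, here is a minimal sketch of my own (not part of the article); the file name and port are arbitrary choices.

```python
# A minimal illustration of the three standards in the paragraph above:
# write a page in HTML, give it a URI, and serve it over HTTP.
from http.server import HTTPServer, SimpleHTTPRequestHandler

PAGE = """<!DOCTYPE html>
<html>
  <head><title>Hello, Web</title></head>
  <body>
    <!-- a link can point at anything that has a URI -->
    <p><a href="https://www.w3.org/">The World Wide Web Consortium</a></p>
  </body>
</html>
"""

with open("index.html", "w", encoding="utf-8") as f:
    f.write(PAGE)

# The page is now reachable at the URI http://localhost:8000/index.html;
# no central authority had to approve it.
HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler).serve_forever()
```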

The URI is the key to universality. (I originally called the naming scheme URI, for universal resource identifier; it has come to be known as URL, for uniform resource locator.) The URI allows you to follow any link, regardless of the content it leads to or who publishes that content. Links turn the Web’s content into something of greater value: an interconnected information space.

Several threats to the Web’s universality have arisen recently. Cable television companies that sell Internet connectivity are considering whether to limit their Internet users to downloading only the company’s mix of entertainment. Social-networking sites present a different kind of problem. Facebook, LinkedIn, Friendster and others typically provide value by capturing information as you enter it: your birthday, your e-mail address, your likes, and links indicating who is friends with whom and who is in which photograph. The sites assemble these bits of data into brilliant databases and reuse the information to provide value-added service—but only within their sites. Once you enter your data into one of these services, you cannot easily use them on another site. Each site is a silo, walled off from the others. Yes, your site’s pages are on the Web, but your data are not. You can access a Web page about a list of people you have created in one site, but you cannot send that list, or items from it, to another site.

The isolation occurs because each piece of information does not have a URI. Connections among data exist only within a site. So the more you enter, the more you become locked in. Your social-networking site becomes a central platform—a closed silo of content, and one that does not give you full control over your information in it. The more this kind of architecture gains widespread use, the more the Web becomes fragmented, and the less we enjoy a single, universal information space.

A related danger is that one social-networking site—or one search engine or one browser—gets so big that it becomes a monopoly, which tends to limit innovation. As has been the case since the Web began, continued grassroots innovation may be the best check and balance against any one company or government that tries to undermine universality. GnuSocial and Diaspora are projects on the Web that allow anyone to create their own social network from their own server, connecting to anyone on any other site. The Status.net project, which runs sites such as identi.ca, allows you to operate your own Twitter-like network without the Twitter-like centralization.

Open Standards Drive Innovation
Allowing any site to link to any other site is necessary but not sufficient for a robust Web. The basic Web technologies that individuals and companies need to develop powerful services must be available for free, with no royalties. Amazon.com, for example, grew into a huge online bookstore, then music store, then store for all kinds of goods because it had open, free access to the technical standards on which the Web operates. Amazon, like any other Web user, could use HTML, URI and HTTP without asking anyone’s permission and without having to pay. It could also use improvements to those standards developed by the World Wide Web Consortium, allowing customers to fill out a virtual order form, pay online, rate the goods they had purchased, and so on.

By “open standards” I mean standards that can have any committed expert involved in the design, that have been widely reviewed as acceptable, that are available for free on the Web, and that are royalty-free (no need to pay) for developers and users. Open, royalty-free standards that are easy to use create the diverse richness of Web sites, from the big names such as Amazon, Craigslist and Wikipedia to obscure blogs written by adult hobbyists and to homegrown videos posted by teenagers.

Openness also means you can build your own Web site or company without anyone’s approval. When the Web began, I did not have to obtain permission or pay royalties to use the Internet’s own open standards, such as the well-known transmission control protocol (TCP) and Internet protocol (IP). Similarly, the Web Consortium’s royalty-free patent policy says that the companies, universities and individuals who contribute to the development of a standard must agree they will not charge royalties to anyone who may use the standard.

Open, royalty-free standards do not mean that a company or individual cannot devise a blog or photo-sharing program and charge you to use it. They can. And you might want to pay for it if you think it is “better” than others. The point is that open standards allow for many options, free and not.

Indeed, many companies spend money to develop extraordinary applications precisely because they are confident the applications will work for anyone, regardless of the computer hardware, operating system or Internet service provider (ISP) they are using—all made possible by the Web’s open standards. The same confidence encourages scientists to spend thousands of hours devising incredible databases that can share information about proteins, say, in hopes of curing disease. The confidence encourages governments such as those of the U.S. and the U.K. to put more and more data online so citizens can inspect them, making government increasingly transparent. Open standards also foster serendipitous creation: someone may use them in ways no one imagined. We discover that on the Web every day.

In contrast, not using open standards creates closed worlds. Apple’s iTunes system, for example, identifies songs and videos using URIs that are open. But instead of “http:” the addresses begin with “itunes:,” which is proprietary. You can access an “itunes:” link only using Apple’s proprietary iTunes program. You can’t make a link to any information in the iTunes world—a song or information about a band. You can’t send that link to someone else to see. You are no longer on the Web. The iTunes world is centralized and walled off. You are trapped in a single store, rather than being on the open marketplace. For all the store’s wonderful features, its evolution is limited to what one company thinks up.

Other companies are also creating closed worlds. The tendency for magazines, for example, to produce smartphone “apps” rather than Web apps is disturbing, because that material is off the Web. You can’t bookmark it or e-mail a link to a page within it. You can’t tweet it. It is better to build a Web app that will also run on smartphone browsers, and the techniques for doing so are getting better all the time.

Some people may think that closed worlds are just fine. The worlds are easy to use and may seem to give those people what they want. But as we saw in the 1990s with the America Online dial-up information system that gave you a restricted subset of the Web, these closed, “walled gardens,” no matter how pleasing, can never compete in diversity, richness and innovation with the mad, throbbing Web market outside their gates. If a walled garden has too tight a hold on a market, however, it can delay that outside growth.

Keep the Web Separate from the Internet
Keeping the Web universal and keeping its standards open help people invent new services. But a third principle—the separation of layers—partitions the design of the Web from that of the Internet.

This separation is fundamental. The Web is an application that runs on the Internet, which is an electronic network that transmits packets of information among millions of computers according to a few open protocols. An analogy is that the Web is like a household appliance that runs on the electricity network. A refrigerator or printer can function as long as it uses a few standard protocols—in the U.S., things like operating at 120 volts and 60 hertz. Similarly, any application—among them the Web, e-mail or instant messaging—can run on the Internet as long as it uses a few standard Internet protocols, such as TCP and IP.

Manufacturers can improve refrigerators and printers without altering how electricity functions, and utility companies can improve the electrical network without altering how appliances function. The two layers of technology work together but can advance independently. The same is true for the Web and the Internet. The separation of layers is crucial for innovation. In 1990 the Web rolled out over the Internet without any changes to the Internet itself, as have all improvements since. And in that time, Internet connections have sped up from 300 bits per second to 300 million bits per second (Mbps) without the Web having to be redesigned to take advantage of the upgrades.
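The layering can also be seen directly in code: an HTTP request is plain text handed to an ordinary TCP connection, and neither layer needs to know the other's internals. A minimal sketch follows (example.com is used purely for illustration).

```python
# The Web as one application among many running over TCP/IP: an HTTP
# request is just text written to an ordinary TCP connection.
import socket

with socket.create_connection(("example.com", 80)) as sock:
    # HTTP layer: a plain-text request, defined independently of TCP/IP.
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# First line of the reply, e.g. "HTTP/1.1 200 OK"
print(response.split(b"\r\n")[0].decode("ascii", errors="replace"))
```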

Electronic Human Rights
Although Internet and Web designs are separate, a Web user is also an Internet user and therefore relies on an Internet that is free from interference. In the early Web days it was too technically difficult for a company or country to manipulate the Internet to interfere with an individual Web user. Technology for interference has become more powerful, however. In 2007 BitTorrent, a company whose “peer-to-peer” network protocol allows people to share music, video and other files directly over the Internet, complained to the Federal Communications Commission that the ISP giant Comcast was blocking or slowing traffic to subscribers who were using the BitTorrent application. The FCC told Comcast to stop the practice, but in April 2010 a federal court ruled the FCC could not require Comcast to do so. A good ISP will often manage traffic so that when bandwidth is short, less crucial traffic is dropped, in a transparent way, so users are aware of it. An important line exists between that action and using the same power to discriminate.

This distinction highlights the principle of net neutrality. Net neutrality maintains that if I have paid for an Internet connection at a certain quality, say, 300 Mbps, and you have paid for that quality, then our communications should take place at that quality. Protecting this concept would prevent a big ISP from sending you video from a media company it may own at 300 Mbps but sending video from a competing media company at a slower rate. That amounts to commercial discrimination. Other complications could arise. What if your ISP made it easier for you to connect to a particular online shoe store and harder to reach others? That would be powerful control. What if the ISP made it difficult for you to go to Web sites about certain political parties, or religions, or sites about evolution?

Unfortunately, in August, Google and Verizon for some reason suggested that net neutrality should not apply to mobile phone–based connections. Many people in rural areas from Utah to Uganda have access to the Internet only via mobile phones; exempting wireless from net neutrality would leave these users open to discrimination of service. It is also bizarre to imagine that my fundamental right to access the information source of my choice should apply when I am on my WiFi-connected computer at home but not when I use my cell phone.

A neutral communications medium is the basis of a fair, competitive market economy, of democracy, and of science. Debate has risen again in the past year about whether government legislation is needed to protect net neutrality. It is. Although the Internet and Web generally thrive on lack of regulation, some basic values have to be legally preserved.

No Snooping
Other threats to the Web result from meddling with the Internet, including snooping. In 2008 one company, Phorm, devised a way for an ISP to peek inside the packets of information it was sending. The ISP could determine every URI that any customer was browsing. The ISP could then create a profile of the sites the user went to in order to produce targeted advertising.

Accessing the information within an Internet packet is equivalent to wiretapping a phone or opening postal mail. The URIs that people use reveal a good deal about them. A company that bought URI profiles of job applicants could use them to discriminate in hiring people with certain political views, for example. Life insurance companies could discriminate against people who have looked up cardiac symptoms on the Web. Predators could use the profiles to stalk individuals. We would all use the Web very differently if we knew that our clicks can be monitored and the data shared with third parties.

Free speech should be protected, too. The Web should be like a white sheet of paper: ready to be written on, with no control over what is written. Earlier this year Google accused the Chinese government of hacking into its databases to retrieve the e-mails of dissidents. The alleged break-ins occurred after Google resisted the government’s demand that the company censor certain documents on its Chinese-language search engine.

Totalitarian governments aren’t the only ones violating the network rights of their citizens. In France a law created in 2009, named Hadopi, allowed a new agency by the same name to disconnect a household from the Internet for a year if someone in the household was alleged by a media company to have ripped off music or video. After much opposition, in October the Constitutional Council of France required a judge to review a case before access was revoked, but if approved, the household could be disconnected without due process. In the U.K., the Digital Economy Act, hastily passed in April, allows the government to order an ISP to terminate the Internet connection of anyone who appears on a list of individuals suspected of copyright infringement. In September the U.S. Senate introduced the Combating Online Infringement and Counterfeits Act, which would allow the government to create a blacklist of Web sites—hosted on or off U.S. soil—that are accused of infringement and to pressure or require all ISPs to block access to those sites.

In these cases, no due process of law protects people before they are disconnected or their sites are blocked. Given the many ways the Web is crucial to our lives and our work, disconnection is a form of deprivation of liberty. Looking back to the Magna Carta, we should perhaps now affirm: “No person or organization shall be deprived of the ability to connect to others without due process of law and the presumption of innocence.”

When your network rights are violated, public outcry is crucial. Citizens worldwide objected to China’s demands on Google, so much so that Secretary of State Hillary Clinton said the U.S. government supported Google’s defiance and that Internet freedom—and with it, Web freedom—should become a formal plank in American foreign policy. In October, Finland made broadband access, at 1 Mbps, a legal right for all its citizens.

Linking to the Future
As long as the Web’s basic principles are upheld, its ongoing evolution is not in the hands of any one person or organization—neither mine nor anyone else’s. If we can preserve the principles, the Web promises some fantastic future capabilities.

For example, the latest version of HTML, called HTML5, is not just a markup language but a computing platform that will make Web apps even more powerful than they are now. The proliferation of smartphones will make the Web even more central to our lives. Wireless access will be a particular boon to developing countries, where many people do not have connectivity by wire or cable but do have it wirelessly. Much more needs to be done, of course, including accessibility for people with disabilities and devising pages that work well on all screens, from huge 3-D displays that cover a wall to wristwatch-size windows.

A great example of future promise, which leverages the strengths of all the principles, is linked data. Today’s Web is quite effective at helping people publish and discover documents, but our computer programs cannot read or manipulate the actual data within those documents. As this problem is solved, the Web will become much more useful, because data about nearly every aspect of our lives are being created at an astonishing rate. Locked within all these data is knowledge about how to cure diseases, foster business value and govern our world more effectively.

Scientists are actually at the forefront of some of the largest efforts to put linked data on the Web. Researchers, for example, are realizing that in many cases no single lab or online data repository is sufficient to discover new drugs. The information necessary to understand the complex interactions between diseases, biological processes in the human body, and the vast array of chemical agents is spread across the world in a myriad of databases, spreadsheets and documents.

One success relates to drug discovery to combat Alzheimer’s disease. A number of corporate and government research labs dropped their usual refusal to open their data and created the Alzheimer’s Disease Neuroimaging Initiative. They posted a massive amount of patient information and brain scans as linked data, which they have dipped into many times to advance their research. In a demonstration I witnessed, a scientist asked the question, “What proteins are involved in signal transduction and are related to pyramidal neurons?” When put into Google, the question got 233,000 hits—and not one single answer. Put into the linked databases world, however, it returned a small number of specific proteins that have those properties.
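As a toy illustration of why linked data can answer such a question precisely, here is a small sketch of my own; the protein names and properties are invented placeholders, not real ADNI data.

```python
# A toy version of the linked-data idea: facts stored as
# (subject, predicate, object) triples can be queried precisely.
triples = {
    ("ProteinA", "involved_in", "signal transduction"),
    ("ProteinA", "related_to", "pyramidal neurons"),
    ("ProteinB", "involved_in", "signal transduction"),
    ("ProteinC", "related_to", "pyramidal neurons"),
}

def subjects_with(predicate, obj):
    """All subjects linked to `obj` by `predicate`."""
    return {s for (s, p, o) in triples if p == predicate and o == obj}

# "What proteins are involved in signal transduction and are related to
# pyramidal neurons?" -- the intersection of two precise queries.
answer = (subjects_with("involved_in", "signal transduction")
          & subjects_with("related_to", "pyramidal neurons"))
print(answer)  # {'ProteinA'}
```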

The investment and finance sectors can benefit from linked data, too. Profit is generated, in large part, from finding patterns in an increasingly diverse set of information sources. Data are all over our personal lives as well. When you go onto your social-networking site and indicate that a newcomer is your friend, that establishes a relationship. And that relationship is data.

Linked data raise certain issues that we will have to confront. For example, new data-integration capabilities could pose privacy challenges that are hardly addressed by today’s privacy laws. We should examine legal, cultural and technical options that will preserve privacy without stifling beneficial data-sharing capabilities.

Now is an exciting time. Web developers, companies, governments and citizens should work together openly and cooperatively, as we have done thus far, to preserve the Web’s fundamental principles, as well as those of the Internet, ensuring that the technological protocols and social conventions we set up respect basic human values. The goal of the Web is to serve humanity. We build it now so that those who come to it later will be able to create things that we cannot ourselves imagine.

Six Degrees of Separation

November 23, 2010

I already knew the principle of the Six Degrees of Separation, but I had not yet seen the video made to present it, which is quite revealing of the underlying theory (network theory). Very good!

PS: The link where I found it is:

http://www.viddler.com/explore/JasDhaliwal/videos/24/20.13
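For readers who want to experiment with the underlying network theory, here is a minimal sketch of my own using the networkx library; it builds a synthetic small-world graph (unrelated to the video's data) and checks how short the typical path between nodes is.

```python
# A small experiment with the "small world" effect behind six degrees of
# separation, using the networkx library.
import networkx as nx

# 1,000 "people", each tied to 10 neighbours, with 10% of the ties
# randomly rewired (the Watts-Strogatz small-world model).
G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=42)

avg = nx.average_shortest_path_length(G)
print(f"Average separation between two people: {avg:.2f} steps")
# Even with mostly local ties, a few random shortcuts keep the average
# separation down to a handful of steps.
```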

Analyzing SaaS Revenue Forecasts and Market Growth by Segment

November 20, 2010

An interesting post from the blog http://softwarestrategiesblog.com!

================

Analyzing SaaS Revenue Forecasts and Market Growth by Segment

Posted by: Louis Columbus | November 18, 2010

A recent report published by Standard & Poor’s Equity Research Services on the computer software industry makes for interesting reading.

Zaineb Bokhari, Application Software Analyst, authored the 47-page report. He shows how the software industry is going through a fundamental restructuring due to the impact of SaaS, open source, and the many variations in licensing programs.

His analysis also shows how these factors taken together form a powerful catalyst of long-term disruption to business models.  At one point, the study predicts the end of perpetual licensing due to the time-to-value contributions of SaaS-based applications.  The report is available for download to Equity Research Services clients, including many college and university online libraries that have Standard & Poor’s subscriptions.

Here are the key take-aways from reading this report:

  • IDC expects the market for global packaged software to grow at a compound annual growth rate (CAGR) of 5.8% from 2009 to 2014. Over the same period, IDC projects software-as-a-service will grow at a 25.3% CAGR.

  • Standard & Poor’s reports that the SaaS category is still dwarfed by traditional packaged software, which IDC sized at $272 billion in 2009 (versus $13.1 billion for SaaS).
  • According to IDC’s forecasts, the size of the SaaS market will rise from just under 5% of the size of the packaged software market in 2009 to more than 11% by 2014.
  • Standard & Poor’s expects corporate spending on enterprise software and related maintenance to grow in the low to mid-single-digit range (i.e., 3%–6%) in 2010, with some segments expanding at above-average levels.
  • Overall revenues from SaaS delivery models reached $13.1 billion in 2009, a growth rate of 34.2% from 2008, according to IDC. IDC expects this revenue to rise to $40.5 billion by 2014, a compound annual growth rate (CAGR) of 25.3%; a quick arithmetic check of these figures appears after this list. This is dwarfed by the $272 billion that IDC believes was spent globally on software in 2009.
  • Application development and deployment is projected to have the most rapid growth of all segments, attaining a 39.2% CAGR from 2009 to 2014. Please see the table below, Worldwide Software-as-a-Service Revenues Forecast by Segment, for a year-by-year breakout of this category.
  • Infrastructure software is forecasted to grow at a 27.4% CAGR through 2014, totaling 11,345 instances by 2014 according to IDC.  The year-by-year breakouts are also included in the following table.
  • Applications are expected to grow at a 20.4% CAGR through 2014 in unit terms and to attain a 50.8% market share of all SaaS segments by 2014.
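A quick arithmetic check of the IDC figures quoted above, using only the numbers given in the list (my own calculation):

```python
# Check of the quoted IDC figures: $13.1 billion of SaaS revenue in 2009
# growing at a 25.3% CAGR through 2014.
saas_2009 = 13.1        # $ billions
cagr = 0.253
years = 2014 - 2009

saas_2014 = saas_2009 * (1 + cagr) ** years
print(f"Implied 2014 SaaS revenue: ${saas_2014:.1f}B")          # about $40.5B, as forecast

packaged_2009 = 272.0   # $ billions of packaged software, 2009
print(f"SaaS share of 2009 software spend: {saas_2009 / packaged_2009:.1%}")  # just under 5%
```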

Bottom line: Enterprise software pricing models must change to stay in step with customers’ expectations of more value for their maintenance and licensing fees. The evolving economics of cloud and SaaS-based application deployment are in the process of permanently re-ordering enterprise software.

The Effects of Government-Sponsored Venture Capital: International Evidence

November 19, 2010

Look at this interesting paper just released by the National Bureau of Economic Research (NBER)! Its most interesting conclusion is:

Thus, a little bit of government support appears to be a good thing but too much government support has the opposite effect.

==============

The Effects of Government-Sponsored Venture Capital: International Evidence

James A. Brander, Qianqian Du, Thomas F. Hellmann

NBER Working Paper No. 16521
Issued in November 2010
NBER Program(s):   CF

This paper examines the impact of government-sponsored venture capitalists (GVCs) on the success of enterprises. Using international enterprise-level data, we identify a surprising non-monotonicity in the effect of GVC on the likelihood of exit via initial public offerings (IPOs) or third party acquisitions. Enterprises that receive funding from both private venture capitalists (PVCs) and GVCs outperform benchmark enterprises financed purely by private venture capitalists if only a moderate fraction of funding comes from GVCs. However, enterprises underperform if a large fraction of funding comes from GVCs. Instrumental variable regressions suggest that endogeneity in the form of unobservable selection effects cannot account for these effects of GVC financing. The underperformance result appears to be largely driven by investments made in times when private venture capital is abundant. The outperformance result applies only to venture capital firms that are supported but not owned outright by governments.


Axis of Depression (Prof. Paul Krugman)

November 19, 2010

This time Prof. Paul Krugman hit the nail on the head. An excellent piece from yesterday in his column in The New York Times!

=============
Axis of Depression


By PAUL KRUGMAN
Published: November 18, 2010

What do the government of China, the government of Germany and the Republican Party have in common? They’re all trying to bully the Federal Reserve into calling off its efforts to create jobs. And the motives of all three are highly suspect. …
It’s no mystery why China and Germany are on the warpath against the Fed. Both nations are accustomed to running huge trade surpluses. But for some countries to run trade surpluses, others must run trade deficits — and, for years, that has meant us. The Fed’s expansionary policies, however, have the side effect of somewhat weakening the dollar, making U.S. goods more competitive, and paving the way for a smaller U.S. deficit. And the Chinese and Germans don’t want to see that happen.
For the Chinese government, by the way, attacking the Fed has the additional benefit of shifting attention away from its own currency manipulation, which keeps China’s currency artificially weak — precisely the sin China falsely accuses America of committing.
But why are Republicans joining in this attack?
Mr. Bernanke and his colleagues seem stunned to find themselves in the cross hairs. They thought they were acting in the spirit of none other than Milton Friedman, who blamed the Fed for not acting more forcefully during the Great Depression — and who, in 1998, called on the Bank of Japan to “buy government bonds on the open market,” exactly what the Fed is now doing.

Republicans, however, will have none of it, raising objections that range from the odd to the incoherent.
The odd: on Monday, a somewhat strange group of Republican figures — who knew that William Kristol was an expert on monetary policy? — released an open letter to the Fed warning that its policies “risk currency debasement and inflation.” These concerns were echoed in a letter the top four Republicans in Congress sent Mr. Bernanke on Wednesday. Neither letter explained why we should fear inflation when the reality is that inflation keeps hitting record lows.
And about dollar debasement: leaving aside the fact that a weaker dollar actually helps U.S. manufacturing, where were these people during the previous administration? The dollar slid steadily through most of the Bush years, a decline that dwarfs the recent downtick. Why weren’t there similar letters demanding that Alan Greenspan, the Fed chairman at the time, tighten policy?
Meanwhile, the incoherent: Two Republicans, Mike Pence in the House and Bob Corker in the Senate, have called on the Fed to abandon all efforts to achieve full employment and focus solely on price stability. Why? Because unemployment remains so high. No, I don’t understand the logic either.
So what’s really motivating the G.O.P. attack on the Fed? Mr. Bernanke and his colleagues were clearly caught by surprise, but the budget expert Stan Collender predicted it all. Back in August, he warned Mr. Bernanke that “with Republican policy makers seeing economic hardship as the path to election glory,” they would be “opposed to any actions taken by the Federal Reserve that would make the economy better.” In short, their real fear is not that Fed actions will be harmful, it is that they might succeed.
Hence the axis of depression. No doubt some of Mr. Bernanke’s critics are motivated by sincere intellectual conviction, but the core reason for the attack on the Fed is self-interest, pure and simple. China and Germany want America to stay uncompetitive; Republicans want the economy to stay weak as long as there’s a Democrat in the White House.
And if Mr. Bernanke gives in to their bullying, they may all get their wish.

When is economic growth good for the poor?

November 18, 2010

An interesting post by Lane Kenworthy on his blog (http://lanekenworthy.net). Hat tip to Mark Thoma (http://economistsview.typepad.com/)!

==========

When is economic growth good for the poor?

November 17, 2010

In a good society, the living standards of the least well-off rise over time.

One way to achieve that is rising redistribution: government steadily increases the share of the economy (the GDP) that it transfers to poor households. But there is a limit to this strategy. If the pie doesn’t increase in size, a country can redistribute until everyone has an equal slice but then no further improvement in incomes will be possible. For the absolute incomes of the poor to rise, we need economic growth.

We also need that growth to trickle down to the poor. Does it?

The following charts show what happened in the United States and Sweden from the late 1970s to the mid 2000s. On the vertical axes is the income of households at the tenth percentile of the distribution — near, though not quite at, the bottom. On the horizontal axes is GDP per capita. The data points are years for which there are cross-nationally comparable household income data.

Both countries enjoyed significant economic growth. But in the U.S. the incomes of low-end households didn’t improve much, apart from a brief period in the late 1990s. In Sweden growth was much more helpful to the poor.

In Austria, Belgium, Denmark, Finland, France, Ireland, the Netherlands, Norway, Spain, and the United Kingdom, the pattern during these years resembles Sweden’s. In Australia, Canada, Germany, Italy, and Switzerland it looks more like the American one. (More graphs here.)

What accounts for this difference in the degree to which economic growth has boosted the incomes of the poor? We usually think of trickle down as a process of rising earnings, via more work hours and higher wages. But in almost all of these countries (Ireland and the Netherlands are exceptions) the earnings of low-end households increased little, if at all, over time. Instead, as the next chart shows, it is increases in net government transfers — transfers received minus taxes paid — that tended to drive increases in incomes.

None of these countries significantly increased the share of GDP going to government transfers. What happened is that some nations did more than others to pass the fruits of economic growth on to the poor.

Trickle down via transfers occurs in various ways. In some countries pensions, unemployment compensation, and related benefits are indexed to average wages, so they tend to rise automatically as the economy grows. Increases in other transfers, such as social assistance, require periodic policy updates. The same is true of tax reductions for low-income households.

Should we bemoan the fact that employment and earnings aren’t the key trickle-down mechanism? No. At higher points in the income distribution they do play more of a role. But for the bottom ten percent there are limits to what employment can accomplish. Some people have psychological, cognitive, or physical conditions that limit their earnings capability. Others are constrained by family circumstances. At any given point in time some will be out of work due to structural or cyclical unemployment. And in all rich countries a large and growing number of households are headed by retirees.

Income isn’t a perfect measure of the material well-being of low-end households. We need to supplement it with information on actual living conditions, and researchers and governments now routinely collect such data. Unfortunately, they aren’t available far enough back in time to give us a reliable comparative picture of changes. For that, income remains our best guide. What the income data tell us is that the United States has done less well by its poor than many other affluent nations, because we have failed to keep government supports for the least well-off rising in sync with our GDP.

Quantitative Easing Explained (again)!

November 16, 2010

Here is an excellent explanation of the concept of quantitative easing, which the whole world has been hearing about over the past two years.

This blog has covered this concept twice before:

1) https://jccavalcanti.wordpress.com/2009/03/24/quantitative-easing-explained-again-afrouxamento-quantitativo-explicado-novamente/;

2) https://jccavalcanti.wordpress.com/2008/12/24/quantitative-easing-afrouxamento-quantitativo/

Today we bring one more video, below, explaining the second round of quantitative easing in the US!

 

Network Neutrality or Internet Innovation?

November 15, 2010

An interesting article!

=========

Network Neutrality or Internet Innovation?

Date: 2010-04
By: Yoo, Christopher S.
URL: http://d.repec.org/n?u=RePEc:reg:wpaper:578&r=ict

Over the past two decades, the Internet has undergone an extensive re-ordering of its topology that has resulted in increased variation in the price and quality of its services. Innovations such as private peering, multihoming, secondary peering, server farms, and content delivery networks have caused the Internet’s traditionally hierarchical architecture to be replaced by one that is more heterogeneous. Relatedly, network providers have begun to employ an increasingly varied array of business arrangements and pricing. This variation has been interpreted by some as network providers attempting to promote their self-interest at the expense of the public. In fact, these changes reflect network providers’ attempts to reduce cost, manage congestion, and maintain quality of service. Current policy proposals to constrain this variation risk harming these beneficial developments.

