Archive for July 2009

Forecasts vs mechanisms in economics

July 31, 2009

Interesting post, reproduced from Mark Thoma's blog!


Forecasts vs mechanisms in economics, by Chris Dillow:

This discussion between Edmund Conway and Andrew Lilico on the Today programme on the alleged crisis in economics seems to me to rest upon a misunderstanding of what economics is.

Conway says the crisis has been “an earthquake for economic thought” and Lilico says we need “new theories.” This, though, seems to regard economics as a settled but inadequate body of knowledge and theory. It’s not. It is instead a vast number of diverse insights. What’s more, all of the insights that help explain the current economic crisis were, in truth, well known to economists before 2007, for example:

  1. Risk cannot be simply described by a bell curve. But we learnt about tail risk on October 19 1987. And we learnt from the collapse of LTCM in 1998 that correlation risk, liquidity risk and counterparty risk are all significant.
  2. Assets can be mispriced. But we’ve known about bubbles for centuries – since at least 1637. Their existence does not disprove the efficient market hypothesis; as I’ve said, the EMH is not the rational investor hypothesis. Nor, contrary to Conway’s implicit claim, is the EMH inconsistent with the possibility that behaviour can be swayed by emotions; the EMH allows for the possibility of time-varying risk premia*
  3.  Long periods of economic stability can lead to greater risk-taking. We’ve known this since (at least) Hyman Minsky.
  4.  Banks can suffer catastrophic losses – which are correlated across banks. We learnt this – not for the first time – in the Latin American debt crisis of the early 80s and in the crises in Japan and the Nordic countries in the early 90s. Banking crises are a regular feature of even developed economies.
  5.  Institutions, such as banks, can be undermined by badly designed incentives.  But there’s a huge literature on the principal-agent problem.
The current crisis, then, has not thrown up much that economists didn’t know.

Instead, our problem is a different one. It’s that what we have are lots of mechanisms, capable of explaining why things happen and the links between them. What we don’t have are laws which generate predictions. In his book, Nuts and Bolts for the Social Sciences, Jon Elster stressed this distinction. The social sciences, he said:

Can isolate tendencies, propensities and mechanisms and show that they have implications for behaviour that are often surprising and counter-intuitive. What they are less able to do is to state necessary and sufficient conditions under which the various mechanisms are switched on.

This is precisely the problem economists had in 2007. We knew that there were mechanisms capable of generating disaster. What we didn’t know is whether these were switched on. The upshot is that, although we didn’t predict the crisis, we can more or less explain it after the fact. As Elster wrote:

Sometimes we can explain without being able to predict, and sometimes predict without being able to explain. True, in many cases one and the same theory will enable us to do both, but I believe that in the social sciences this is the exception rather than the rule.

The interesting question is: will it remain the exception? My hunch is that it will; economists will never be able to produce laws which yield systematically successful forecasts.

What’s more, I am utterly untroubled by this. The desire for such laws is as barmy as the medieval search for the philosopher’s stone. If you need to foresee the future, you are doing something badly wrong.

* The basic insight of efficient market theory is that you cannot out-perform the market except by taking extra risk. I am sick and tired of hearing people who still have to work for a living trying to deny this.

The End of Rational Economics

July 27, 2009

Interesting post from the online edition of the Harvard Business Review!

The End of Rational Economics

Your company has been operating on the premise that people—customers, employees, managers—make logical decisions. It’s time to abandon that assumption.

In 2008, a massive earthquake reduced the financial world to rubble. Standing in the smoke and ash, Alan Greenspan, the former chairman of the U.S. Federal Reserve once hailed as “the greatest banker who ever lived,” confessed to Congress that he was “shocked” that the markets did not operate according to his lifelong expectations. He had “made a mistake in presuming that the self-interest of organizations, specifically banks and others, was such that they were best capable of protecting their own shareholders.”

We are now paying a terrible price for our unblinking faith in the power of the invisible hand. We’re painfully blinking awake to the falsity of standard economic theory—that human beings are capable of always making rational decisions and that markets and institutions, in the aggregate, are healthily self-regulating. If assumptions about the way things are supposed to work have failed us in the hyperrational world of Wall Street, what damage have they done in other institutions and organizations that are also made up of fallible, less-than-logical people? And where do corporate managers, schooled in rational assumptions but who run messy, often unpredictable businesses, go from here?

We are finally beginning to understand that irrationality is the real invisible hand that drives human decision making. It’s been a painful lesson, but the silver lining may be that companies now see how important it is to safeguard against bad assumptions. Armed with the knowledge that human beings are motivated by cognitive biases of which they are largely unaware (a true invisible hand if there ever was one), businesses can start to better defend against foolishness and waste.

The emerging field of behavioral economics offers a radically different view of how people and organizations operate. In this article I will examine a small set of long-held business assumptions through a behavioral economics lens. In doing so I hope to show not only that companies can do a better job of making their products and services more effective, their customers happier, and their employees more productive but that they can also avoid catastrophic mistakes.

Behavioral Economics 101

Drawing on aspects of both psychology and economics, the operating assumption of behavioral economics is that cognitive biases often prevent people from making rational decisions, despite their best efforts. (If humans were comic book characters, we’d be more closely related to Homer Simpson than to Superman.) Behavioral economics eschews the broad tenets of standard economics, long taught as guiding principles in business schools, and examines the real decisions people make—how much to spend on a cup of coffee, whether or not to save for retirement, deciding whether to cheat and by how much, whether to make healthy choices in diet or sex, and so on. For example, in one study where people were offered a choice of a fancy Lindt truffle for 15 cents and a Hershey’s kiss for a penny, a large majority (73%) chose the truffle. But when we offered the same chocolates for one penny less each—the truffle for 14 cents and the kiss for nothing—only 31% of participants selected it. The word “free,” we discovered, is an immensely strong lure, one that can even turn us away from a better deal and toward the “free” one.
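The truffle-and-kiss result can be restated as simple arithmetic. Under standard theory, choices depend on surplus (value minus price); cutting both prices by one cent leaves the truffle's surplus advantage unchanged, so predicted choices should not move, yet observed choices flipped from 73% to 31%. A minimal sketch, using hypothetical private valuations (the study reports choice shares, not valuations):

```python
# Zero-price effect: standard theory predicts no change in choices when
# both prices fall by the same amount, because the surplus gap is constant.
# Valuations below are illustrative assumptions, not figures from the study.

def surplus(value_cents, price_cents):
    return value_cents - price_cents

# Hypothetical private valuations: truffle worth 27 cents, kiss worth 5.
TRUFFLE_VALUE, KISS_VALUE = 27, 5

for truffle_price, kiss_price in [(15, 1), (14, 0)]:
    gap = surplus(TRUFFLE_VALUE, truffle_price) - surplus(KISS_VALUE, kiss_price)
    print(f"prices ({truffle_price}c, {kiss_price}c): truffle advantage = {gap}c")
```

The surplus gap is identical in both conditions, which is exactly why the observed reversal is evidence of irrational weight placed on "free."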

For the past few decades, behavioral economics has been largely considered a fringe discipline—a somewhat estranged little cousin of standard economics. Though practitioners of traditional economics reluctantly admitted that people may behave irrationally from time to time, they have tended to stick to their theoretical guns. They have argued that experiments conducted by behavioral economists and psychologists, albeit interesting, do not undercut rational models because they are carried out under controlled conditions and without the most important regulator of rational behavior: the large, competitive environment of the market. Then, in October 2008, Greenspan made his confession. Belief in the ultimate rationality of humans, organizations, and markets crumbled, and the attendant dangers to business and public policy were fully exposed.

Unlike the FDA, for example, which forces medical practitioners and pharmaceutical companies to test their assumptions before sending treatments into the marketplace, no entity requires business (and also the public sector) to get at the truth of things. Accordingly, it’s up to firms to begin investigating basic beliefs about customers, employees, operations, and policies. When organizations acknowledge and anticipate irrational behavior, they can learn to offset it and avoid damaging results. Let’s take a closer look at a few examples.

The Dark Side of Teamwork

A few years ago, my colleagues and I found that most individuals, operating on their own and given the opportunity, will cheat—but just a little bit, all the while indulging in rationalization that allows them to live with themselves. (See “How Honest People Cheat,” HBR, February 2008.) We also found that the simple act of asking people to think of their ethical foundations—say, the Ten Commandments—or their own moral code before they had the opportunity to cheat eliminated the dishonesty.

Copyright © 2009 Harvard Business School Publishing Corporation. All rights reserved.

Too much networking?

July 25, 2009

I really liked this post, which Guy Kawasaki recommended on his own Twitter and which appeared on the Alltop blog, created by Kawasaki himself. The illustration is perfect!


 Too much networking?
Posted: Thursday, July 23, 2009 7:27 PM by Alan Boyle


[Illustration by Duane Hoffmann. Caption: Open-source communities may suffer from “an overabundance of connections,” an information policy researcher suggests in the journal Science.]

Are geeks guilty of groupthink? A network expert argues that less social networking would produce more radical innovation on the Internet.

“An overabundance of connections over which information can travel too cheaply can reduce diversity, foster groupthink, and keep radical ideas from taking hold,” Viktor Mayer-Schönberger, director of the Information + Innovation Policy Research Center at the National University of Singapore, writes in this week’s issue of the journal Science.

That may be one of the reasons why much of the open-source software currently being produced is rarely altered in anything more than an incremental manner, Mayer-Schönberger says.

“The basic point that I’m trying to make is … how do we get to the next stage of the Internet, the new-generation Internet, the radical innovation, rather than another dot release on the Firefox browser?” he told me today.

Mayer-Schönberger is focusing on the open-source software community because the perils of groupthink are well-known in the commercial world. Once a complicated piece of software catches hold in the market, there’s often a “lock-in effect” that freezes out radical changes in that software.

“Every radical change gives the users the opportunity to make a switch to a potential competitor’s product, and as a vendor, you don’t like that – except if you’re Apple,” Mayer-Schönberger said. (Apple can rely on a dedicated following that will be quick to adopt radical innovations, such as the iPhone, he explained.) 

Open-source software – like the Firefox Web browser, for example, or the Linux operating system – can be freely modified and redistributed by anyone, which would seem to encourage innovation. But Mayer-Schönberger invokes network theory to contend that the open-source community is so interconnected, using the very tools they helped develop, that fresh ideas don’t have as much time to develop before they’re assimilated (or disposed of) by fellow programmers.

Companies such as Apple, Google, IBM and Microsoft (which is a partner in the joint venture) get around the groupthink trap by creating incubators for research and innovation, modeled after Lockheed Martin’s storied “Skunk Works.” The key is to have limited linkages between the idea incubators and the larger enterprise, Mayer-Schönberger said.

The payoffs from the innovations that are allowed to hatch outweigh the costs of maintaining the incubators. Mayer-Schönberger said examples of such payoffs range from the original IBM PC and Apple Macintosh to the atom bomb (which was created through a government-funded incubator known as the Manhattan Project).

When there’s no incentive for developing unorthodox products or services, and when the network in charge of creating the main product remains highly interconnected, the lock-in effect is more likely to take hold.

The locked-in Internet … and Facebook
“The most prominent example is not commercial software, but the Internet, or more precisely the protocols underlying this dominant network infrastructure,” Mayer-Schönberger wrote in his Science policy paper. “It is too costly and risky for a commercial competitor to create and market a set of radically improved, but incompatible protocols. This is true for the peer-producing, open-source community as well.”

Facebook’s rise illustrates the pluses and minuses of open source, he told me. “By opening the API (application programming interface) they created a lot of room for experimentation,” he said. “That sealed the fate of MySpace and pushed Facebook forward.”

Today, there’s an entire software ecosystem that relies on Facebook’s open API (including Mafia Wars and “25 Random Things”), and that could lead to the lock-in effect. “Changing the API is much harder now because it has an ecosystem that lives off it,” Mayer-Schönberger said. “So it’s a double-edged sword, in a way.”

There’s a rule of thumb that says a network becomes more valuable as it adds more connections, but Mayer-Schönberger said that trend could bog down innovation. “It would be terrible if we reach a basically steady state in the open-source community where we have version 11.5.17 and we change to 11.5.18, and everyone thinks it’s a step forward,” he said.
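The rule of thumb Boyle alludes to is usually stated as Metcalfe's law: a network's value grows with the number of possible pairwise connections, which is quadratic in the number of members. A minimal sketch of that growth, which is what Mayer-Schönberger argues can eventually work against radical innovation:

```python
# Metcalfe-style rule of thumb: a network of n members has n*(n-1)/2
# possible pairwise connections, so "value" grows roughly quadratically.
def potential_links(n):
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n} members -> {potential_links(n)} possible links")
```

Mayer-Schönberger's point is that past some density, the marginal link stops adding value and starts homogenizing ideas instead.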

The prescription for geek groupthink
So what is to be done? On one level, the National Science Foundation is already fostering a “Skunk Works” Internet by supporting next-generation network development programs known as NetSE and GENI.

Mayer-Schönberger said governments could go further by offering incentives for the creation of smaller, competing development teams. This would reduce “their connectedness to the thinking of the status quo through their social networks,” he said. The idea sounds a little bit like an X Prize for information infrastructure.

Another strategy, aimed specifically at the open-source community, would be to break a big project into smaller components, and then let separate teams compete to deal with those components. “That is what propelled Firefox into the forefront,” Mayer-Schönberger told me. “When they broke (the browser program) into small modules, then they had competitive open-source projects on all those subcomponents.”

Mayer-Schönberger was reluctant to extend his analysis to individual behavior, but he said it might be worth your while to take a look at your own social networking. “Think hard about whether those 1,125 Facebook friends are really friends. Think about how many are hangers-on or chance encounters – and perhaps take them off,” he said.

You can build diversity into your own social networking by keeping a division between the various aspects of your life – for example, by using LinkedIn for professional contacts and Facebook for personal contacts.

“We now tend to converge our personal and professional lives, and that’s not necessarily a good idea,” Mayer-Schönberger said. “Having multiple and slightly overlapping networks is better than having one large converged network.”

Reality check
Does Mayer-Schönberger’s view make sense? I asked one of the pioneers in network science, Northeastern University’s Albert-Laszlo Barabasi, for a reality check. He told me the idea of reducing connectedness to encourage innovation was intriguing and provocative.

“I think many people sympathize with the idea that if you could start from scratch, you would have a much different Internet, and a better one,” he said.

However, the reality is that the present-day Internet has so much critical mass that there’d have to be a radical reason for introducing radical change. Maybe a nuclear war. Maybe a totally new communication medium. In any case, it’d have to be something big, Barabasi said.

“If you were to ask me, I think we will be using this Internet for quite a while,” Barabasi said. “If there will be a new Internet, it’s not going to emerge from a private company, because the Internet is just too big for that. … It may require serious social engineering to get away from the status quo.”

10 years of network theory
In the same issue of Science, Barabasi traces the research that’s been conducted into the nature of the Internet, the World Wide Web and other scale-free networks over the past 10 years. (That’s how long I’ve been writing about Barabasi and his colleagues.)

“I think the big thing that has happened is that networks are everywhere now. … They have pretty much invaded all fields of inquiry,” Barabasi told me. Other articles in Science discuss how network theory has been applied to cellular biology, biodiversity, economics and social-ecological systems, how technologies (and diseases) spread, how to fight terrorists and how to get along.

The amazing thing is that researchers are finding strong parallels between the workings of the cell and the workings of the Web. “Because they ended up being so similar to each other, the results that were obtained by studying the World Wide Web could be transferred to the study of the cell,” Barabasi said.

One of the questions surrounding the past decade of network theory may well be why scientists didn’t notice even earlier how much different types of networks had in common. “In a way, the computer made it possible to get large enough data sets to see these features,” Barabasi said. “It was the Internet that allowed biologists to share with each other so we actually had maps.”

Today, network theory can be applied to a wide range of questions: How do we avoid geek groupthink? Who will be our future business competitors? How does the brain work? Why does Ashton Kutcher have so many Twitter followers? (I think the last question is the biggest mystery of all.)

“I could not imagine 10 years ago how big networks would become,” Barabasi told me. “Which is probably a good thing.”

What went wrong with economics

July 24, 2009

The new issue of The Economist carries an article on the discipline of economics. It is a scathing critique of the field's reputation in light of the crisis the world now finds itself in.

The article has made a certain impact in the community of economists (see this blog from the field), though not as big as one might imagine (it could be argued that it is vacation season in the first world, so the economists are resting!).

What is certain is that, as a professional in the field, I cannot deny that there are serious problems. Still, hunting for culprits at this point would be like blaming meteorologists for failing to predict tsunamis or hurricanes!

In any case, there is much to be rethought in this discipline, above all in understanding how modern firms behave after the transformations they have recently undergone, one of which is their strong dependence on information, information systems, and information technologies. To ignore this fact is to keep making mistakes that could very well be corrected!


What went wrong with economics
Jul 16th 2009
From The Economist print edition
And how the discipline should change to avoid the mistakes of the past
Illustration by Jon Berkeley

OF ALL the economic bubbles that have been pricked, few have burst more spectacularly than the reputation of economics itself. A few years ago, the dismal science was being acclaimed as a way of explaining ever more forms of human behaviour, from drug-dealing to sumo-wrestling. Wall Street ransacked the best universities for game theorists and options modellers. And on the public stage, economists were seen as far more trustworthy than politicians. John McCain joked that Alan Greenspan, then chairman of the Federal Reserve, was so indispensable that if he died, the president should “prop him up and put a pair of dark glasses on him.”

In the wake of the biggest economic calamity in 80 years that reputation has taken a beating. In the public mind an arrogant profession has been humbled. Though economists are still at the centre of the policy debate—think of Ben Bernanke or Larry Summers in America or Mervyn King in Britain—their pronouncements are viewed with more scepticism than before. The profession itself is suffering from guilt and rancour. In a recent lecture, Paul Krugman, winner of the Nobel prize in economics in 2008, argued that much of the past 30 years of macroeconomics was “spectacularly useless at best, and positively harmful at worst.” Barry Eichengreen, a prominent American economic historian, says the crisis has “cast into doubt much of what we thought we knew about economics.”

In its crudest form—the idea that economics as a whole is discredited—the current backlash has gone far too far. If ignorance allowed investors and politicians to exaggerate the virtues of economics, it now blinds them to its benefits. Economics is less a slavish creed than a prism through which to understand the world. It is a broad canon, stretching from theories to explain how prices are determined to how economies grow. Much of that body of knowledge has no link to the financial crisis and remains as useful as ever.

And if economics as a broad discipline deserves a robust defence, so does the free-market paradigm. Too many people, especially in Europe, equate mistakes made by economists with a failure of economic liberalism. Their logic seems to be that if economists got things wrong, then politicians will do better. That is a false—and dangerous—conclusion.

Rational fools

These important caveats, however, should not obscure the fact that two central parts of the discipline—macroeconomics and financial economics—are now, rightly, being severely re-examined (see the accompanying articles). There are three main critiques: that macro and financial economists helped cause the crisis, that they failed to spot it, and that they have no idea how to fix it.

The first charge is half right. Macroeconomists, especially within central banks, were too fixated on taming inflation and too cavalier about asset bubbles. Financial economists, meanwhile, formalised theories of the efficiency of markets, fuelling the notion that markets would regulate themselves and financial innovation was always beneficial. Wall Street’s most esoteric instruments were built on these ideas.

But economists were hardly naive believers in market efficiency. Financial academics have spent much of the past 30 years poking holes in the “efficient market hypothesis”. A recent ranking of academic economists was topped by Joseph Stiglitz and Andrei Shleifer, two prominent hole-pokers. A newly prominent field, behavioural economics, concentrates on the consequences of irrational actions.

So there were caveats aplenty. But as insights from academia arrived in the rough and tumble of Wall Street, such delicacies were put aside. And absurd assumptions were added. No economic theory suggests you should value mortgage derivatives on the basis that house prices would always rise. Finance professors are not to blame for this, but they might have shouted more loudly that their insights were being misused. Instead many cheered the party along (often from within banks). Put that together with the complacency of the macroeconomists and there were too few voices shouting stop.

Blindsided and divided

The charge that most economists failed to see the crisis coming also has merit. To be sure, some warned of trouble. The likes of Robert Shiller of Yale, Nouriel Roubini of New York University and the team at the Bank for International Settlements are now famous for their prescience. But most were blindsided. And even worrywarts who felt something was amiss had no idea of how bad the consequences would be.

That was partly to do with professional silos, which limited both the tools available and the imaginations of the practitioners. Few financial economists thought much about illiquidity or counterparty risk, for instance, because their standard models ignore it; and few worried about the effect on the overall economy of the markets for all asset classes seizing up simultaneously, since few believed that was possible.

Macroeconomists also had a blindspot: their standard models assumed that capital markets work perfectly. Their framework reflected an uneasy truce between the intellectual heirs of Keynes, who accept that economies can fall short of their potential, and purists who hold that supply must always equal demand. The models that epitomise this synthesis—the sort used in many central banks—incorporate imperfections in labour markets (“sticky” wages, for instance, which allow unemployment to rise), but make no room for such blemishes in finance. By assuming that capital markets worked perfectly, macroeconomists were largely able to ignore the economy’s financial plumbing. But models that ignored finance had little chance of spotting a calamity that stemmed from it.

What about trying to fix it? Here the financial crisis has blown apart the fragile consensus between purists and Keynesians that monetary policy was the best way to smooth the business cycle. In many countries short-term interest rates are near zero and in a banking crisis monetary policy works less well. With their compromise tool useless, both sides have retreated to their roots, ignoring the other camp’s ideas. Keynesians, such as Mr Krugman, have become uncritical supporters of fiscal stimulus. Purists are vocal opponents. To outsiders, the cacophony underlines the profession’s uselessness.

Add these criticisms together and there is a clear case for reinvention, especially in macroeconomics. Just as the Depression spawned Keynesianism, and the 1970s stagflation fuelled a backlash, creative destruction is already under way. Central banks are busy bolting crude analyses of financial markets onto their workhorse models. Financial economists are studying the way that incentives can skew market efficiency. And today’s dilemmas are prompting new research: which form of fiscal stimulus is most effective? How do you best loosen monetary policy when interest rates are at zero? And so on.

But a broader change in mindset is still needed. Economists need to reach out from their specialised silos: macroeconomists must understand finance, and finance professors need to think harder about the context within which markets work. And everybody needs to work harder on understanding asset bubbles and what happens when they burst. For in the end economists are social scientists, trying to understand the real world. And the financial crisis has changed that world.

The Fed's Exit Strategy

July 23, 2009

A very good article by the Chairman of the US central bank (the Federal Reserve), Ben Bernanke, in the Wall Street Journal, on the Fed's exit strategy!


JULY 21, 2009, 8:13 A.M. ET
The Fed’s Exit Strategy
The depth and breadth of the global recession has required a highly accommodative monetary policy. Since the onset of the financial crisis nearly two years ago, the Federal Reserve has reduced the interest-rate target for overnight lending between banks (the federal-funds rate) nearly to zero. We have also greatly expanded the size of the Fed’s balance sheet through purchases of longer-term securities and through targeted lending programs aimed at restarting the flow of credit.

These actions have softened the economic impact of the financial crisis. They have also improved the functioning of key credit markets, including the markets for interbank lending, commercial paper, consumer and small-business credit, and residential mortgages.

My colleagues and I believe that accommodative policies will likely be warranted for an extended period. At some point, however, as economic recovery takes hold, we will need to tighten monetary policy to prevent the emergence of an inflation problem down the road. The Federal Open Market Committee, which is responsible for setting U.S. monetary policy, has devoted considerable time to issues relating to an exit strategy. We are confident we have the necessary tools to withdraw policy accommodation, when that becomes appropriate, in a smooth and timely manner.

But as the economy recovers, banks should find more opportunities to lend out their reserves. That would produce faster growth in broad money (for example, M1 or M2) and easier credit conditions, which could ultimately result in inflationary pressures—unless we adopt countervailing policy measures. When the time comes to tighten monetary policy, we must either eliminate these large reserve balances or, if they remain, neutralize any potential undesired effects on the economy.

To some extent, reserves held by banks at the Fed will contract automatically, as improving financial conditions lead to reduced use of our short-term lending facilities, and ultimately to their wind down. Indeed, short-term credit extended by the Fed to financial institutions and other market participants has already fallen to less than $600 billion as of mid-July from about $1.5 trillion at the end of 2008. In addition, reserves could be reduced by about $100 billion to $200 billion each year over the next few years as securities held by the Fed mature or are prepaid. However, reserves likely would remain quite high for several years unless additional policies are undertaken.

Even if our balance sheet stays large for a while, we have two broad means of tightening monetary policy at the appropriate time: paying interest on reserve balances and taking various actions that reduce the stock of reserves. We could use either of these approaches alone; however, to ensure effectiveness, we likely would use both in combination.

Congress granted us authority last fall to pay interest on balances held by banks at the Fed. Currently, we pay banks an interest rate of 0.25%. When the time comes to tighten policy, we can raise the rate paid on reserve balances as we increase our target for the federal funds rate.

Banks generally will not lend funds in the money market at an interest rate lower than the rate they can earn risk-free at the Federal Reserve. Moreover, they should compete to borrow any funds that are offered in private markets at rates below the interest rate on reserve balances because, by so doing, they can earn a spread without risk.

Thus the interest rate that the Fed pays should tend to put a floor under short-term market rates, including our policy target, the federal-funds rate. Raising the rate paid on reserve balances also discourages excessive growth in money or credit, because banks will not want to lend out their reserves at rates below what they can earn at the Fed.
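The floor mechanism Bernanke describes reduces to a one-line arbitrage condition. A minimal sketch, with rates expressed in basis points; the figures are illustrative, not actual Fed rates:

```python
# If a bank can borrow in the money market below the rate the Fed pays on
# reserves (IOR), it earns a riskless spread by redepositing the funds at
# the Fed. Banks bidding for such funds push market rates up toward IOR,
# which therefore acts as a floor on short-term rates.
def riskless_spread_bp(market_rate_bp, ior_rate_bp):
    """Spread (in basis points) from borrowing at the market rate and
    parking the funds at the Fed; positive means the arbitrage exists."""
    return ior_rate_bp - market_rate_bp

print(riskless_spread_bp(10, 25))  # positive: banks borrow, market rate rises
print(riskless_spread_bp(25, 25))  # zero: the floor has been reached
```

As the article notes below, the floor leaked in late 2008 partly because some large lenders (such as Fannie Mae and Freddie Mac) are ineligible to earn interest at the Fed and so lend below it.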

Considerable international experience suggests that paying interest on reserves effectively manages short-term market rates. For example, the European Central Bank allows banks to place excess reserves in an interest-paying deposit facility. Even as that central bank’s liquidity operations substantially increased its balance sheet, the overnight interbank rate remained at or above its deposit rate. In addition, the Bank of Japan and the Bank of Canada have also used their ability to pay interest on reserves to maintain a floor under short-term market rates.


Despite this logic and experience, the federal-funds rate has dipped somewhat below the rate paid by the Fed, especially in October and November 2008, when the Fed first began to pay interest on reserves. This pattern partly reflected temporary factors, such as banks’ inexperience with the new system.

However, this pattern appears also to have resulted from the fact that some large lenders in the federal-funds market, notably government-sponsored enterprises such as Fannie Mae and Freddie Mac, are ineligible to receive interest on balances held at the Fed, and thus they have an incentive to lend in that market at rates below what the Fed pays banks.

Under more normal financial conditions, the willingness of banks to engage in the simple arbitrage noted above will tend to limit the gap between the federal-funds rate and the rate the Fed pays on reserves. If that gap persists, the problem can be addressed by supplementing payment of interest on reserves with steps to reduce reserves and drain excess liquidity from markets—the second means of tightening monetary policy. Here are four options for doing this.

First, the Federal Reserve could drain bank reserves and reduce the excess liquidity at other institutions by arranging large-scale reverse repurchase agreements with financial market participants, including banks, government-sponsored enterprises and other institutions. Reverse repurchase agreements involve the sale by the Fed of securities from its portfolio with an agreement to buy the securities back at a slightly higher price at a later date.
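The reserve-draining effect of a reverse repo can be sketched as follows (stylized balance-sheet arithmetic with hypothetical figures, not actual Fed data):

```python
# Stylized effect of a reverse repurchase agreement: the Fed temporarily
# sells a security, the counterparty pays with reserve balances, and
# system-wide reserves fall by the amount of the sale until the Fed
# buys the security back. Figures are hypothetical.

def reverse_repo(reserves: float, amount: float) -> float:
    """Reserve balances remaining after the Fed drains `amount`."""
    if amount > reserves:
        raise ValueError("cannot drain more than outstanding reserves")
    return reserves - amount

reserves = 800.0                      # say, $800 billion of reserves
print(reverse_repo(reserves, 150.0))  # drain $150 billion -> 650.0
```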

Second, the Treasury could sell bills and deposit the proceeds with the Federal Reserve. When purchasers pay for the securities, the Treasury’s account at the Federal Reserve rises and reserve balances decline.

The Treasury has been conducting such operations since last fall under its Supplementary Financing Program. Although the Treasury’s operations are helpful, to protect the independence of monetary policy, we must take care to ensure that we can achieve our policy objectives without reliance on the Treasury.

Third, using the authority Congress gave us to pay interest on banks’ balances at the Fed, we can offer term deposits to banks—analogous to the certificates of deposit that banks offer their customers. Bank funds held in term deposits at the Fed would not be available for the federal funds market.

Fourth, if necessary, the Fed could reduce reserves by selling a portion of its holdings of long-term securities into the open market.

Each of these policies would help to raise short-term interest rates and limit the growth of broad measures of money and credit, thereby tightening monetary policy.

Overall, the Federal Reserve has many effective tools to tighten monetary policy when the economic outlook requires us to do so. As my colleagues and I have stated, however, economic conditions are not likely to warrant tighter monetary policy for an extended period. We will calibrate the timing and pace of any future tightening, together with the mix of tools, to best foster our dual objectives of maximum employment and price stability.

—Mr. Bernanke is chairman of the Federal Reserve.

Cade imposes record R$ 352 million fine on AmBev

July 22, 2009

I have just received the news below (via the network of ABEI, the Associação Brasileira de Economia Industrial). This decision by CADE could have major repercussions!


Cade imposes record R$ 352 million fine on AmBev
07/22/2009, 2:03 p.m.
Folha Online, Brasília
Cade (the Administrative Council for Economic Defense) on Wednesday unanimously ordered AmBev to pay a fine of R$ 352.6 million for harming competition in the beer market. It is the largest fine in the council’s history; until now, the largest had been the R$ 156 million imposed on Gerdau for forming a cartel in steel sales.
AmBev was convicted of demanding exclusivity for its products at points of sale and inhibiting the sale of other brands. Cade found that this harmed both the other beer brands and the consumer. The amount corresponds to 2% of the company’s gross revenue in 2003, the year before the proceeding was opened.
“Consumers are the most harmed. They will have neither the variety nor the prices they want,” said the case’s rapporteur, Fernando de Magalhães Furlan.
Owner of the Antarctica and Brahma brands, AmBev was convicted after proposing a settlement with Cade.
Furlan also criticized AmBev, saying that, as market leader, it bears responsibility for acts that affect the entire market. The company holds more than 70% of the beer market and produces, among other brands, Skol, Brahma and Antarctica.
“The defendant has always operated at the very edge of legality,” Furlan added.
The council also ordered AmBev to end the loyalty programs that require exclusivity, under penalty of a daily fine of R$ 53,200.
A report in Wednesday’s Folha noted that the company could avoid conviction only if a member of the council asked to review the case files.
The proceeding against AmBev was opened in 2004 after a complaint by the competitor Schincariol against the point-of-sale loyalty programs “Tô Contigo” and “Festeja”. Schincariol accused AmBev of offering bars, grocery stores and supermarkets exclusivity agreements, discounts and bonuses so that these points of sale would carry the company’s beverages, thereby harming sales of competing brands.
According to Schincariol, AmBev’s programs reduced the market share of the Nova Schin and Kaiser beers by 20% each, raising the share of AmBev’s brands by 8.5%, with Antarctica increasing its share by 56.37%.
According to a report by the SDE (Secretariat of Economic Law) of the Ministry of Justice, the body responsible for investigating the case, loyalty programs can harm competition, foreclose markets and raise rival brands’ costs. The secretariat says there is strong evidence that the programs harm competition, “hindering the entry of new breweries into the market and creating difficulties for the operation of established competitors through point-of-sale exclusivity”.
The SDE carried out several inspections and even commissioned an Ibope survey of points of sale to uncover irregularities. According to the agency, vendors who joined the program faced either an exclusivity requirement or limits on selling competing brands. In exchange, vendors could buy AmBev beers at lower prices. According to the report, the company even inspected the freezers of points of sale to check for competing brands.
According to the secretariat, the “Festeja” program required points of sale to cut the price of AmBev beers by at least R$ 0.11 on weekdays and R$ 0.21 on weekends, imposing lower profit margins on vendors.
The secretariat, along with the Seae (Secretariat for Economic Monitoring) of the Ministry of Finance and Cade’s own legal office, recommended that the council convict AmBev. The fine could have reached 30% of the company’s revenue.
AmBev argued that the loyalty programs were legal and benefited both the consumer and the point of sale. “The former, because they could buy products at a discount, and the latter, by receiving specific advertising material that would allow them to boost their sales,” the company stated.
According to AmBev, there was no sanction of any kind against points of sale that joined the programs and continued selling other brands. The company admitted, however, that in the first phase of the “Tô Contigo” program, if a point of sale carried other brands, “it was removed because it no longer fit the profile”.
The day before, AmBev had offered Cade a TCC (Termo de Compromisso de Cessação de Prática, a cessation-of-conduct agreement), but the council refused to sign the deal.

Krugman’s New Cross

July 22, 2009

The international financial crisis has broken several paradigms and introduced new practices. It has also mobilized many academics to think about new ways of interpreting what is happening.

Here is a rich macroeconomic debate that has left the academic journals (which are slow at diffusing knowledge, not to mention expensive!) and reached the blogosphere!

The post comes from the blog Economic Perspectives from Kansas City, in the US.


Sunday, July 19, 2009

Krugman’s New Cross Confirms It: Job Guarantee Policies Are Needed as Macroeconomic Stabilizers

By Daniel Negreiros Conceição

Krugman’s new explanation of business cycles in the form of a GSFB-PSFB (government sector financial balances – private sector financial balances) model (see his post here) has provoked a sort of call to arms among some dedicated Keynesian economists. Though those of us who have acquired the habit of looking at both sides of every nominal flow in the economy may see less novelty in Krugman’s new framework, the cross itself is still a very nice tool for presenting our arguments. And if Krugman’s cross actually succeeds in replacing the unfortunate IS-LM interpretation of Keynes’ General Theory, it will have moved the New Keynesian approach in a constructive new direction.

I will not rehash the details of the Krugman cross in this post, since other bloggers have already explained the cross and some of its implications (see, e.g., Rob Parenteau’s, Scott Fullwiler’s, and Bill Mitchell’s). Instead, I want to draw out a particular (and potentially revolutionary) implication of the model.

I believe that what Krugman was able to finally see, with the help of his cross, is nothing but the restatement of Keynes’s old paradox of thrift for a closed economy with a government sector able to run a deficit (though one can also represent the original paradox of thrift taught in intermediate macroeconomics if the GSFB – government sector financial balances – curve coincides with the horizontal zero deficit/surplus line, i.e. if governments run a balanced budget). Much like the simple exposition of the paradox of thrift for a closed economy without a government sector where aggregate savings are unavoidably equal to the size of aggregate investments, in an economy with a government sector the value of the private sector surplus is unavoidably determined by the government deficit. This means that when the desired level of private sector surplus rises as a share of each level of GDP, the tautological stubbornness of the accounting identity forces the adjustment to take place in one of two ways (or a combination of both). Either:

(1) The government deficit rises so that the private sector is able to achieve its new desired level of surplus at the current level of GDP


(2) GDP must fall until the GS (government sector) deficit reaches the new desired level of PS (private sector) surplus as a share of GDP

This is the exact equivalent to what Keynes argued with regard to an increase in the propensity to save. When people decide that they want to save more as a share of their incomes (or alternatively, when capitalists decide to increase the mark up over wage costs), for a given level of aggregate investment, aggregate incomes must fall so that the same aggregate savings becomes a greater share of the reduced aggregate income. As Rob Parenteau argued, “If Paul [Krugman] recalls his reading of Keynes’ General Theory (and he is to be applauded for being one of the few New Keynesians to actually read Keynes in the original), this is one of the reasons Keynes argues incomes adjust to close gaps between intended investment and planned saving.”
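The income adjustment Keynes described can be illustrated with the closed-economy identity that aggregate saving equals investment; the numbers below are purely illustrative:

```python
# Paradox of thrift in a closed economy with no government: aggregate
# saving must equal (fixed) investment I, so if the saving rate s rises,
# income Y must fall until s * Y = I holds again. Numbers are illustrative.

def equilibrium_income(investment: float, saving_rate: float) -> float:
    """Income level at which planned saving equals investment."""
    return investment / saving_rate

I = 100.0
y_before = equilibrium_income(I, 0.20)  # 20% saving rate -> Y = 500
y_after = equilibrium_income(I, 0.25)   # thriftier: 25%  -> Y = 400
# Income falls by 20%, yet aggregate saving is unchanged at 100.
print(y_before, y_after)
```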

It is time to play a little with the shapes of the curves. Here I believe that Krugman’s analysis is especially useful for explaining policy prescriptions advanced by Post Keynesian economists. First of all, in the GSFB-PSFB model I see none of the interdependency problems and stock/flow inconsistencies that exist in the IS-LM model (so far…). As many of the bloggers (Felipe Rezende here) have demonstrated, under the current set of economic policies in the US, GS deficits tend to rise substantially when GDP falls substantially. In Krugman’s framework this is represented as a relatively steep GSFB curve. The steeper the curve, the faster the deficit increases for a given reduction of GDP below a given threshold (the level of GDP where the GSFB curve crosses the zero surplus/deficit horizontal axis). However, it is also true that the same government that responds promptly to a fall in GDP by raising its deficit significantly also responds aggressively to an increase in GDP above the threshold by raising its surplus, since the curve is equally steep below the intercept and above it. Maybe we should represent the curve as having different slopes below and above the threshold, but this is less important, as it depends on more sophisticated assumptions about government policies. Prof. Krugman makes the interesting and convincing argument that today’s GSFB curve is much steeper than that of the early 1930s (meaning that GS deficits in the 30s did not rise significantly despite a great reduction in aggregate incomes), and that this has kept us from experiencing another Great Depression; yet we have just begun to experiment with the shapes of the curve.

First of all, let us look at extreme situations:

(1) In the absence of government deficits or surpluses in the economy (i.e. if governments were blindly committed to balanced budgets), the horizontal GSFB curve would coincide with the zero deficit/surplus line, and changes in the desired PS surplus would necessarily lead to aggregate income adjustments, so that a new equilibrium would be reached at the new intercept where the PS surplus was zero. Income fluctuations would be most violent under these conditions, since changes in spending preferences by the private sector would lead to full income adjustments. Note that any horizontal GSFB curve would produce this effect. In other words, macroeconomic instability is the result not of the unwillingness of governments to run a deficit (indeed, a horizontal GSFB curve above the horizontal axis could represent a GS deficit of any size, independent of GDP), but of governments not adjusting the size of their deficits to changes in spending propensities out of given incomes.

(2) What if governments decided to determine the level of GDP for the economy (hopefully at full employment)? Then governments could accommodate any change in the level of desired PS surplus by raising and reducing their deficit accordingly, so that GDP never needed to adjust. This would be represented by a vertical (or, more realistically, almost vertical, if some GDP adjustment still took place) GSFB curve at full-employment GDP. These are the kinds of policies we should be looking for as automatic stabilizers: policies that make the GSFB curve as close to vertical as possible at full-employment GDP. The most effective way to achieve this is an employer-of-last-resort policy, in which changes in the desired PS surplus at full-employment GDP that lead to falling aggregate expenditures and employment in the private sector would be largely compensated by increases in government transfers to the newly hired workers in the form of wages. Even though the deficit brought about by such a policy will not necessarily equal the new desired PS surplus at full-employment GDP exactly, so that the GSFB curve is completely vertical, in addition to other stabilizers such a policy will significantly raise the steepness of the curve.
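The role of the GSFB curve’s slope can be sketched in a toy model; the functional forms and parameters below are my own illustrative assumptions, not Krugman’s:

```python
# Toy GSFB-PSFB equilibrium. The government deficit responds to the GDP
# shortfall with slope b: deficit(Y) = b * (Y_star - Y). The private
# sector desires a surplus equal to share p of GDP: surplus = p * Y.
# The accounting identity deficit = surplus then pins down GDP:
#     b * (Y_star - Y) = p * Y   =>   Y = b * Y_star / (p + b)
# All parameters are illustrative assumptions.

def equilibrium_gdp(y_star: float, b: float, p: float) -> float:
    """GDP at which the government deficit equals the desired PS surplus."""
    return b * y_star / (p + b)

Y_STAR = 1000.0   # full-employment GDP
p = 0.05          # private sector wants a surplus of 5% of GDP

for b in (0.1, 1.0, 10.0):  # flatter -> steeper GSFB curve
    print(b, round(equilibrium_gdp(Y_STAR, b, p), 1))
# The steeper the curve (larger b), the closer GDP stays to Y_STAR:
# as b grows without bound, the curve becomes vertical and Y -> Y_STAR.
```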

Krugman has hit on something of great importance. I hope others will think through the implications of his approach and not allow the momentum to wane.

Inefficient allocation of resources for S&T and innovation in Brazil

July 21, 2009

A recent study points to the problem of inefficient allocation of resources for S&T in Brazil. The study can be found at .

The study reaches the following concluding remarks:

1) Although various segments of the S&T community cite the lack of resources as a factor that has held back the sector’s development in recent decades, the analysis of the 2002-2006 period shows that this argument has no technical basis, since revenue collected by the sectoral funds grew significantly over the interval.

2) Over the period analyzed, the federal government did not apply all of the resources collected by the S&T sectoral funds in their respective areas.

3) The Verde Amarelo (green-yellow) sectoral fund was an exception within S&T policy, with spending above its revenue collection from 2004 to 2006.

4) The budgetary and financial management of the sectoral funds was poor and underperforming.

It is therefore worth reflecting on the science and technology policy conducted by the federal government. As a contribution to future work, the following questions are posed:

1) Why do the current legal instruments, created to support S&T policies, fail to serve efficiently the segments for which they were created? Are the causes related to budgetary and financial management, or to the technical-bureaucratic management of these funds’ administrators?

2) Will the Lei do Bem, which allows companies to apply S&T resources directly, be a viable way to overcome the bias of the federal government’s budgetary and financial management?

Open Government & Innovations Conference

July 20, 2009

Tomorrow and the day after, the Open Government & Innovations Conference will take place in the US. Sponsored by the US Department of Defense, it has everything it takes to bring fresh ideas. Tim O’Reilly is one of the main speakers!


Facilitated by the Department of Defense, the Open Government & Innovations program will feature real-world case studies and insights presented by the government leaders who, themselves, are leveraging social media tools and Web 2.0 technologies to define and create a more open and innovative United States government.

Leaders from a wide array of government agencies will share how they are building collaborative alliances and penning new policies to achieve President Obama’s vision for a more transparent, participatory and collaborative government.

The Department of Defense’s leadership in defining critical themes and topics for OGI’s sessions and keynotes enables this conference to resonate uniquely with the government audience from the inside out.

The Open Government Innovations Conference (OGI) is an opportunity to collaboratively explore how government can use—and is already using—social media tools and social software to achieve President Obama’s call for government transparency, participation, collaboration and innovation.

On July 21 & 22, 2009, thought leaders from government and industry will convene at the Walter E. Washington Convention Center in Washington to share ideas and case studies about how federal, state and local government can use emerging technologies to create a more efficient and effective government—Government 2.0 by:
o Collaborating across government agencies
o Engaging citizens
o Partnering with industry

The old new industrial policy

July 18, 2009

Below is an excellent article published yesterday in the newspaper Valor Econômico. It is a forceful yet diplomatic critique, and above all an intellectually rigorous one, dismantling the main arguments of those who today blindly defend “industrial policy” in Brazil!


The old new industrial policy

Without clearly defined objectives, one runs the risk of enriching a few
Mansueto Almeida
If we are going to subsidize sectors in which we are already competitive, as is currently the case, we do not need industrial policy

After the retreat of the state from promoting productive activities that marked the 1990s, the decade identified with the Washington Consensus, the current decade is witnessing the strong return of the state to the promotion of economic activity through what are called industrial policies. Today, all Latin American countries adopt some type of industrial policy, to differing degrees.

These policies range from incentives for the growth of micro and small firms in the same activity (clusters, or APLs), through innovation incentives, to policies aimed at consolidating sectors and forming large business groups. Nowadays, favoring industrial policy is modern, and opposing it is backward, a sign of excessive trust in the benevolent hand of the market that, according to some, led us to the current crisis.

The problem is that the policies adopted by the Brazilian government in practice do not match those widely announced in official documents; there are no indicators by which these policies can be evaluated; and, in many cases, the measures adopted contradict the very definition of “industrial policy”. Consider some examples of what is happening in Brazil.

First, countries that adopt industrial policy aim to foster the development of new sectors and create comparative advantages. If we are going to subsidize sectors in which we are already competitive, we do not need industrial policy. Yet more than half of BNDES lending goes to low- and medium-low-technology sectors, in which we are already competitive. Ipea data show that, in 2007, 60% (R$ 15.2 billion) of BNDES loans to industry went to low- and medium-low-technology sectors, up from 46.5% in 2002. In 2008, of the ten largest BNDES loans to industry, eight went to meatpackers, agribusiness and ethanol mills.

Second, besides mainly subsidizing sectors in which we are already competitive, BNDES has acted aggressively to consolidate several sectors of the economy by supporting mergers and acquisitions. The book by MIT economist Alice Amsden (The Rise of “The Rest”, 2001) shows, on page 200, that at the end of the 1980s Brazil was the only country among the industrialized emerging economies without a single domestic private industrial group among those countries’ 50 largest private groups. Our large private groups were all in finance and construction. According to that analysis, Brazil suffered because, unlike countries such as South Korea, Brazilian firms lacked the scale to diversify and transfer technology across firms in different sectors belonging to the same business group. But even in South Korea, support for the formation of large private industrial groups was tied to a requirement to diversify into non-traditional industrial sectors. In Brazil, one gets the impression that we are merely creating bigger meatpackers and sugarcane mills, as if the end goal were simply to be big.

Third, the industrial policy literature shows that, to minimize the risk of this kind of policy enriching a few at the expense of many, it is advisable to have clear objectives and an adequate mix of incentives and punishment mechanisms (the carrot and the stick); see Alice Amsden (Asia’s Next Giant, 1989). Other authors, such as Harvard professor Dani Rodrik, prefer to stress that the government must know when to exit, when the (new) subsidized activities fail to show the expected results. In none of the Brazilian industrial policy documents is it clear what criteria could lead to the discontinuation of incentives: we want to reward entrepreneurship without the burden of any quid pro quo. Some might argue that the Productive Development Policy (PDP) has targets, but those targets do not indicate when a business group would lose the incentives granted by the state. We are repeating past mistakes.

Finally, the countries that adopted industrial policies, such as South Korea, Japan and Taiwan, had in the public sector an elite team that engaged with the private sector without letting itself be captured by the subsidized firms, a process the sociologist Peter Evans called “autonomy and partnership”. Yet today the sectoral ministries are full of brilliant but very young staff who have never set foot in a factory. Autonomous, but without the experience to be “partners”. The greatest absurdity was the creation of an industrial development agency (ABDI) outside the government to coordinate the government’s own industrial policy. Since it does not in fact do so, ABDI has become just another research body.

After reflecting on all the points above, I realized that industrial policy is, in fact, evaluated in the bars of Brasília. If you know staff at BNDES or the ministries, between one glass of beer and the next you will discover that ABDI was proposed as a government body tied to the Casa Civil; that there is not a single official document explaining BNDES’s strategy of building large private groups in “traditional” sectors; that the R$ 10-billion-plus program to revive Brazil’s shipbuilding industry is strongly questioned by BNDES staff and even by some of its directors; and that nobody really knows how industrial policy is evaluated. Unfortunately, since we have no way to monitor industrial policy, its success depends on the wisdom of a few well-intentioned officials. Why doesn’t the government take advantage of the current clamor for greater transparency and begin to disclose its actual industrial policy more clearly?
Mansueto Almeida is an economist and a Planning and Research Officer at Ipea.
